Dataset columns: aid (string, lengths 9 to 15); mid (string, lengths 7 to 10); abstract (string, lengths 78 to 2.56k); related_work (string, lengths 92 to 1.77k); ref_abstract (dict).
1908.04933
2968171141
Re-Pair is a grammar compression scheme with favorable compression rates. The computation of Re-Pair, however, comes at the cost of maintaining large frequency tables, which makes it hard to compute Re-Pair on large-scale data sets. As a solution to this problem we present, given a text of length @math whose characters are drawn from an integer alphabet, an @math time algorithm computing Re-Pair in @math bits of space including the text space, where @math is the number of terminals and non-terminals. The algorithm works in the restore model, supporting the recovery of the original input in the time needed for the Re-Pair computation with @math additional bits of working space. We give variants of our solution working in parallel or in the external memory model.
Re-Pair Computation Re-Pair is a grammar proposed by , who gave an algorithm computing it in expected linear time with @math words of working space, where @math is the number of non-terminals (produced by Re-Pair). This space requirement was improved by , who presented a linear-time algorithm taking @math words on top of the rewriteable text space for a constant @math with @math . Subsequently, they improved their algorithm in @cite_6 to include the text space within the @math words of working space. However, they assume that the alphabet size @math is constant and @math , where @math is the machine word size. They also provide a solution for @math running in expected linear time. Recently, showed how to convert an arbitrary grammar (representing a text) into the Re-Pair grammar in compressed space, i.e., without decompressing the text. Combined with a grammar compression that can process the text in compressed space in a streaming fashion, this result leads to the first Re-Pair computation in compressed space.
{ "cite_N": [ "@cite_6" ], "mid": [ "2610200252" ], "abstract": [ "Re-Pair is an efficient grammar compressor that operates by recursively replacing high-frequency character pairs with new grammar symbols. The most space-efficient linear-time algorithm computing Re-Pair uses @math words on top of the re-writable text (of length @math and stored in @math words), for any constant @math ; in practice however, this solution uses complex sub-procedures preventing it from being practical. In this paper, we present an implementation of the above-mentioned result making use of more practical solutions; our tool further improves the working space to @math words (text included), for some small constant @math . As a second contribution, we focus on compact representations of the output grammar. The lower bound for storing a grammar with @math rules is @math bits, and the most efficient encoding algorithm in the literature uses at most @math bits and runs in @math time. We describe a linear-time heuristic maximizing the compressibility of the output Re-Pair grammar. On real datasets, our grammar encoding uses---on average---only @math more bits than the information-theoretic minimum. In half of the tested cases, our compressor improves the output size of 7-Zip with maximum compression rate turned on." ] }
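For intuition about what these algorithms compute, the following naive sketch repeatedly replaces the most frequent adjacent pair of symbols with a fresh non-terminal. It illustrates the Re-Pair grammar construction itself, not the space-efficient or compressed-space algorithms discussed above; the function and variable names are our own.

from collections import Counter

def naive_repair(text):
    """Naive Re-Pair: repeatedly replace the most frequent adjacent pair
    with a new non-terminal until no pair occurs at least twice."""
    seq = list(text)                 # working sequence of terminals/non-terminals
    rules = {}                       # non-terminal -> (left symbol, right symbol)
    next_symbol = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:                 # Re-Pair stops when every pair is unique
            break
        nt = ('N', next_symbol)      # fresh non-terminal
        next_symbol += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):          # replace all non-overlapping occurrences
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

start_sequence, grammar = naive_repair("abracadabra abracadabra")
print(start_sequence, grammar)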
1908.05004
2967978532
The subject of this report is the re-identification of individuals in the Myki public transport dataset released as part of the Melbourne Datathon 2018. We demonstrate the ease with which we were able to re-identify ourselves, our co-travellers, and complete strangers; our analysis raises concerns about the nature and granularity of the data released, in particular the ability to identify vulnerable or sensitive groups.
A common theme is that a remarkably small number of distinct points of information is necessary to make an individual unique---whenever one person's information is linked together into a detailed record of their events, a few known events are usually enough to identify them. De Montjoye @cite_5 showed that 80% were unique based on 3 points of time and location, even when neither times nor places were very precisely given.
{ "cite_N": [ "@cite_5" ], "mid": [ "2115240023" ], "abstract": [ "We study fifteen months of human mobility data for one and a half million individuals and find that human mobility traces are highly unique. In fact, in a dataset where the location of an individual is specified hourly, and with a spatial resolution equal to that given by the carrier's antennas, four spatio-temporal points are enough to uniquely identify 95% of the individuals. We coarsen the data spatially and temporally to find a formula for the uniqueness of human mobility traces given their resolution and the available outside information. This formula shows that the uniqueness of mobility traces decays approximately as the 1/10 power of their resolution. Hence, even coarse datasets provide little anonymity. These findings represent fundamental constraints to an individual's privacy and have important implications for the design of frameworks and institutions dedicated to protect the privacy of individuals." ] }
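To make the notion of uniqueness concrete, a toy estimate along these lines can be computed as follows; the synthetic traces, names, and parameters are invented for illustration and do not reproduce the methodology of @cite_5.

import random

def uniqueness(traces, p, trials=200):
    """Estimate the fraction of individuals uniquely identified by p known
    (location, hour) points drawn from their own trace."""
    unique_hits = 0
    people = list(traces)
    for _ in range(trials):
        person = random.choice(people)
        known = set(random.sample(sorted(traces[person]), p))
        # count how many individuals' traces contain all p known points
        matches = sum(1 for t in traces.values() if known <= t)
        unique_hits += (matches == 1)
    return unique_hits / trials

# toy data: 1000 people, 20 events each, drawn from 50 stops x 24 hours
random.seed(0)
traces = {i: {(random.randrange(50), random.randrange(24)) for _ in range(20)}
          for i in range(1000)}
for p in (2, 3, 4):
    print(p, uniqueness(traces, p))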
1908.05055
2968345977
In this paper we present new optimization formulations for maximizing the network lifetime in wireless mesh networks performing data aggregation and dissemination for machine-to-machine communication in the Internet of Things. We focus on heterogeneous networks in which multiple applications co-exist and nodes may take on different roles for different applications. Moreover, we address network reconfiguration as a means to increase the network lifetime, in keeping with the current trend towards software defined networks and network function virtualization. To test our optimization formulations, we conducted a numerical study using randomly-generated mesh networks from 10 to 30 nodes, and showed that the network lifetime can be increased using network reconfiguration by up to 75% over a single, minimal-energy configuration. Further, our solutions are feasible to implement in practical scenarios: only a few configurations are needed, thus requiring little storage for a standalone network, and the synchronization and signalling needed to switch configurations is low relative to each configuration's operating time.
In @cite_2 , we considered the problem of data aggregation and dissemination in IoT networks serving, for example, monitoring, sensing, or machine control applications. A key aspect of the IoT that differentiates it from classical wireless sensor networks (WSNs) is its heterogeneity. We therefore considered cases where nodes may take on different roles (for example, sensors, destinations, or transit nodes) for different applications, and where multiple applications with different demands may be present in the network simultaneously. Moreover, these demands can be more general than only collecting data and forwarding it to a single sink, as is usually the case for WSNs. Rather, data may be processed within the network (we take the specific case of aggregation), and may be disseminated to multiple sinks via multicast transmissions.
{ "cite_N": [ "@cite_2" ], "mid": [ "2786403027" ], "abstract": [ "Established approaches to data aggregation in wireless sensor networks (WSNs) do not cover the variety of new use cases developing with the advent of the Internet of Things (IoT). In particular, the current push toward fog computing, in which control, computation, and storage are moved to nodes close to the network edge, induces a need to collect data at multiple sinks, rather than the single sink typically considered in WSN aggregation algorithms. Moreover, for machine-to-machine communication scenarios, actuators subscribing to sensor measurements may also be present, in which case data should be not only aggregated and processed in-network but also disseminated to actuator nodes. In this paper, we present mixed-integer programming formulations and algorithms for the problem of energy-optimal routing and multiple-sink aggregation, as well as joint aggregation and dissemination, of sensor measurement data in IoT edge networks. We consider optimization of the network for both minimal total energy usage, and min-max per-node energy usage. We also provide a formulation and algorithm for throughput-optimal scheduling of transmissions under the physical interference model in the pure aggregation case. We have conducted a numerical study to compare the energy required for the two use cases, as well as the time to solve them, in generated network scenarios with varying topologies and between 10 and 40 nodes. Although aggregation only accounts for less than 15% of total energy usage in all cases tested, it provides substantial energy savings. Our results show more than 13 times greater energy usage for 40-node networks using direct, shortest-path flows from sensors to actuators, compared with our aggregation and dissemination solutions." ] }
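To illustrate why reconfiguration helps, here is a minimal linear-programming sketch under our own simplifying assumptions: a handful of pre-computed configurations, a constant per-node power draw in each configuration, and per-node battery budgets. It is not the mixed-integer formulation of the paper; it only shows that splitting operating time across configurations can outlive the best single configuration.

import numpy as np
from scipy.optimize import linprog

# energy[c][n]: power draw of node n while configuration c is active
energy = np.array([
    [3.0, 1.0, 2.0],   # configuration 0
    [1.0, 3.0, 2.0],   # configuration 1
])
battery = np.array([100.0, 100.0, 100.0])  # initial energy per node

# maximize sum_c t_c  <=>  minimize -sum_c t_c
# subject to sum_c energy[c][n] * t_c <= battery[n] for every node n, and t_c >= 0
res = linprog(c=-np.ones(energy.shape[0]),
              A_ub=energy.T, b_ub=battery,
              bounds=[(0, None)] * energy.shape[0])
print("operating time per configuration:", res.x)
print("lifetime with reconfiguration:", res.x.sum())

# lifetime of the best single configuration, for comparison
single = (battery / energy).min(axis=1).max()
print("lifetime with the best single configuration:", single)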
Network lifetime has been studied extensively in the context of WSNs since the early 2000s. A full review of the literature in this area is therefore beyond the scope of this paper; a recent survey can be found in @cite_20 . We will instead focus on the recent work that is most relevant to the current paper.
{ "cite_N": [ "@cite_20" ], "mid": [ "2571303926" ], "abstract": [ "Emerging technologies, such as the Internet of things, smart applications, smart grids and machine-to-machine networks stimulate the deployment of autonomous, self-configuring, large-scale wireless sensor networks (WSNs). Efficient energy utilization is crucially important in order to maintain a fully operational network for the longest period of time possible. Therefore, network lifetime (NL) maximization techniques have attracted a lot of research attention owing to their importance in terms of extending the flawless operation of battery-constrained WSNs. In this paper, we review the recent developments in WSNs, including their applications, design constraints and lifetime estimation models. Commencing with the portrayal of the rich variety of NL definitions and design objectives used for WSNs, the family of NL maximization techniques is introduced and some design guidelines with examples are provided to show the potential improvements of the different design criteria." ] }
Numerous different definitions of network lifetime have been adopted in the literature @cite_20 . Some define the lifetime as expiring at the instant a certain number (possibly as low as one) or proportion of nodes deplete their batteries, when the first data collection failure occurs, or when the specific node with the highest consumption rate runs out of energy. In @cite_20 , these definitions are classified into four categories depending on whether they are based on node lifetime, coverage and connectivity, transmission, or a combination of parameters.
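The difference between such definitions can be illustrated with a small helper that evaluates several lifetime criteria on the same per-node battery depletion times; the criterion names and thresholds below are our own illustrative choices, not the survey's taxonomy.

def lifetime(depletion_times, criterion="first_node", fraction=0.5):
    """Return the network lifetime for one run, given the time at which
    each node depletes its battery, under a chosen lifetime definition."""
    times = sorted(depletion_times)
    if criterion == "first_node":        # lifetime ends when any node dies
        return times[0]
    if criterion == "fraction_alive":    # ends when `fraction` of the nodes have died
        k = max(1, int(len(times) * fraction))
        return times[k - 1]
    if criterion == "last_node":         # ends only when every node has died
        return times[-1]
    raise ValueError(criterion)

depletion = [40.0, 55.0, 70.0, 90.0, 120.0]
for crit in ("first_node", "fraction_alive", "last_node"):
    print(crit, lifetime(depletion, crit))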
However, a problem with many of these definitions is that they are not application-centric. In practice, whether or not a network is functional depends on the specific application or applications which it serves. Some applications may require all nodes in the network to have remaining energy, while others may continue to operate correctly with only a few nodes working. The lifetime also depends on the capabilities of the network. For example, if the network can be reconfigured, the lifetime may be extended by switching configurations. This can be facilitated by the use of software defined networking @cite_22 , as well as support from cloud services that are capable of performing even demanding calculations to determine the best network configuration at any given time, without incurring an energy cost in the end devices.
{ "cite_N": [ "@cite_22" ], "mid": [ "2518827842" ], "abstract": [ "Network is dynamic and requires update in the operation. However, many confusions and problems can be caused by careless schedule in the update process. Although the problem has been investigated for many years in traditional networks where the control plane is distributed, software defined networking (SDN) brings new opportunities and solutions to this problem by the separation of control and data plane, as well as the centralized control. This paper makes a survey on the problems caused by network update, including forwarding loop, forwarding black hole, link congestion, network policy violation, etc., as well as the state-of-the-art SDN solutions to these problems. Furthermore, we summarize the network configuration strength and discuss the open issues of network update in the SDN paradigm." ] }
This is the approach we adopt in this paper, and we define valid configurations based on the demands of the applications present in the network along with the roles the various nodes play in these demands. As such, we will adopt a general definition of the network lifetime as the total time in which the network is operational. Since we consider a class of applications with data streams as their demands, this is most similar to the definition used in @cite_15 , where the network lifetime was defined as the number of sensory information task cycles achieved until the network ceases to be fully operational.
{ "cite_N": [ "@cite_15" ], "mid": [ "2160450532" ], "abstract": [ "A critical aspect of applications with wireless sensor networks is network lifetime. Power-constrained wireless sensor networks are usable as long as they can communicate sensed data to a processing node. Sensing and communications consume energy, therefore judicious power management and sensor scheduling can effectively extend network lifetime. To cover a set of targets with known locations when ground access in the remote area is prohibited, one solution is to deploy the sensors remotely, from an aircraft. The lack of precise sensor placement is compensated by a large sensor population deployed in the drop zone, that would improve the probability of target coverage. The data collected from the sensors is sent to a central node (e.g. cluster head) for processing. In this paper we propose an efficient method to extend the sensor network life time by organizing the sensors into a maximal number of set covers that are activated successively. Only the sensors from the current active set are responsible for monitoring all targets and for transmitting the collected data, while all other nodes are in a low-energy sleep mode. By allowing sensors to participate in multiple sets, our problem formulation increases the network lifetime compared with related work [M. ], that has the additional requirements of sensor sets being disjoint and operating equal time intervals. In this paper we model the solution as the maximum set covers problem and design two heuristics that efficiently compute the sets, using linear programming and a greedy approach. Simulation results are presented to verify our approaches." ] }
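The set-cover scheduling idea of @cite_15 can be sketched as a greedy heuristic: repeatedly assemble a cover of all targets from sensors that still have energy and activate it for one time unit. This simplified version (unit activation costs, our own names) illustrates the mechanism only, not the linear-programming heuristic of the cited paper.

def greedy_set_covers(coverage, energy):
    """coverage[s] = set of targets sensor s can monitor; energy[s] = number of
    time units sensor s can be active. Returns the number of covers scheduled."""
    targets = set().union(*coverage.values())
    lifetime = 0
    while True:
        uncovered, cover = set(targets), []
        while uncovered:
            # pick the live sensor covering the most still-uncovered targets
            best = max((s for s in coverage if energy[s] > 0),
                       key=lambda s: len(coverage[s] & uncovered),
                       default=None)
            if best is None or not coverage[best] & uncovered:
                return lifetime          # the targets can no longer be fully covered
            cover.append(best)
            uncovered -= coverage[best]
        for s in cover:                  # activate this cover for one time unit
            energy[s] -= 1
        lifetime += 1

coverage = {0: {"t1", "t2"}, 1: {"t2", "t3"}, 2: {"t1", "t3"}, 3: {"t1", "t2", "t3"}}
energy = {0: 2, 1: 2, 2: 2, 3: 2}
print(greedy_set_covers(coverage, energy))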
Some work has been performed on network lifetime for networks with heterogeneous nodes, but only in a rather limited sense. For example, there is work based on the LEACH clustering protocol @cite_16 @cite_18 , where each node may be either an ordinary sensor node or a cluster head at different times. Examples of variations on LEACH that improve the network lifetime include @cite_7 , @cite_17 and @cite_10 , while @cite_12 presents a clustering routing protocol that considers both network lifetime and coverage. In @cite_4 , the nodes are also heterogeneous; however, they may only be of two types: sensor nodes and relay nodes. This is also the case in @cite_13 , where network lifetime is defined as the time until the first node depletes its battery, and (unicast) routing is then optimised for each traffic flow to reach the sink.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_4", "@cite_7", "@cite_16", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2909223649", "", "2050266688", "2913938994", "2106335692", "2061879708", "2800055840", "2219395474" ], "abstract": [ "The severe nature of underwater channel poses a great challenge for prolonging underwater acoustic sensor networks (UASNs) lifetime and achieving a reliable communication performance. Traditional approaches to improve the reliability such as automatic repeat request (ARQ) negatively affect the network lifetime (NL) due to energy dissipation caused by ARQ retransmission. A forward error correction (FEC) technique called fountain codes (FCs) can solve the energy efficiency problem of ARQ by transmitting both the original packet and some redundant packets to ensure a targeted reliability with few or no retransmissions. In this paper, we investigate performances of both traditional ARQ- and FC-based FEC methods in terms of NL, end-to-end delay, energy consumption, and frame error rate (FER) for UASNs. In this context, we abstract energy dissipation characteristics of conventional ARQ- and FC-based FEC method at the link-layer. We propose an integer linear programming (ILP) framework that maximizes the NL, which is operated on top of developed link-layer energy consumption models. Our results reveal that FC-based FEC methods can prolong the NL at a minimum of 16% while end-to-end delay, energy consumption, and FER can be reduced at least by 11%, 14%, and 9% as compared to classical ARQ, respectively.", "", "Abstract Energy is one of the scarcest resources in wireless sensor network (WSN). One fundamental way of conserving energy is judicious deployment of sensor nodes within the network area so that energy flow remains balanced throughout the network. This avoids the problem of occurrence of ‘energy holes’ and ensures prolonged network lifetime. We have first investigated the problem for enhancing network lifetime using homogeneous sensor nodes. From our observation it is revealed that energy imbalance in WSN occurs due to relaying of data from different parts of the network towards sink. So for improved energy balance instead of using only sensor nodes it is desirable to deploy relay nodes in addition to sensor nodes to manage such imbalance. We have also developed a location-wise pre-determined heterogeneous node deployment strategy based on the principle of energy balancing derived from this analysis, leading to an enhancement of network lifetime. Exhaustive simulation is performed primarily to measure the extent of achieving our design goal of enhancing network lifetime while attaining energy balancing and maintaining coverage. The simulation results also show that our scheme does not compromise with other network performance metrics such as end-to-end delay, packet loss, throughput while achieving the design goal. Finally all the results are compared with two competing schemes and the results confirm our scheme's supremacy in terms of both design performance metrics as well as network performance metrics.", "", "Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks.
Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.", "A suitable clustering algorithm for grouping sensor nodes can increase the energy efficiency of WSNs. However, clustering requires additional overhead, such as cluster head selection and assignment, and cluster construction. This paper proposes a new regional energy aware clustering method using isolated nodes for WSNs, called Regional Energy Aware Clustering with Isolated Nodes (REAC-IN). In REAC-IN, CHs are selected based on weight. Weight is determined according to the residual energy of each sensor and the regional average energy of all sensors in each cluster. Improperly designed distributed clustering algorithms can cause nodes to become isolated from CHs. Such isolated nodes communicate with the sink by consuming an excess amount of energy. To prolong network lifetime, the regional average energy and the distance between sensors and the sink are used to determine whether the isolated node sends its data to a CH node in the previous round or to the sink. The simulation results of the current study revealed that REAC-IN outperforms other clustering algorithms.", "Majority of wireless sensor networks (WSNs) clustering protocols in literature have focused on extending network lifetime and little attention has been paid to the coverage preservation as one of the QoS requirements along with network lifetime. In this paper, an algorithm is proposed to be integrated with clustering protocols to improve network lifetime as well as preserve network coverage in heterogeneous wireless sensor networks (HWSNs) where sensor nodes can have different sensing radii and energy attributes. The proposed algorithm works in a proactive way to preserve network coverage and extend network lifetime by efficiently leveraging mobility to optimize the average coverage rate using only the nodes that are already deployed in the network. Simulations are conducted to validate the proposed algorithm by showing improvement in network lifetime and enhanced full coverage time with less energy consumption.", "Wireless sensor network (WSN) brings a new paradigm of real-time embedded systems with limited computation, communication, memory, and energy resources that are being used for huge range of applications where the traditional infrastructure-based network is mostly infeasible. The sensor nodes are densely deployed in a hostile environment to monitor, detect, and analyze the physical phenomenon and consume considerable amount of energy while transmitting the information. It is impractical and sometimes impossible to replace the battery and to maintain longer network life time.
So, there is a limitation on the lifetime of the battery power and energy conservation is a challenging issue. Appropriate cluster head (CH) election is one such issue, which can reduce the energy consumption dramatically. Low energy adaptive clustering hierarchy (LEACH) is the most famous hierarchical routing protocol, where the CH is elected in rotation basis based on a probabilistic threshold value and only CHs are allowed to send the information to the base station (BS). But in this approach, a super-CH (SCH) is elected among the CHs who can only send the information to the mobile BS by choosing suitable fuzzy descriptors, such as remaining battery power, mobility of BS, and centrality of the clusters. Fuzzy inference engine (Mamdani’s rule) is used to elect the chance to be the SCH. The results have been derived from NS-2 simulator and show that the proposed protocol performs better than the LEACH protocol in terms of the first node dies, half node alive, better stability, and better lifetime." ] }
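The LEACH-style cluster-head rotation referred to above is driven by a simple probabilistic threshold. The sketch below implements the commonly cited election rule for a desired cluster-head fraction p, as a rough illustration rather than a faithful reproduction of the protocol in @cite_16; the bookkeeping of when a node last served is our own simplification.

import random

def leach_elect(nodes, round_no, p=0.1):
    """One round of LEACH-style cluster-head election.
    nodes: dict node_id -> round at which the node last served as cluster head (or None)."""
    epoch = int(round(1 / p))                # every node should serve once per epoch
    threshold = p / (1 - p * (round_no % epoch))
    heads = []
    for n, last in nodes.items():
        eligible = last is None or round_no - last >= epoch
        if eligible and random.random() < threshold:
            heads.append(n)
            nodes[n] = round_no              # remember when this node served as cluster head
    return heads

random.seed(1)
nodes = {i: None for i in range(20)}
for r in range(10):
    print("round", r, "cluster heads:", leach_elect(nodes, r))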
Some work in the literature also considers in-network processing. In @cite_8 , data aggregation trees are constructed and scheduled, and the network can be reconfigured in that different trees can be used in different time periods. This work again uses the traditional WSN model of many homogeneous sensor nodes all sending measurements to a single sink. The scenario considered in @cite_23 focuses on a machine-to-machine communication application similar to the one we consider, including the presence of edge nodes in the network. However, the problem addressed there is the placement of data on these edge nodes in order to maximize the network lifetime under latency constraints. Routing is performed by selecting the paths that yield the maximum lifetime, defined as the time until any node runs out of energy; reconfiguration of the network as we propose in this paper is not considered.
{ "cite_N": [ "@cite_23", "@cite_8" ], "mid": [ "2811107380", "2945696784" ], "abstract": [ "Maintaining critical data access latency requirements and ensuring efficient energy consumption on the field devices are two important challenges of Industry 4.0. The traditional, centralized industrial networks which offer primitive data distribution functions, might be incapable of meeting such strict requirements. In this paper, in order to overcome this issue, we focus on a network of resource-constrained IoT devices, and exploit the presence of a few more capable Edge nodes that act as distributed local data storing proxies for all IoT devices. We show that, given the proxy locations in the network, the initial energy supplies of the nodes, a pattern of data requests from IoT devices, and the maximum access latency that consumer nodes can tolerate, the problem of finding how to distribute the data on the Edge nodes maximizing the network lifetime is computationally hard. We design an offline centralized heuristic algorithm for identifying which paths in the network the data should follow and on which proxies they should be cached, in order to meet the data access latency constraint and to maximize the network lifetime. We implement the method and evaluate its performance using a testbed of IEEE 802.15.4-enabled network nodes. We demonstrate that the proposed heuristic (i) guarantees data access latency below the given threshold, and (ii) performs well in terms of network lifetime with respect to a theoretically optimal solution.", "Abstract Data gathering is a basic requirement in many applications of Wireless Sensor Networks (WSNs). In tree based data gathering, Data Aggregation Tree (DAT) is constructed by the sink or by the nodes in a distributed manner. In this paper, we study the problem of enhancing Network Lifetime (NL) using hybrid DAT construction methods. In hybrid methods of DAT construction, the sink and the nodes collaboratively construct the DAT. We propose three algorithms for Scheduling DATs using Local Heuristics with Ordering (SDLHO), with Randomization (SDLHR) and with Tree factor (SDLHT) techniques. These techniques avoid disparity in energy levels of the nodes and increase the survivability of the network. In addition, to address imperfect link quality, we propose an algorithm for Scheduling DATs using Local Heuristics with Ordering based on Link Quality (SDLHO-LQ). Rigorous simulation results demonstrate the efficacy of the proposed algorithms and their ability to scale up to suit deployment of applications in harsh regions. Further, their performances evaluated to quantify the amount of enhancements of NL with the existing state of art is propitious to suit the distributed environments." ] }
A few general frameworks for maximizing network lifetime have also been developed. In @cite_25 , the focus is on network deployment, specifically the initial energy allocated to each node. Once again nodes are homogeneous, with all nodes collecting data and transmitting it to their neighbors, and the definition of network lifetime is the time until the first sensor depletes its battery. A more general definition of network lifetime is used in @cite_1 , which applies a framework based on channel states aimed at developing medium access protocols for improved lifetime. However, nodes have fixed roles and only a single application is considered.
{ "cite_N": [ "@cite_1", "@cite_25" ], "mid": [ "2099641228", "2138122413" ], "abstract": [ "We derive a general formula for the lifetime of wireless sensor networks which holds independently of the underlying network model including network architecture and protocol, data collection initiation, lifetime definition, channel fading characteristics, and energy consumption model. This formula identifies two key parameters at the physical layer that affect the network lifetime: the channel state and the residual energy of sensors. As a result, it provides not only a gauge for performance evaluation of sensor networks but also a guideline for the design of network protocols. Based on this formula, we propose a medium access control protocol that exploits both the channel state information and the residual energy information of individual sensors. Referred to as the max-min approach, this protocol maximizes the minimum residual energy across the network in each data collection.", "In multihop wireless sensor networks that are often characterized by many-to-one (convergecast) traffic patterns, problems related to energy imbalance among sensors often appear. Sensors closer to a data sink are usually required to forward a large amount of traffic for sensors farther from the data sink. Therefore, these sensors tend to die early, leaving areas of the network completely unmonitored and reducing the functional network lifetime. In our study, we explore possible sensor network deployment strategies that maximize sensor network lifetime by mitigating the problem of the hot spot around the data sink. Strategies such as variable-range transmission power control with optimal traffic distribution, mobile-data-sink deployment, multiple-data-sink deployment, nonuniform initial energy assignment, and intelligent sensor relay deployment are investigated. We suggest a general model to analyze and evaluate these strategies. In this model, we not only discover how to maximize the network lifetime given certain network constraints but also consider the factor of extra costs involved in more complex deployment strategies. This paper presents a comprehensive analysis on the maximum achievable sensor network lifetime for different deployment strategies, and it also provides practical cost-efficient sensor network deployment guidelines." ] }
1908.04924
2967111226
Locality preserving projection (LPP) is a classical dimensionality reduction method based on data graph information. However, LPP is still sensitive to extreme outliers. Moreover, since LPP is designed for vectorial data, it may discard structural information when applied to multidimensional data. Besides, it assumes the dimension of the data to be smaller than the number of instances, which is not suitable for high-dimensional data. For high-dimensional data analysis, the tensor-train decomposition has been shown to efficiently and effectively capture spatial relations. Thus, we propose a tensor-train parameterization for ultra dimensionality reduction (TTPUDR) in which the traditional LPP mapping is tensorized in terms of tensor-trains and the LPP objective is replaced with the Frobenius norm to increase the robustness of the model. A manifold optimization technique is utilized to solve the new model. The performance of TTPUDR is assessed on classification problems, where it significantly outperforms both earlier methods and several state-of-the-art methods.
To preserve spatial information within tensors during dimensionality reduction, @cite_17 introduces the Tucker LPP (TLPP), a variant of LPP based on the Tucker decomposition for analyzing high-dimensional data; however, its storage complexity increases exponentially as the number of modes increases.
{ "cite_N": [ "@cite_17" ], "mid": [ "16346066" ], "abstract": [ "Over the past few years, some embedding methods have been proposed for feature extraction and dimensionality reduction in various machine learning and pattern classification tasks. Among the methods proposed are Neighborhood Preserving Embedding (NPE), Locality Preserving Projection (LPP) and Local Discriminant Embedding (LDE) which have been used in such applications as face recognition and image video retrieval. However, although the data in these applications are more naturally represented as higher-order tensors, the embedding methods can only work with vectorized data representations which may not capture well some useful information in the original data. Moreover, high-dimensional vectorized representations also suffer from the curse of dimensionality and the high computational demand. In this paper, we propose some novel tensor embedding methods which, unlike previous methods, take data directly in the form of tensors of arbitrary order as input. These methods allow the relationships between dimensions of a tensor representation to be efficiently characterized. Moreover, they also allow the intrinsic local geometric and topological properties of the manifold embedded in a tensor space to be naturally estimated. Furthermore, they do not suffer from the curse of dimensionality and the high computational demand. We demonstrate the effectiveness of the proposed tensor embedding methods on a face recognition application and compare them with some previous methods. Extensive experiments show that our methods are not only more effective but also more efficient." ] }
The other existing dimensionality reduction method that embeds the TT subspace is the tensor train neighbourhood preserving embedding (TTNPE) @cite_2 . TTNPE avoids the exponential growth in complexity as the number of modes increases. However, its robustness to extreme outliers remains a concern. A dimensionality reduction method that operates on the TT subspace for tensors with a large number of modes or dimensions, while also reducing sensitivity to extreme outliers, is therefore still needed. Our method, TTPUDR, is developed to address all of these aspects.
{ "cite_N": [ "@cite_2" ], "mid": [ "2789880577" ], "abstract": [ "Tensor train is a hierarchical tensor network structure that helps alleviate the curse of dimensionality by parameterizing large-scale multidimensional data via a set of network of low-rank tensors. Associated with such a construction is a notion of Tensor Train subspace and in this paper we propose a TT-PCA algorithm for estimating this structured subspace from the given data. By maintaining low rank tensor structure, TT-PCA is more robust to noise comparing with PCA or Tucker-PCA. This is borne out numerically by testing the proposed approach on the Extended YaleFace Dataset B." ] }
We denote the left unfolding operation @cite_2 of @math by the matrix @math , in which the last mode of the tensor indexes the columns of the left unfolding matrix and the remaining modes index the rows. Similarly, the right unfolding operation is denoted by @math , and the vectorization of a tensor by @math . The F-norm of a tensor can then be defined as the @math -norm of its vectorization, i.e., @math , which treats all elements @math as a single group and thus preserves the overall spatial relations between elements. In contrast, the @math -norm of a tensor, computed as @math , treats each element separately and may therefore lose spatial information.
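A small numpy illustration of these reshaping operations and the two norms, using our own helper names; the right-unfolding convention shown here (first mode as rows, remaining modes as columns) is an assumption consistent with @cite_2 rather than something defined in the paragraph above.

import numpy as np

def left_unfold(t):
    # fold all modes except the last into rows; the last mode indexes the columns
    return t.reshape(-1, t.shape[-1])

def right_unfold(t):
    # the first mode indexes the rows; all remaining modes are folded into the columns
    return t.reshape(t.shape[0], -1)

t = np.arange(24, dtype=float).reshape(2, 3, 4)   # a 2 x 3 x 4 tensor
vec = t.reshape(-1)                               # vectorization

f_norm = np.linalg.norm(vec)          # F-norm = l2-norm of the vectorization
l1_norm = np.abs(vec).sum()           # l1-norm treats each element separately

print(left_unfold(t).shape, right_unfold(t).shape)   # (6, 4) (2, 12)
print(f_norm, l1_norm)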
The tensor-train (TT) decomposition is designed for large-scale data analysis @cite_3 . It admits a simpler implementation than the tree-type decomposition algorithms @cite_18 , which were developed to reduce storage complexity and avoid local minima.
{ "cite_N": [ "@cite_18", "@cite_3" ], "mid": [ "1995406764", "1993482030" ], "abstract": [ "For @math -dimensional tensors with possibly large @math , an hierarchical data structure, called the Tree-Tucker format, is presented as an alternative to the canonical decomposition. It has asymptotically the same (and often even smaller) number of representation parameters and viable stability properties. The approach involves a recursive construction described by a tree with the leafs corresponding to the Tucker decompositions of three-dimensional tensors, and is based on a sequence of SVDs for the recursively obtained unfolding matrices and on the auxiliary dimensions added to the initial “spatial” dimensions. It is shown how this format can be applied to the problem of multidimensional convolution. Convincing numerical examples are given.", "A simple nonrecursive form of the tensor decomposition in @math dimensions is presented. It does not inherently suffer from the curse of dimensionality, it has asymptotically the same number of parameters as the canonical decomposition, but it is stable and its computation is based on low-rank approximation of auxiliary unfolding matrices. The new form gives a clear and convenient way to implement all basic operations efficiently. A fast rounding procedure is presented, as well as basic linear algebra operations. Examples showing the benefits of the decomposition are given, and the efficiency is demonstrated by the computation of the smallest eigenvalue of a 19-dimensional operator." ] }
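As a sketch of how a TT decomposition can be computed, the following implements the standard TT-SVD idea (sequential reshaping and truncated SVDs). The function name, the rank cap max_rank, and the tolerance are our own choices; this is an illustration rather than the exact algorithm of @cite_3.

import numpy as np

def tt_svd(x, max_rank=50):
    # Decompose x into TT-cores G_k of shape (r_{k-1}, n_k, r_k) by sequential SVDs.
    dims = x.shape
    cores, r_prev = [], 1
    c = x.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        c = c.reshape(r_prev * dims[k], -1)
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        r = int(min(max_rank, (s > 1e-12).sum()))   # drop negligible singular values
        cores.append(u[:, :r].reshape(r_prev, dims[k], r))
        c = s[:r, None] * vt[:r]                    # carry the remainder to the next mode
        r_prev = r
    cores.append(c.reshape(r_prev, dims[-1], 1))
    return cores

x = np.random.rand(4, 5, 6, 3)
cores = tt_svd(x)
print([g.shape for g in cores])

# contract the cores back into a full tensor to check the decomposition
full = cores[0]
for g in cores[1:]:
    full = np.tensordot(full, g, axes=(full.ndim - 1, 0))
print(np.abs(full.reshape(x.shape) - x).max())      # near machine precision when nothing is truncated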
For most applications, in order to achieve computational efficiency and reduce information redundancy, researchers often restrict the tensor ranks to be smaller than the size of the corresponding tensor modes, i.e., @math for @math @cite_2 .
Given a set of vectorial training data @math and an affinity matrix of locality similarity @math , LPP seeks a linear projection @math from @math to @math by solving the following optimization problem, which minimizes the locality preserving criterion set as the objective function. The widely used affinity @math is based on the graph of neighborhood information in the data, as follows @cite_5 , where @math is a positive parameter and @math denotes the @math -nearest neighborhood of @math .
{ "cite_N": [ "@cite_5" ], "mid": [ "2154872931" ], "abstract": [ "Many problems in information processing involve some form of dimensionality reduction. In this paper, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. This is borne out by illustrative examples on some high dimensional data sets." ] }
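For concreteness, the sketch below builds a heat-kernel affinity on a k-nearest-neighbour graph and solves the resulting generalized eigenproblem. The helper names, the scipy/scikit-learn routines, and the small regularization term are our own choices and not part of @cite_5.

import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def lpp(X, n_components=2, k=5, t=1.0):
    """Locality preserving projections for row-wise samples X (n_samples x n_features)."""
    # symmetric k-NN graph with heat-kernel weights W_ij = exp(-||xi - xj||^2 / t)
    dist = kneighbors_graph(X, k, mode="distance", include_self=False)
    W = dist.maximum(dist.T).toarray()
    mask = W > 0
    W[mask] = np.exp(-W[mask] ** 2 / t)

    D = np.diag(W.sum(axis=1))
    L = D - W                                     # graph Laplacian
    # generalized eigenproblem X^T L X a = lambda X^T D X a; keep the smallest eigenvalues
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-9 * np.eye(X.shape[1])   # regularize for numerical stability
    vals, vecs = eigh(A, B)
    return vecs[:, :n_components]                 # columns are the projection directions

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
P = lpp(X)
Y = X @ P                                         # embedded data (100 x 2)
print(P.shape, Y.shape)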
LPP is a classical dimensionality reduction method and has been applied in many real cases, for example, computer vision @cite_8 . It captures the local information among the data points and is less sensitive to outliers than PCA. However, we observe the following shortcomings of LPP. LPP is designed for vectorial data, so when it is applied to multi-dimensional data, i.e., tensors, there is a potential loss of spatial information. The existing tensor locality preserving projection, i.e., the Tucker LPP (TLPP) @cite_17 , embeds the tensor space with a high storage complexity of @math . Theoretically, LPP cannot work for cases where the data dimension is greater than the number of samples. Although this can be avoided by a trick in which one first projects the data onto its PCA subspace and then applies LPP in this subspace (http://www.cad.zju.edu.cn/home/dengcai/Data/code/LPP.m), this would not work well for ultra-dimensional data with a fairly large dataset, as the singular value decomposition (SVD) becomes a bottleneck.
{ "cite_N": [ "@cite_17", "@cite_8" ], "mid": [ "16346066", "2089035607" ], "abstract": [ "Over the past few years, some embedding methods have been proposed for feature extraction and dimensionality reduction in various machine learning and pattern classification tasks. Among the methods proposed are Neighborhood Preserving Embedding (NPE), Locality Preserving Projection (LPP) and Local Discriminant Embedding (LDE) which have been used in such applications as face recognition and image video retrieval. However, although the data in these applications are more naturally represented as higher-order tensors, the embedding methods can only work with vectorized data representations which may not capture well some useful information in the original data. Moreover, high-dimensional vectorized representations also suffer from the curse of dimensionality and the high computational demand. In this paper, we propose some novel tensor embedding methods which, unlike previous methods, take data directly in the form of tensors of arbitrary order as input. These methods allow the relationships between dimensions of a tensor representation to be efficiently characterized. Moreover, they also allow the intrinsic local geometric and topological properties of the manifold embedded in a tensor space to be naturally estimated. Furthermore, they do not suffer from the curse of dimensionality and the high computational demand. We demonstrate the effectiveness of the proposed tensor embedding methods on a face recognition application and compare them with some previous methods. Extensive experiments show that our methods are not only more effective but also more efficient.", "Locality preserving projection (LPP) is a manifold learning method widely used in pattern recognition and computer vision. The face recognition application of LPP is known to suffer from a number of problems including the small sample size (SSS) problem, the fact that it might produce statistically identical transform results for neighboring samples, and that its classification performance seems to be heavily influenced by its parameters. In this paper, we propose three novel solution schemes for LPP. Experimental results also show that the proposed LPP solution scheme is able to classify much more accurately than conventional LPP and to obtain a classification performance that is only little influenced by the definition of neighbor samples." ] }
1908.04924
2967111226
Locality preserving projections (LPP) are a classical dimensionality reduction method based on data graph information. However, LPP is still sensitive to extreme outliers. Since LPP targets vectorial data, it may discard structural information when applied to multidimensional data. Besides, it assumes the dimension of the data to be smaller than the number of instances, which is not suitable for high-dimensional data. For high-dimensional data analysis, the tensor-train decomposition has proved able to capture spatial relations efficiently and effectively. Thus, we propose a tensor-train parameterization for ultra dimensionality reduction (TTPUDR) in which the traditional LPP mapping is tensorized in terms of tensor-trains and the LPP objective is replaced with the Frobenius norm to increase the robustness of the model. A manifold optimization technique is utilized to solve the new model. The performance of TTPUDR is assessed on classification problems, and TTPUDR significantly outperforms earlier methods as well as several state-of-the-art methods.
The TT decomposition, with a smaller storage complexity of @math , has recently been applied in the tensor train neighborhood preserving embedding (TTNPE) @cite_2 @cite_0 . Nevertheless, the actual algorithm in TTNPE is only implemented as a TT approximation to a pseudo PCA. To the best of our knowledge, there is no existing dimensionality reduction method that can directly process tensor data with such a reduced storage complexity, i.e., by using the TT decomposition inside the algorithm.
{ "cite_N": [ "@cite_0", "@cite_2" ], "mid": [ "2963465654", "2789880577" ], "abstract": [ "In this paper, we propose a tensor train neighborhood preserving embedding (TTNPE) to embed multidimensional tensor data into low-dimensional tensor subspace. Novel approaches to solve the optimization problem in TTNPE are proposed. For this embedding, we evaluate a novel tradeoff gain among classification, computation, and dimensionality reduction (storage) for supervised learning. It is shown that compared to the state-of-the-arts tensor embedding methods, TTNPE achieves superior tradeoff in classification, computation, and dimensionality reduction in MNIST handwritten digits, Weizmann face datasets, and financial market datasets.", "Tensor train is a hierarchical tensor network structure that helps alleviate the curse of dimensionality by parameterizing large-scale multidimensional data via a set of network of low-rank tensors. Associated with such a construction is a notion of Tensor Train subspace and in this paper we propose a TT-PCA algorithm for estimating this structured subspace from the given data. By maintaining low rank tensor structure, TT-PCA is more robust to noise comparing with PCA or Tucker-PCA. This is borne out numerically by testing the proposed approach on the Extended YaleFace Dataset B." ] }
1908.05085
2968619508
The use of fingerprinting localization techniques in outdoor IoT settings has started to gain popularity over the recent years. Communication signals of Low Power Wide Area Networks (LPWAN), such as LoRaWAN, are used to estimate the location of low power mobile devices. In this study, a publicly available dataset of LoRaWAN RSSI measurements is utilized to compare different machine learning methods and their accuracy in producing location estimates. The tested methods are: the k Nearest Neighbours method, the Extra Trees method and a neural network approach using a Multilayer Perceptron. To facilitate the reproducibility of tests and the comparability of results, the code and the train validation test split of the dataset used in this study have become available. The neural network approach was the method with the highest accuracy, achieving a mean error of 358 meters and a median error of 204 meters.
Fingerprinting has been a broadly studied method of indoor positioning @cite_12 . In particular, RSSI has been the main type of signal used @cite_12 . It has been only a few years since fingerprinting techniques were transferred to the outdoor world, and in particular to LPWAN settings. In a recent study, @cite_1 made three fingerprinting datasets of Low Power Wide Area Networks publicly available. One of these datasets contains LoRaWAN RSSI measurements collected in the urban area of the city of Antwerp, in Belgium. The motivation for making the datasets publicly available was to provide the global research community with a benchmark tool to evaluate fingerprinting algorithms for LPWAN standards. In that work, the use of the presented LoRaWAN dataset by a k Nearest Neighbours fingerprinting method was exemplified, achieving a mean localization error of 398 meters. To the best of our knowledge, there is no follow-up study so far that utilizes this dataset.
{ "cite_N": [ "@cite_1", "@cite_12" ], "mid": [ "2791401550", "2607595839" ], "abstract": [ "Because of the increasing relevance of the Internet of Things and location-based services, researchers are evaluating wireless positioning techniques, such as fingerprinting, on Low Power Wide Area Network (LPWAN) communication. In order to evaluate fingerprinting in large outdoor environments, extensive, time-consuming measurement campaigns need to be conducted to create useful datasets. This paper presents three LPWAN datasets which are collected in large-scale urban and rural areas. The goal is to provide the research community with a tool to evaluate fingerprinting algorithms in large outdoor environments. During a period of three months, numerous mobile devices periodically obtained location data via a GPS receiver which was transmitted via a Sigfox or LoRaWAN message. Together with network information, this location data is stored in the appropriate LPWAN dataset. The first results of our basic fingerprinting implementation, which is also clarified in this paper, indicate a mean location estimation error of 214.58 m for the rural Sigfox dataset, 688.97 m for the urban Sigfox dataset and 398.40 m for the urban LoRaWAN dataset. In the future, we will enlarge our current datasets and use them to evaluate and optimize our fingerprinting methods. Also, we intend to collect additional datasets for Sigfox, LoRaWAN and NB-IoT.", "The widely applied location-based services require a high standard for positioning technology. Currently, outdoor positioning has been a great success; however, indoor positioning technologies are in the early stages of development. Therefore, this paper provides an overview of indoor fingerprint positioning based on Wi-Fi. First, some indoor positioning technologies, especially the Wi-Fi fingerprint indoor positioning technology, are introduced and discussed. Second, some evaluation metrics and influence factors of indoor fingerprint positioning technologies based on Wi-Fi are introduced. Third, methods and algorithms of fingerprint indoor positioning technologies are analyzed, classified, and discussed. Fourth, some widely used assistive positioning technologies are described. Finally, conclusions are drawn and future possible research interests are discussed. It is hoped that this research will serve as a stepping stone for those interested in advancing indoor positioning." ] }
1908.05085
2968619508
The use of fingerprinting localization techniques in outdoor IoT settings has started to gain popularity over the recent years. Communication signals of Low Power Wide Area Networks (LPWAN), such as LoRaWAN, are used to estimate the location of low power mobile devices. In this study, a publicly available dataset of LoRaWAN RSSI measurements is utilized to compare different machine learning methods and their accuracy in producing location estimates. The tested methods are: the k Nearest Neighbours method, the Extra Trees method and a neural network approach using a Multilayer Perceptron. To facilitate the reproducibility of tests and the comparability of results, the code and the train validation test split of the dataset used in this study have become available. The neural network approach was the method with the highest accuracy, achieving a mean error of 358 meters and a median error of 204 meters.
@cite_0 have experimentally evaluated RSS- and TDoA-based ranging positioning methods using a LoRaWAN network, reporting median errors of 1250 and 200 meters for RSS and TDoA respectively. Other works @cite_2 , @cite_5 have focused on rather specific settings over which they evaluate positioning methods. These works @cite_2 , @cite_5 present experiments in car-parking settings, testing in confined areas with a placement of base stations adapted to their use case, and report a low error on the scale of a few tens of meters.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_2" ], "mid": [ "2901105864", "2902596366", "2900294995" ], "abstract": [ "This paper experimentally compares the positioning accuracy of TDoA-based and RSS-based localization in a public outdoor LoRa network in the Netherlands. The performance of different Received Signal Strength (RSS)-based approaches (proximity, centroid, map matching,…) is compared with Time-Difference-of-Arrival (TDoA) performance. The number of RSS and TDoA location updates and the positioning accuracy per spreading factor (SF) is assessed, allowing to select the optimal SF choice for the network. A road mapping filter is applied to the raw location estimates for the best algorithms and SFs. RSS-based approaches have median and maximal errors that are limited to 1000 m and 2000 m respectively, using a road mapping filter. Using the same filter, TDoA-based approaches deliver median and maximal errors in the order of 150 m and 350 m respectively. However, the number of location updates per time unit using SF7 is around 10 times higher for RSS algorithms than for the TDoA algorithm.", "In this paper, we present an RSSI-based localization solution that merely relies on the LoRa technology without requiring any anchors. It is primarily designed for large used car dealerships. Cars to be tracked are fitted with battery- powered tags that send each other plain LoRa messages and then report over LoRaWAN the observed RSSI to the server. The server estimates tag coordinates using multi-dimensional scaling and a novel corrective transformation based on tag clustering. We designed a low-cost tag for RSSI measurement and an energy efficient solution for managing a mesh of 1000 tags on one site using time division multiplexing. The tags are fully controlled and synchronized by the server. To stay synchronized for several hours until a next downlink the tags estimate and compensate the real-time clock error. Large-scale tests are yet to be performed, but lab-based and small-scale tests in real conditions show a maximal error of 8 meters and a battery lifetime of 5 years.", "Positioning is an essential element in most Internet of Things (IoT) applications. Global Positioning System (GPS) chips have high cost and power consumption, making it unsuitable for long-range (LoRa) and low-power IoT devices. Alternatively, low-power wide-area (LPWA) signals can be used for simultaneous positioning and communication. We summarize previous studies related to LoRa signal-based positioning systems, including those addressing proximity, a path loss model, time difference of arrival (TDoA), and fingerprint positioning methods. We propose a LoRa signal-based positioning method that uses a fingerprint algorithm instead of a received signal strength indicator (RSSI) proximity or TDoA method. The main objective of this study was to evaluate the accuracy and usability of the fingerprint algorithm for large areas in the real world. We estimated the locations using probabilistic means based on three different algorithms that use interpolated fingerprint RSSI maps. The average accuracy of the three proposed algorithms in our experiments was 28.8 m. Our method also reduced the battery consumption significantly compared with that of existing GPS-based positioning methods." ] }
1908.05085
2968619508
The use of fingerprinting localization techniques in outdoor IoT settings has started to gain popularity over the recent years. Communication signals of Low Power Wide Area Networks (LPWAN), such as LoRaWAN, are used to estimate the location of low power mobile devices. In this study, a publicly available dataset of LoRaWAN RSSI measurements is utilized to compare different machine learning methods and their accuracy in producing location estimates. The tested methods are: the k Nearest Neighbours method, the Extra Trees method and a neural network approach using a Multilayer Perceptron. To facilitate the reproducibility of tests and the comparability of results, the code and the train validation test split of the dataset used in this study have become available. The neural network approach was the method with the highest accuracy, achieving a mean error of 358 meters and a median error of 204 meters.
General-purpose fingerprinting methods in LPWAN settings have been presented and discussed in recent works @cite_6 , @cite_7 . @cite_6 have utilized a Sigfox dataset to apply a kNN algorithm and selected the best-performing ones among a variety of distance metrics and data representations, resulting in a mean positioning error of 340 meters. In addition, in our previous work @cite_7 , we went further in analysing the same Sigfox dataset by tuning relevant parameters of the discussed preprocessing schemes, reducing the mean error to 298 meters. As was done in our previous work @cite_7 , in order to facilitate the comparability of results, we also share the train validation test sets used in the current work.
{ "cite_N": [ "@cite_7", "@cite_6" ], "mid": [ "2968588162", "2901188462" ], "abstract": [ "Fingerprinting techniques, which are a common method for indoor localization, have been recently applied with success into outdoor settings. Particularly, the communication signals of Low Power Wide Area Networks (LPWAN) such as Sigfox, have been used for localization. In this rather recent field of study, not many publicly available datasets, which would facilitate the consistent comparison of different positioning systems, exist so far. In the current study, a published dataset of RSSI measurements on a Sigfox network deployed in Antwerp, Belgium is used to analyse the appropriate selection of preprocessing steps and to tune the hyperparameters of a kNN fingerprinting method. Initially, the tuning of hyperparameter k for a variety of distance metrics, and the selection of efficient data transformation schemes, proposed by relevant works, is presented. In addition, accuracy improvements are achieved in this study, by a detailed examination of the appropriate adjustment of the parameters of the data transformation schemes tested, and of the handling of out of range values. With the appropriate tuning of these factors, the achieved mean localization error was 298 meters, and the median error was 109 meters. To facilitate the reproducibility of tests and comparability of results, the code and train validation test split used in this study are available.", "The Internet of Things (IoT) has caused the modern society to connect everything in our environment to a network. In a myriad of IoT applications, smart devices need to be located. This can easily be done by satellite based receivers. However, there are more energy-efficient localization technologies, especially in Low Power Wide Area Networks (LPWAN). In this research, we discuss the accuracy of an outdoor fingerprinting technique using a large outdoor Sigfox dataset which is openly available. A kNN (k Nearest Neighbors) algorithm is applied to our fingerprinting database. 31 different distance functions and four RSS data representations are evaluated. Our analysis shows that a Sigfox transmitter can be located with a mean estimation error of 340 meters." ] }
1908.04465
2967650664
We explore the challenges and opportunities of shifting industrial control software from dedicated hardware to bare-metal servers or cloud computing platforms using off-the-shelf technologies. In particular, we demonstrate that executing time-critical applications on cloud platforms is viable based on a series of dedicated latency tests targeting relevant real-time configurations.
Containerizing control applications has been discussed in recent literature. @cite_6 , for instance, presented the concept of containerizing full control applications as a means to decouple the hardware and software life-cycles of an industrial automation system. Due to the performance overhead of hardware virtualization, the authors state that OS-level virtualization is a suitable technique to cope with the timing demands of automation systems. They propose two approaches to migrate a control application into containers on top of a patched real-time Linux-based operating system: (i) a given system is decomposed into subsystems, where a set of sub-units performs a localized computation that is then actuated through a global decision maker; (ii) devices are defined as a set of processes, where each process is an isolated standalone solution with a shared communication stack. Based on this, systems are divided into specialized modules, allowing a granular development and update strategy. The authors demonstrate the feasibility of real-time applications in conjunction with containerization, even though they express concerns about the maturity of the presented technical solution.
{ "cite_N": [ "@cite_6" ], "mid": [ "2487718634" ], "abstract": [ "Virtualization is entering the world of real-time embedded systems. Industrial automation systems, in particular, can benefit from what virtualization has to offer: flexible consolidation of applications on different hardware types and scales, extending the life-time of legacy code or decoupling software and hardware lifecycles. However, such systems require a light-weight virtualization technology in order to be able to maintain real-time behavior while dealing with real-time data. This paper sets out to investigate the applicability of container-based OS-level virtualization technology to industrial automation systems. To this end, we provide insights into the capabilities of containers to achieve flexible consolidation and easy migration of industrial automation applications as well as into the container technology readiness with respect to the fundamental requirement of industrial automation systems, namely performing timely control actions based on real-time data. Moreover, we provide an empirical study of the performance overhead introduced by containers based on micro-benchmarks that capture the characteristics of targeted industrial automation applications." ] }
1908.04465
2967650664
We explore the challenges and opportunities of shifting industrial control software from dedicated hardware to bare-metal servers or cloud computing platforms using off-the-shelf technologies. In particular, we demonstrate that executing time-critical applications on cloud platforms is viable based on a series of dedicated latency tests targeting relevant real-time configurations.
Goldschmidt and Hauk-Stattelmann in @cite_9 perform benchmark tests on modularized industrial Programmable Logic Controller (PLC) applications. Their analysis examines the impact of container-based virtualization on real-time constraints. As there is no solution for migrating legacy PLC code, the migration to application containers could extend a system's lifetime beyond the physical device's limits. Even though tests showed worst-case latencies of the order of @math on Intel-based hosts, the authors argue that the container engines may be stripped down and optimized for real-time execution. In a follow-up work @cite_28 , a possible multi-purpose architecture was described and tested in a real-world use case. The results show worst-case latencies in the range of @math for a Raspberry Pi single-board computer, making the solution viable for cycle times in the range of @math to @math . The authors state that topics such as memory overhead, containers' restricted access and problems due to technology immaturity are still to be investigated.
{ "cite_N": [ "@cite_28", "@cite_9" ], "mid": [ "2792724737", "2534821673" ], "abstract": [ "Abstract Cyber-physical systems and the Internet-of-Things are getting more and more traction in different application areas. Boosted by initiatives such as Industrie 4.0 in Germany or the Industrial Internet Consortium in the US, they are enablers for innovation in industrial automation. To provide the advanced flexibility in production envisioned for future automation systems, Programmable Logic Controllers (PLCs), as one of their main building blocks, also need to become more flexible. However, the conservative nature of this domain prohibits changes in the controller architecture impacting the installed base. Currently there exist various approaches that evolve control architectures to the next level, but none of them address flexible function deployment at the same time with legacy support. In this paper, we present an architecture for a multi-purpose controller that is inspired by the virtualization trend in cloud systems which moves from heavyweight virtual machines to lightweight containers solutions such as LXC or Docker. Our solution includes the support for multiple PLC execution engines and adds support for the emulation of legacy engines as well. We evaluate this architecture by executing performance measurements that analyze the impact of container technologies to the real-time aspects of PLC engines.", "Cyber-physical systems and the Internet-of-Things are getting more and more traction in different application areas. Boosted by initiatives such as Industrie 4.0 in Germany or the Industrial Internet Consortium in the US, they are enablers for innovation in industrial automation. To provide the advanced flexibility in production envisioned for future automation systems, Programmable Logic Controllers (PLCs), as one of their main building blocks, also need to become more flexible. However, the conservative nature of this domain prohibits changes in the controller architecture impacting the installed base. Currently there exist various approaches that evolve control architectures to the next level, but none of them address flexible function deployment at the same time with legacy support. In this paper, we present a an architecture for a multi-purpose controller that is inspired by the virtualization trend in cloud systems which moves from heavyweight virtual machines to lightweight containers solutions such as LXC or Docker. Our solution includes the support for multiple PLC execution engines and adds support for the emulation of legacy engines as well. We evaluate this architecture by executing performance measurements that analyze the impact of container technologies to the real-time aspects of PLC engines." ] }
1908.04465
2967650664
We explore the challenges and opportunities of shifting industrial control software from dedicated hardware to bare-metal servers or cloud computing platforms using off-the-shelf technologies. In particular, we demonstrate that executing time-critical applications on cloud platforms is viable based on a series of dedicated latency tests targeting relevant real-time configurations.
@cite_0 address architectural details not discussed in @cite_9 and @cite_28 . These additions include the concrete run-time environment and how deterministic communication between containers and field devices may be achieved in a novel container-based architecture. They proposed a Linux-based solution as the host operating system, including both the single-kernel, preemption-focused PREEMPT-RT patch and the co-kernel-oriented Xenomai. With this patch, the approach exhibits better predictability, although it suffers from security concerns introduced by exposed system files required by Xenomai. For this reason, they suggested limiting its application for safety-critical code execution. They analyzed and discussed inter-process messaging in detail, focusing on the specific properties needed in real-time applications. Finally, they implemented an orchestration run-time managing intra-container communication and showed that task times as low as @math are possible.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_28" ], "mid": [ "2885043734", "2534821673", "2792724737" ], "abstract": [ "Internet-of-Things and cyber-pyhsical systems are gaining ongoing importance in the field of industrial automation. At the same time, the production becomes more and more flexible. Therefore, the main parts, such as Programmable Logic Controllers (PLCs) need to be flexible as well. However, today's PLCs are monolithic and distributed as a single deployable piece of software. In this paper we propose an architecture that uses containers to modularize real-time control applications, messaging for communication and a hardware abstraction layer to improve maintainability, re-usability and flexibility. Using a prototypical implementation of the architecture, we validate the feasibility of this approach through a benchmark", "Cyber-physical systems and the Internet-of-Things are getting more and more traction in different application areas. Boosted by initiatives such as Industrie 4.0 in Germany or the Industrial Internet Consortium in the US, they are enablers for innovation in industrial automation. To provide the advanced flexibility in production envisioned for future automation systems, Programmable Logic Controllers (PLCs), as one of their main building blocks, also need to become more flexible. However, the conservative nature of this domain prohibits changes in the controller architecture impacting the installed base. Currently there exist various approaches that evolve control architectures to the next level, but none of them address flexible function deployment at the same time with legacy support. In this paper, we present a an architecture for a multi-purpose controller that is inspired by the virtualization trend in cloud systems which moves from heavyweight virtual machines to lightweight containers solutions such as LXC or Docker. Our solution includes the support for multiple PLC execution engines and adds support for the emulation of legacy engines as well. We evaluate this architecture by executing performance measurements that analyze the impact of container technologies to the real-time aspects of PLC engines.", "Abstract Cyber-physical systems and the Internet-of-Things are getting more and more traction in different application areas. Boosted by initiatives such as Industrie 4.0 in Germany or the Industrial Internet Consortium in the US, they are enablers for innovation in industrial automation. To provide the advanced flexibility in production envisioned for future automation systems, Programmable Logic Controllers (PLCs), as one of their main building blocks, also need to become more flexible. However, the conservative nature of this domain prohibits changes in the controller architecture impacting the installed base. Currently there exist various approaches that evolve control architectures to the next level, but none of them address flexible function deployment at the same time with legacy support. In this paper, we present an architecture for a multi-purpose controller that is inspired by the virtualization trend in cloud systems which moves from heavyweight virtual machines to lightweight containers solutions such as LXC or Docker. Our solution includes the support for multiple PLC execution engines and adds support for the emulation of legacy engines as well. We evaluate this architecture by executing performance measurements that analyze the impact of container technologies to the real-time aspects of PLC engines." ] }
1908.04574
2968985217
The resolution of domain names into IP addresses can significantly delay connection establishment on the web. Moreover, the common use of recursive DNS resolvers presents a privacy risk as they can closely monitor the user's browsing activities. In this paper, we present a novel HTTP response header allowing web servers to provide their clients with relevant DNS records. Our results indicate that this resolver-less DNS mechanism allows user agents to save the DNS lookup time for subsequent connection establishments. We find that this proposal saves at least 80ms per DNS lookup for the one percent of users having the longest round-trip times towards their recursive resolver. Furthermore, our proposal decreases the number of DNS lookups and thus improves the privacy posture of the user towards the used recursive resolver. Comparing the security guarantees of traditional DNS to our proposal, we find that resolver-less DNS achieves at least the same security properties. In fact, it even improves the user's resilience against censorship through tampered DNS resolvers.
The DNS Anonymity Service combines a broadcast mechanism for popular DNS records with an anonymity network to conduct additional DNS lookups @cite_15 . Unlike our proposal, the DNS Anonymity Service causes additional network traffic for downloading the broadcast DNS records and suffers from additional network latency when the client resolves hostnames via the anonymity network. In total, the performance gains of this clean-slate approach are uncertain, as they depend on the user's browsing behavior. Furthermore, this approach does not integrate well into the existing DNS and requires additional Internet infrastructure to be deployed.
{ "cite_N": [ "@cite_15" ], "mid": [ "37081517" ], "abstract": [ "We propose a dedicated DNS Anonymity Service which protects users' privacy. The design consists of two building blocks: a broadcast scheme for the distribution of a \"top list\" of DNS hostnames, and low-latency Mixes for requesting the remaining hostnames unobservably. We show that broadcasting the 10,000 most frequently queried hostnames allows zero-latency lookups for over 80 of DNS queries at reasonable cost. We demonstrate that the performance of the previously proposed Range Queries approach severely suffers from high lookup latencies in a real-world scenario." ] }
1908.04574
2968985217
The resolution of domain names into IP addresses can significantly delay connection establishment on the web. Moreover, the common use of recursive DNS resolvers presents a privacy risk as they can closely monitor the user's browsing activities. In this paper, we present a novel HTTP response header allowing web servers to provide their clients with relevant DNS records. Our results indicate that this resolver-less DNS mechanism allows user agents to save the DNS lookup time for subsequent connection establishments. We find that this proposal saves at least 80ms per DNS lookup for the one percent of users having the longest round-trip times towards their recursive resolver. Furthermore, our proposal decreases the number of DNS lookups and thus improves the privacy posture of the user towards the used recursive resolver. Comparing the security guarantees of traditional DNS to our proposal, we find that resolver-less DNS achieves at least the same security properties. In fact, it even improves the user's resilience against censorship through tampered DNS resolvers.
DNS prefetching describes a popular performance optimization where browsers start resolving the hostnames of hyperlinks before the user clicks on them. However, research on this mechanism indicates severe privacy problems. For example, it was shown that the recursive resolver could even infer the search terms the user entered into a search engine based on DNS prefetching @cite_6 .
{ "cite_N": [ "@cite_6" ], "mid": [ "1512251782" ], "abstract": [ "A recent trend in optimizing Internet browsing speed is to optimistically pre-resolve (or prefetch) DNS resolutions. While the practical benefits of doing so are still being debated, this paper attempts to raise awareness that current practices could lead to privacy threats that are ripe for abuse. More specifically, although the adoption of several browser optimizations have already raised security concerns, we examine how prefetching amplifies disclosure attacks to a degree where it is possible to infer the likely search terms issued by clients using a given DNS resolver. The success of these inference attacks relies on the fact that prefetching inserts a significant amount of context into a resolver's cache, allowing an adversary to glean far more detailed insights than when this feature is turned off." ] }
1908.04727
2967991851
A chain in the unit @math -cube is a set @math such that for every @math and @math in @math we either have @math for all @math , or @math for all @math . We consider subsets, @math , of the unit @math -cube @math that satisfy $\mathrm{card}(A \cap C) \le k$ for all chains $C \subseteq [0,1]^n$, where @math is a fixed positive integer. We refer to such a set @math as a @math -antichain. We show that the @math -dimensional Hausdorff measure of a @math -antichain in @math is at most @math and that the bound is asymptotically sharp. Moreover, we conjecture that there exist @math -antichains in @math whose @math -dimensional Hausdorff measure equals @math and we verify the validity of this conjecture when @math .
When @math this conjecture is clearly true, and when @math it is observed in @cite_0 that the validity of Conjecture is an immediate consequence of the following well-known result. Recall that a singular function @math is a strictly decreasing function whose derivative equals zero almost everywhere.
{ "cite_N": [ "@cite_0" ], "mid": [ "2903599423" ], "abstract": [ "A set @math is called an antichain (resp. antichain) if it does not contain two distinct elements @math and @math satisfying @math (resp. @math ) for all @math . We show that the Hausdorff dimension of a weak antichain @math in the @math -dimensional unit cube @math is at most @math and that the @math -dimensional Hausdorff measure of @math is at most @math , which are the best possible bounds. This result is derived as a corollary of the following projection inequality , which may be of independent interest: The @math -dimensional Hausdorff measure of a (weak) antichain @math cannot exceed the sum of the @math -dimensional Hausdorff measures of the @math orthogonal projections of @math onto the facets of the unit @math -cube containing the origin. For the proof of this result we establish a discrete variant of the projection inequality applicable to weak antichains in @math and combine it with ideas from geometric measure theory." ] }
1908.04090
2967417443
Although many tools have been presented in the research literature of software visualization, there is little evidence of their adoption. To choose a suitable visualization tool, practitioners need to analyze various characteristics of tools such as their supported software concerns and level of maturity. Indeed, some tools can be prototypes for which the lifespan is expected to be short, whereas others can be fairly mature products that are maintained for a longer time. Although such characteristics are often described in papers, we conjecture that practitioners willing to adopt software visualizations require additional support to discover suitable visualization tools. In this paper, we elaborate on our efforts to provide such support. To this end, we systematically analyzed research papers in the literature of software visualization and curated a catalog of 70 available tools that employ various visualization techniques to support the analysis of multiple software concerns. We further encapsulate these characteristics in an ontology. VISON, our software visualization ontology, captures these semantics as concepts and relationships. We report on early results of usage scenarios that demonstrate how the ontology can support (i) developers to find suitable tools for particular development concerns, and (ii) researchers who propose new software visualization tools to identify a baseline tool for a controlled experiment.
Some studies examine software visualization tools, in particular, to create guidelines for designing and evaluating software visualizations. For example, Storey et al. @cite_52 examine 12 software visualization tools and propose a framework to evaluate software visualizations based on intent, information, presentation, interaction, and effectiveness. Sensalire et al. @cite_59 @cite_6 classify the features users require in software visualization tools. To this end, they elaborate on lessons learned from evaluating 20 software visualization tools and identify dimensions that can help design an evaluation and then analyze the results. In our investigation, we do not attempt to provide a comprehensive catalog of software visualization tools, but we seek to provide a means to boost software visualization discoverability.
{ "cite_N": [ "@cite_52", "@cite_59", "@cite_6" ], "mid": [ "2067172306", "2014707001", "2147787175" ], "abstract": [ "This paper proposes a framework for describing, comparing and understanding visualization tools that provide awareness of human activities in software development. The framework has several purposes -- it can act as a formative evaluation mechanism for tool designers; as an assessment tool for potential tool users; and as a comparison tool so that tool researchers can compare and understand the differences between various tools and identify potential new research areas. We use this framework to structure a survey of visualization tools for activity awareness in software development. Based on this survey we suggest directions for future research.", "We provide an evaluation of 15 software visualization tools applicable to corrective maintenance. The tasks supported as well as the techniques used are presented and graded based on the support level. By analyzing user acceptation of current tools, we aim to help developers to select what to consider, avoid or improve in their next releases. Tool users can also recognize what to broadly expect (and what not) from such tools, thereby supporting an informed choice for the tools evaluated here and for similar tools.", "Many software visualization (SoftVis) tools are continuously being developed by both researchers as well as software development companies. In order to determine if the developed tools are effective in helping their target users, it is desirable that they are exposed to a proper evaluation. Despite this, there is still lack of a general guideline on how these evaluations should be carried out and many of the tool developers perform very limited or no evaluation of their tools. Each person that carries out one evaluation, however, has experiences which, if shared, can guide future evaluators. This paper presents the lessons learned from evaluating over 20 SoftVis tools with over 90 users in five different studies spread on a period of over two years. The lessons covered include the selection of the tools, tasks, as well as evaluation participants. Other discussed points are related to the duration of the evaluation experiment, its location, the procedure followed when carrying out the experiment, as well as motivation of the participants. Finally, an analysis of the lessons learned is shown with the hope that these lessons will be of some assistance to future SoftVis tool evaluators." ] }
1908.04090
2967417443
Although many tools have been presented in the research literature of software visualization, there is little evidence of their adoption. To choose a suitable visualization tool, practitioners need to analyze various characteristics of tools such as their supported software concerns and level of maturity. Indeed, some tools can be prototypes for which the lifespan is expected to be short, whereas others can be fairly mature products that are maintained for a longer time. Although such characteristics are often described in papers, we conjecture that practitioners willing to adopt software visualizations require additional support to discover suitable visualization tools. In this paper, we elaborate on our efforts to provide such support. To this end, we systematically analyzed research papers in the literature of software visualization and curated a catalog of 70 available tools that employ various visualization techniques to support the analysis of multiple software concerns. We further encapsulate these characteristics in an ontology. VISON, our software visualization ontology, captures these semantics as concepts and relationships. We report on early results of usage scenarios that demonstrate how the ontology can support (i) developers to find suitable tools for particular development concerns, and (ii) researchers who propose new software visualization tools to identify a baseline tool for a controlled experiment.
Some other studies present taxonomies that characterize software visualization tools. Myers @cite_14 classifies software visualization tools based on whether they focus on code, data, or algorithms, and whether they are implemented in a static or dynamic fashion. Price et al. @cite_24 present a taxonomy of software visualization tools based on six dimensions: scope, content, form, method, interaction, and effectiveness. Maletic et al. @cite_62 propose a taxonomy of five dimensions to classify software visualization tools: tasks, audience, target, representation, and medium. Schots et al. @cite_54 extend this taxonomy by adding two dimensions: resource requirements of visualizations, and evidence of their utility. Merino et al. @cite_11 add ``needs'' as a main characteristic of software visualization tools. In their context, ``needs'' refers to the set of questions that are supported by software visualization tools. Although we consider these studies crucial for reflecting on the software visualization domain, we think that practitioners may require more comprehensive support to identify a suitable tool. In particular, we believe that the semantics of concepts and their relationships are often missing in taxonomies and other classifications. The use of an ontology enforces the analysis of these relationships, which can play an important role in identifying a suitable visualization tool.
{ "cite_N": [ "@cite_14", "@cite_62", "@cite_54", "@cite_24", "@cite_11" ], "mid": [ "2065131671", "2163225273", "2029863192", "2070921605", "2794606420" ], "abstract": [ "The Garnet research project, which is creating a set of tools to aid the design and implementation of highly interactive, graphical, direct-manipulation user interfaces, is discussed. Garnet also helps designers rapidly develop prototypes for different interfaces and explore various user-interface metaphors during early product design. It emphasizes easy specification of object behavior, often by demonstration and without programming. Garnet contains a number of different components grouped into two layers. The Garnet Toolkit (the lower layer) supplies the object-oriented graphics system and constraints, a set of techniques for specifying the objects' interactive behavior in response to the input devices, and a collection of interaction techniques. On top of the Garnet Toolkit layer are a number of tools to make creating user interfaces easier. The components of both layers are described. >", "A number of taxonomies to classify and categorize software visualization systems have been proposed in the past. Most notable are those presented by Price (1993) and Roman (1993). While these taxonomies are an accurate representation of software visualization issues, they are somewhat skewed with respect to current research areas on software visualization. We revisit this important work and propose a number of re-alignments with respect to addressing the software engineering tasks of large-scale development and maintenance. We propose a framework to emphasize the general tasks of understanding and analysis during development and maintenance of large-scale software systems. Five dimensions relating to the what, where, how, who, and why of software visualization make up this framework. The focus of this work is not so much as to classify software visualization system, but to point out the need for matching the method with the task. Finally, a number of software visualization systems are examined under our framework to highlight the particular problems each addresses.", "Visualization approaches support stakeholders in a variety of tasks. However, they are spread in the literature and their information is usually not clearly organized, classified and categorized, which makes them hard to be found and used in practice. This paper presents the use of a task-oriented framework in the context of a characterization study of visualizations that provide support for software reuse tasks. Such framework was extended in order to capture more detailed information that may be useful for assessing the suitability of a particular visualization. Besides enabling a better organization of the findings, the use of the extended framework allows to identify aspects that lack more support, indicating opportunities for researchers on software reuse and software visualization.", "In the early 1980s researchers began building systems to visualize computer programs and algorithms using newly emerging graphical workstation technology. After more than a decade of advances in interface technology, a large variety of systems has been built and many different aspects of the visualization process have been investigated. 
As in any new branch of a science, a taxonomy is required so that researchers can use a common language to discuss the merits of existing systems, classify new ones (to see if they really are new) and identify gaps which suggest promising areas for further development. Several authors have suggested taxonomies for these visualization systems, but they have been ad hoc and have relied on only a handful of characteristics to describe a large and diverse area of work. Another major drawback of these taxonomies is their inability to accommodate expansion: there is no clear way to add new categories when the need arises. In this paper we present a detailed taxonomy of systems for the visualization of computer software. This taxonomy was derived from an established black-box model of software and is composed of a hierarchy with six broad categories at the top and over 30 leaf-level nodes at four hierarchical levels. We describe 12 important systems in detail and apply the taxonomy to them in order to illustrate its features. After discussing each system in this context, we analyse its coverage of the categories and present a research agenda for future work in the area.", "" ] }
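To make the point about explicit relationship semantics concrete, here is a toy sketch of tool metadata stored as subject–predicate–object triples with a tiny pattern-matching query, in the spirit of what an ontology such as VISON enables. The concept names, tool names and relations below are invented for illustration and are not the actual VISON vocabulary.

```python
# Toy triple store: (subject, predicate, object). All names are hypothetical.
TRIPLES = {
    ("ToolA", "supportsConcern", "PerformanceAnalysis"),
    ("ToolA", "usesTechnique", "Treemap"),
    ("ToolA", "hasMaturity", "Prototype"),
    ("ToolB", "supportsConcern", "PerformanceAnalysis"),
    ("ToolB", "hasMaturity", "Maintained"),
}

def query(pattern):
    """Match a (s, p, o) pattern where None acts as a wildcard."""
    s, p, o = pattern
    return [t for t in TRIPLES
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

def mature_tools_for(concern):
    """Tools that both support the concern and are marked as maintained."""
    supporting = {s for s, _, _ in query((None, "supportsConcern", concern))}
    maintained = {s for s, _, _ in query((None, "hasMaturity", "Maintained"))}
    return sorted(supporting & maintained)

if __name__ == "__main__":
    print(mature_tools_for("PerformanceAnalysis"))   # -> ['ToolB']
```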
1908.04008
2967406324
Batch Normalization (BN) (Ioffe and Szegedy 2015) normalizes the features of an input image via statistics of a batch of images, and this batch information is considered as batch noise that BN brings to the features of an instance. We offer the point of view that a self-attention mechanism can help regulate the batch noise by enhancing instance-specific information. Based on this view, we propose combining BN with a self-attention mechanism to adjust the batch noise and give an attention-based version of BN called Instance Enhancement Batch Normalization (IEBN), which recalibrates channel information by a simple linear transformation. IEBN outperforms BN with a light parameter increment in various visual tasks, universally for different network structures and benchmark data sets. Besides, even under the attack of synthetic noise, IEBN can still stabilize network training with good generalization. The code of IEBN is available at this https URL
The normalization layer is an important component of a deep network. Multiple normalization methods have been proposed for different tasks. Batch Normalization @cite_30 , which normalizes the input by mini-batch statistics, has been a foundation of visual recognition tasks @cite_7 . Instance Normalization @cite_1 performs BN-like normalization per instance and is widely used in generative models @cite_15 @cite_4 . There are several variants of BN, such as Conditional Batch Normalization @cite_27 for Visual Question Answering, Group Normalization @cite_25 and Batch Renormalization @cite_23 for small-batch-size training, Adaptive Batch Normalization @cite_28 for domain adaptation, and Switchable Normalization @cite_10 , which learns to select different normalizers for different normalization layers. Among them, Conditional Batch Norm and Batch Renorm adjust the trainable parameters in the reparameterization step of BN. Both of them are most related to our work, which modifies the trainable scaling parameter.
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_7", "@cite_28", "@cite_1", "@cite_27", "@cite_23", "@cite_15", "@cite_10", "@cite_25" ], "mid": [ "2949117887", "2962793481", "2194775991", "", "", "2963245493", "2588610957", "2331128040", "2811135961", "2795783309" ], "abstract": [ "Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9 top-5 validation error (and 4.8 test error), exceeding the accuracy of human raters.", "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. 
Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "", "", "It is commonly assumed that language refers to high-level visual concepts while leaving low-level visual processing unaffected. This view dominates the current literature in computational models for language-vision tasks, where visual and linguistic inputs are mostly processed independently before being fused into a single representation. In this paper, we deviate from this classic pipeline and propose to modulate the by a linguistic input. Specifically, we introduce Conditional Batch Normalization (CBN) as an efficient mechanism to modulate convolutional feature maps by a linguistic embedding. We apply CBN to a pre-trained Residual Network (ResNet), leading to the MODulatEd ResNet ( ) architecture, and show that this significantly improves strong baselines on two visual question answering tasks. Our ablation study confirms that modulating from the early stages of the visual processing is beneficial.", "Batch Normalization is quite effective at accelerating and improving the training of deep models. However, its effectiveness diminishes when the training minibatches are small, or do not consist of independent samples. We hypothesize that this is due to the dependence of model layer inputs on all the examples in the minibatch, and different activations being produced between training and inference. We propose Batch Renormalization, a simple and effective extension to ensure that the training and inference models generate the same outputs that depend on individual examples rather than the entire minibatch. Models trained with Batch Renormalization perform substantially better than batchnorm when training with small or non-i.i.d. minibatches. At the same time, Batch Renormalization retains the benefits of batchnorm such as insensitivity to initialization and training efficiency.", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.", "We address a learning-to-normalize problem by proposing Switchable Normalization (SN), which learns to select different normalizers for different normalization layers of a deep neural network. SN employs three distinct scopes to compute statistics (means and variances) including a channel, a layer, and a minibatch. 
SN switches between them by learning their importance weights in an end-to-end manner. It has several good properties. First, it adapts to various network architectures and tasks (see Fig.1). Second, it is robust to a wide range of batch sizes, maintaining high performance even when small minibatch is presented (e.g. 2 images GPU). Third, SN does not have sensitive hyper-parameter, unlike group normalization that searches the number of groups as a hyper-parameter. Without bells and whistles, SN outperforms its counterparts on various challenging benchmarks, such as ImageNet, COCO, CityScapes, ADE20K, and Kinetics. Analyses of SN are also presented. We hope SN will help ease the usage and understand the normalization techniques in deep learning. The code of SN has been made available in this https URL.", "Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems --- BN's error increases rapidly when the batch size becomes smaller, caused by inaccurate batch statistics estimation. This limits BN's usage for training larger models and transferring features to computer vision tasks including detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes within each group the mean and variance for normalization. GN's computation is independent of batch sizes, and its accuracy is stable in a wide range of batch sizes. On ResNet-50 trained in ImageNet, GN has 10.6 lower error than its BN counterpart when using a batch size of 2; when using typical batch sizes, GN is comparably good with BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can outperform its BN-based counterparts for object detection and segmentation in COCO, and for video classification in Kinetics, showing that GN can effectively replace the powerful BN in a variety of tasks. GN can be easily implemented by a few lines of code in modern libraries." ] }
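Several of the cited abstracts above hinge on where the normalization statistics are computed: across the whole batch for Batch Normalization versus within channel groups of a single sample for Group Normalization. The sketch below makes that difference explicit with plain tensor operations; the tensor sizes and the group count are arbitrary illustrative choices, not values taken from any of the papers.

```python
import torch

def batch_norm_stats(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Normalize with statistics shared across the batch: one (mean, var)
    per channel, computed over (N, H, W). This batch dependence is what
    degrades BN when the batch is small or not i.i.d."""
    mean = x.mean(dim=(0, 2, 3), keepdim=True)
    var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
    return (x - mean) / torch.sqrt(var + eps)

def group_norm_stats(x: torch.Tensor, groups: int, eps: float = 1e-5) -> torch.Tensor:
    """Normalize each sample independently: channels are split into groups
    and (mean, var) are computed per sample and per group, so the result
    does not depend on the batch size at all."""
    n, c, h, w = x.shape
    g = x.reshape(n, groups, c // groups, h, w)
    mean = g.mean(dim=(2, 3, 4), keepdim=True)
    var = g.var(dim=(2, 3, 4), unbiased=False, keepdim=True)
    return ((g - mean) / torch.sqrt(var + eps)).reshape(n, c, h, w)

x = torch.randn(2, 8, 4, 4)            # a deliberately tiny batch
bn_out = batch_norm_stats(x)
gn_out = group_norm_stats(x, groups=4)
# GN output for sample 0 is unchanged if sample 1 is replaced; BN output is not.
print(bn_out.shape, gn_out.shape)
```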
1908.04008
2967406324
Batch Normalization (BN) (Ioffe and Szegedy 2015) normalizes the features of an input image via the statistics of a batch of images, and this batch information is regarded as batch noise that BN brings to the features of an individual instance. We offer the point of view that a self-attention mechanism can help regulate this batch noise by enhancing instance-specific information. Based on this view, we propose combining BN with a self-attention mechanism to adjust the batch noise, giving an attention-based version of BN called Instance Enhancement Batch Normalization (IEBN), which recalibrates channel information by a simple linear transformation. IEBN outperforms BN with only a light parameter increment across various visual tasks, network structures, and benchmark data sets. Moreover, even under the attack of synthetic noise, IEBN can still stabilize network training with good generalization. The code of IEBN is available at this https URL
The cooperation of BN and attention dates back to Visual Question Answering (VQA), which takes an image and an image-related question as input and outputs the answer to the question. For this task, Conditional Batch Norm @cite_27 was proposed to influence the feature extraction of an image via features collected from the question. A Recurrent Neural Network (RNN) extracts the features from the question, while a Convolutional Neural Network (CNN), a pre-trained ResNet, performs feature selection on the image. The shift and scale parameters of the BN layers in the pre-trained ResNet are conditioned on the features extracted from the question, such that the feature selection of the CNN is question-referenced and the overall network can handle different reasoning tasks (a minimal sketch of this conditioning mechanism is given below). Note that for VQA, the question features can be viewed as external attention guiding the training of the overall network, since they are external with respect to the image. In our work, the proposed IEBN can also be viewed as a kind of Conditional Batch Norm, but the training of the network is guided by internal attention, since a self-attention mechanism extracts the information from the image itself.
{ "cite_N": [ "@cite_27" ], "mid": [ "2963245493" ], "abstract": [ "It is commonly assumed that language refers to high-level visual concepts while leaving low-level visual processing unaffected. This view dominates the current literature in computational models for language-vision tasks, where visual and linguistic inputs are mostly processed independently before being fused into a single representation. In this paper, we deviate from this classic pipeline and propose to modulate the by a linguistic input. Specifically, we introduce Conditional Batch Normalization (CBN) as an efficient mechanism to modulate convolutional feature maps by a linguistic embedding. We apply CBN to a pre-trained Residual Network (ResNet), leading to the MODulatEd ResNet ( ) architecture, and show that this significantly improves strong baselines on two visual question answering tasks. Our ablation study confirms that modulating from the early stages of the visual processing is beneficial." ] }
1908.04036
2968366816
This work identifies the fundamental limits of cache-aided coded multicasting in the presence of the well-known 'worst-user' bottleneck. This stems from the presence of receiving users with uneven channel capacities, which often forces the rate of transmission of each multicasting message to be reduced to that of the slowest user. This bottleneck, which can be detrimental in general wireless broadcast settings, motivates the analysis of coded caching over a standard Single-Input-Single-Output (SISO) Broadcast Channel (BC) with K cache-aided receivers, each with a generally different channel capacity. For this setting, we design a communication algorithm that is based on superposition coding that capitalizes on the realization that the user with the worst channel may not be the real bottleneck of communication. We then proceed to provide a converse that shows the algorithm to be near optimal, identifying the fundamental limits of this setting within a multiplicative factor of 4. Interestingly, the result reveals that, even if several users are experiencing channels with reduced capacity, the system can achieve the same optimal delivery time that would be achievable if all users enjoyed maximal capacity.
The importance of the uneven-channel bottleneck in coded caching has been acknowledged in a large number of recent works that seek to understand and ameliorate this limitation @cite_7 @cite_20 @cite_25 @cite_31 @cite_28 @cite_12 @cite_18 @cite_3 @cite_21 @cite_15 @cite_30 @cite_23 @cite_14 @cite_9 @cite_19 @cite_17 . For example, reference @cite_7 focuses on the uneven link-capacity SISO BC where each user experiences a distinct channel strength, and proposes algorithms that outperform the naive implementation of the algorithm of @cite_24 whereby each coded message is transmitted at a rate equal to the rate of the worst user whose message appears in the corresponding XOR operation. Under a similar setting, the work in @cite_31 considered feedback-aided user selection that can maximize the sum-rate as well as increase a fairness criterion that ensures that each user receives their requested file in a timely manner. In the related context of the erasure BC where users have uneven probabilities of erasures, references @cite_28 and @cite_12 showed how an erasure at some users can be exploited as side information at the remaining users in order to increase system performance. Related work can also be found in @cite_18 @cite_3 @cite_21 .
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_14", "@cite_7", "@cite_15", "@cite_28", "@cite_9", "@cite_21", "@cite_3", "@cite_24", "@cite_19", "@cite_23", "@cite_12", "@cite_31", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2963615879", "2806958548", "2767375015", "2535264554", "2740920677", "2515772480", "2885217635", "2963352975", "2745007501", "2106248279", "2921187562", "2963664701", "2756122816", "2963643069", "2598021007", "2963745869", "2944365782" ], "abstract": [ "We consider the cache-aided MISO broadcast channel (BC) in which a multi-antenna transmitter serves @math single-antenna receivers, each equipped with a cache memory. The transmitter has access to partial knowledge of the channel state information. For a symmetric setting, in terms of channel strength levels, partial channel knowledge levels and cache sizes, we characterize the generalized degrees of freedom (GDoF) up to a constant multiplicative factor. The achievability scheme exploits the interplay between spatial multiplexing gains and coded-multicasting gain. On the other hand, a cut-set-based argument in conjunction with a GDoF outer bound for a parallel MISO BC under channel uncertainty is used for the converse. We further show that the characterized order-optimal GDoF is also attained in a decentralized setting, where no coordination is required for content placement in the caches.", "We derive upper bounds on the rate-memory trade-off of cache-aided erasure broadcast channels with K w weak receivers and K s strong receivers. We follow a decentralized placement scenario, where coordination is not needed prior to the delivery phase. We study two setups: a standard scenario without eavesdropper and a wiretap scenario with an external eavesdropper. For both scenarios, we propose joint cache-channel coding schemes that efficiently exploit the cache contents and take into consideration the users' channel characteristics at the same time. We show that the decentralized placement strategy causes only a small increase in delivery rate compared to centralized strategy. Similarly, when cache sizes are moderate, the rate is increased only slightly by securing the communication against external eavesdroppers. This is not the case when cache memories are small and large.", "A single cell downlink scenario is considered where a multiple-antenna base station delivers contents to multiple cache-enabled user terminals. Using the ideas from multi-server coded caching (CC) scheme developed for wired networks, a joint design of CC and general multicast beamforming is proposed to benefit from spatial multiplexing gain, improved interference management and the global CC gain, simultaneously. Utilizing the multiantenna multicasting opportunities provided by the CC technique, the proposed method is shown to perform well over the entire SNR region, including the low SNR regime, unlike the existing schemes based on zero forcing (ZF). Instead of nulling the interference at users not requiring a specific coded message, general multicast beamforming strategies are employed, optimally balancing the detrimental impact of both noise and inter-stream interference from coded messages transmitted in parallel. 
The proposed scheme is shown to provide the same degrees-of-freedom at high SNR as the state-of-art methods and, in general, to perform significantly better than several base-line schemes including, the joint ZF and CC, max-min fair multicasting with CC, and basic unicasting with multiuser beamforming.", "Recently, the video traffic over wireless network has been growing dramatically, which causes network congestion during peak hours. Coded caching has been considered as a promising technique to release this burden by providing local caching gain and global caching gain simultaneously. A placement delivery array (PDA) design was formulated to characterize the centralized coded caching schemes. In this paper, the PDA based coded caching design is employed in two aspects. Firstly, we utilize it to characterize the users' priority level. An algorithm to generate a PDA to take users' priority level into consideration is proposed. Secondly, we implement it for wireless video-streaming over wireless network by incorporating rateless codes into it. Further, we establish a test-bed to validate the aforementioned idea. Our results indicate that, the video transmission capability can be significantly improved by exploiting the coded multicasting opportunities created by coded caching scheme, and the effective packet loss recovery can be guaranteed by the rateless transmission. Meanwhile, it is shown that the PDA based coded caching video transmission scheme can be easily modified to support different users' priority level.", "This work explores cache-aided interference management in the absence of channel state information at the transmitters (CSIT), focusing on the setting with K transmitter receiver pairs endowed with caches, where each receiver k is connected to transmitter k via a direct link with normalized capacity 1, and to any other transmitter via a cross link with normalized capacity t ≤ 1. In this setting, we explore how a combination of pre-caching at transmitters and receivers, together with interference enhancement techniques, can a) partially counter the lack of CSIT, and b) render the network self-sufficient, in the sense that the transmitters need not receive additional data after pre-caching. Toward this we present new schemes that blindly harness topology and transmitter-and-receiver caching, to create separate streams, each serving many receivers at a time. Key to the approach here is a combination of rate-splitting, interference enhancement and coded caching.", "We study a content delivery problem in a @math -user erasure broadcast channel such that a content providing server wishes to deliver requested files to users, each equipped with a cache of a finite size. Assuming that the transmitter has state feedback and user caches can be filled during off-peak hours reliably by the decentralized content placement, we characterize the achievable rate region as a function of the memory sizes and the erasure probabilities for some special cases. The proposed delivery scheme, based on the broadcasting scheme by Wang and , exploits the receiver side information established during the placement phase. Our results can be extended to the centralized content placement as well as multi-antenna broadcast channels with state feedback.", "A single cell downlink scenario is considered where a multiple-antenna base station delivers contents to cache-enabled user terminals. 
Using the ideas from multi-server coded caching (CC) scheme developed for wired networks, a joint design of CC and general multicast beamforming is considered to benefit from spatial multiplexing gain, improved interference management and the global CC gain, simultaneously. The proposed multicast beamforming strategies utilize the multiantenna multicasting opportunities provided by the CC technique and optimally balance the detrimental impact of both noise and inter-stream interference from coded messages transmitted in parallel. The proposed scheme is shown to provide the same degrees-of-freedom at high SNR as the state-of-art methods and, in general, to perform significantly better than several baseline schemes including, the joint zero forcing and CC, max-min fair multicasting with CC, and basic unicasting with multiuser beamforming,", "An erasure broadcast network is considered with two disjoint sets of receivers: a set of weak receivers with all-equal erasure probabilities and equal cache sizes and a set of strong receivers with all-equal erasure probabilities and no cache memories. Lower and upper bounds are presented on the capacity-memory tradeoff of this network (the largest rate at which messages can be reliably communicated for given cache sizes). The lower bound is achieved by means of a joint cache-channel coding scheme and significantly improves over traditional schemes based on the separate cache-channel coding . In particular, it is shown that the joint cache-channel coding offers new global caching gains that scale with the number of strong receivers in the network. The upper bound uses bounding techniques from degraded broadcast channels and introduces an averaging argument to capture the fact that the contents of the cache memories are designed before knowing users’ demands. The derived upper bound is valid for all stochastically degraded broadcast channels. The lower and upper bounds match for a single weak receiver (and any number of strong receivers) when the cache size does not exceed a certain threshold. Improved bounds are presented for the special case of a single weak and a single strong receiver with two files and the bounds are shown to match over a large range of cache sizes.", "Motivated by recent efforts to harness millimeter-wave (mmWave) bands, known to have high outage probabilities, we explore a K-user parallel packet-erasure broadcast channel that consists of orthogonal subchannels prone to packet-erasures. Our main result is two-fold. First, in the homogeneous channel where all subchannels have the same erasure probability, we show that the separation principle holds, i.e., coding across subchannels provides no gain. Second, in the heterogeneous channel where the subchannels have different erasure probabilities, we devise a scheme that employs coding across subchannels and show that the principle fails to hold, i.e., coding across subchannels provides a gain. Inspired by this finding, we demonstrate our scheme to be effective in harnessing the mmWave bands. Compared to the current approach in the 4G systems which allocates subchannels to users exclusively, we show that our scheme offers a huge gain. We find the gain to be significant in scenarios where the erasure probabilities are largely different, and importantly to increase with the growth of K. 
Our result calls for joint coding schemes in future wireless systems to meet growing mobile data demands.", "Caching is a technique to reduce peak traffic rates by prefetching popular content into memories at the end users. Conventionally, these memories are used to deliver requested content in part from a locally cached copy rather than through the network. The gain offered by this approach, which we term local caching gain, depends on the local cache size (i.e., the memory available at each individual user). In this paper, we introduce and exploit a second, global, caching gain not utilized by conventional caching schemes. This gain depends on the aggregate global cache size (i.e., the cumulative memory available at all users), even though there is no cooperation among the users. To evaluate and isolate these two gains, we introduce an information-theoretic formulation of the caching problem focusing on its basic structure. For this setting, we propose a novel coded caching scheme that exploits both local and global caching gains, leading to a multiplicative improvement in the peak rate compared with previously known schemes. In particular, the improvement can be on the order of the number of users in the network. In addition, we argue that the performance of the proposed scheme is within a constant factor of the information-theoretic optimum for all values of the problem parameters.", "We study downlink beamforming in a single-cell network with a multi-antenna base station (BS) serving cache-enabled users. For a given common rate of the files in the system, we first formulate the minimum transmit power with beamforming at the BS as a non-convex optimization problem. This corresponds to a multiple multicast problem, to which a stationary solution can be efficiently obtained through successive convex approximation (SCA). It is observed that the complexity of the problem grows exponentially with the number of subfiles delivered to each user in each time slot, which itself grows exponentially with the number of users in the system. Therefore, we introduce a low-complexity alternative through time-sharing that limits the number of subfiles that can be received by a user in each time slot. It is shown through numerical simulations that, the reduced-complexity beamforming scheme has minimal performance gap compared to transmitting all the subfiles jointly, and outperforms the state-of-the-art low-complexity scheme at all SNR and rate values with sufficient spatial degrees of freedom, and in the high SNR high rate regime when the number of spatial degrees of freedom is limited.", "We investigate the potentials of applying the coded caching paradigm in wireless networks. In order to do this, we investigate physical layer schemes for downlink transmission from a multiantenna transmitter to several cache-enabled users. As the baseline scheme, we consider employing coded caching on the top of max–min fair multicasting, which is shown to be far from optimal at high-SNR values. Our first proposed scheme, which is near-optimal in terms of DoF, is the natural extension of multiserver coded caching to Gaussian channels. As we demonstrate, its finite SNR performance is not satisfactory, and thus we propose a new scheme in which the linear combination of messages is implemented in the finite field domain, and the one-shot precoding for the MISO downlink is implemented in the complex field. 
While this modification results in the same near-optimal DoF performance, we show that this leads to significant performance improvement at finite SNR. Finally, we extend our scheme to the previously considered cache-enabled interference channels, and moreover we provide an ergodic rate analysis of our scheme. Our results convey the important message that although directly translating schemes from the network coding ideas to wireless networks may work well at high-SNR values, careful modifications need to be considered for acceptable finite SNR performance.", "A cache-aided broadcast network is studied, in which a server delivers contents to a group of receivers over a packet erasure broadcast channel. The receivers are divided into two sets with regards to their channel qualities: the weak and the strong receivers, where all the weak receivers have statistically worse channel qualities than all the strong receivers. The weak receivers, in order to compensate for the high erasure probability they encounter over the channel, are equipped with cache memories of equal size, while the receivers in the strong set have no caches. Data can be pre-delivered to the weak receivers’ caches over the off-peak traffic period before the receivers reveal their demands. Allowing arbitrary erasure probabilities for the weak and strong receivers, a joint caching and channel coding scheme, which divides each file into several subfiles, and applies a different caching and delivery scheme for each subfile, is proposed. It is shown that all the receivers, even those without any cache memories, benefit from the presence of caches across the network. An information theoretic tradeoff between the cache size and the achievable rate is formulated. It is shown that the proposed scheme improves upon the state-of-the-art in terms of the achievable tradeoff.", "The performance of existing coded caching schemes is sensitive to the worst channel quality, when applied to wireless channels. In this paper, we address this limitation in the following manner: in short-term, we allow transmissions to subsets of users with good channel quality, avoiding users with fades, while in long-term we ensure fairness across the different users. Our online delivery scheme combines (i) joint scheduling and power control for the fading broadcast channel, and (ii) congestion control for ensuring the optimal long-term average performance. By restricting the caching operations to decentralized coded caching proposed in the literature, we prove that our proposed scheme has near-optimal overall performance with respect to the long-term alpha fairness performance. By tuning the coefficient alpha, the operator can differentiate the user performance in terms of video delivery rates achievable by coded caching. We demonstrate via simulations that our scheme outperforms standard coded caching and unicast opportunistic scheduling, which are identified as special cases of our general framework.", "We consider the content delivery problem in a fading multi-input single-output channel with cache-aided users. We are interested in the scalability of the equivalent content delivery rate when the number of users, @math , is large. Analytical results show that, using coded caching and wireless multicasting, without channel state information at the transmitter, linear scaling of the content delivery rate with respect to @math can be achieved in some different ways. 
First, if the multicast transmission spans over @math independent sub-channels, e.g., in quasi-static fading if @math , and in block fading or multi-carrier systems if @math , linear scaling can be obtained, when the product of the number of transmit antennas and the number of sub-channels scales logarithmically with @math . Second, even with a fixed number of antennas, we can achieve the linear scaling with a threshold-based user selection requiring only one-bit feedbacks from the users. When CSIT is available, we propose a mixed strategy that combines spatial multiplexing and multicasting. Numerical results show that, by optimizing the power split between spatial multiplexing and multicasting, we can achieve a significant gain of the content delivery rate with moderate cache size.", "We explore the performance of coded caching in a SISO BC setting where some users have higher link capacities than others. Focusing on a binary and fixed topological model where strong links have a fixed normalized capacity 1, and where weak links have reduced normalized capacity T < 1, we identify — as a function of the cache size and T — the optimal throughput performance, within a factor of at most 8. The transmission scheme that achieves this performance, employs a simple form of interference enhancement, and exploits the property that weak links attenuate interference, thus allowing for multicasting rates to remain high even when involving weak users. This approach ameliorates the negative effects of uneven topology in multicasting, now allowing all users to achieve the optimal performance associated to T = 1, even if τ is approximately as low as T ≥ 1 − (1 − w)g where g is the coded-caching gain, and where w is the fraction of users that are weak. This leads to the interesting conclusion that for coded multicasting, the weak users need not bring down the performance of all users, but on the contrary to a certain extent, the strong users can lift the performance of the weak users without any penalties on their own performance. Furthermore for smaller ranges of τ, we also see that achieving the near-optimal performance comes with the advantage that the strong users do not suffer any additional delays compared to the case where T = 1.", "Coded caching can be applied in wireless multi-antenna communications by multicast beamforming coded data chunks to carefully selected user groups and using the existing file fragments in user caches to decode the desired files at each user. However, the number of packets a file should be split into, known as subpacketization, grows exponentially with the network size. We provide a new scheme, which enables the level of subpacketization to be selected freely among a set of predefined values depending on basic network parameters such as antenna and user count. A simple efficiency index is also proposed as a performance indicator at various subpacketization levels. The numerical examples demonstrate that larger subpacketization generally results in better efficiency index and higher symmetric rate, while smaller subpacketization incurs significant loss in the achievable rate. This enables more efficient caching schemes, tailored to the available computational and power resources." ] }
1908.04036
2968366816
This work identifies the fundamental limits of cache-aided coded multicasting in the presence of the well-known 'worst-user' bottleneck. This stems from the presence of receiving users with uneven channel capacities, which often forces the rate of transmission of each multicasting message to be reduced to that of the slowest user. This bottleneck, which can be detrimental in general wireless broadcast settings, motivates the analysis of coded caching over a standard Single-Input-Single-Output (SISO) Broadcast Channel (BC) with K cache-aided receivers, each with a generally different channel capacity. For this setting, we design a communication algorithm that is based on superposition coding that capitalizes on the realization that the user with the worst channel may not be the real bottleneck of communication. We then proceed to provide a converse that shows the algorithm to be near optimal, identifying the fundamental limits of this setting within a multiplicative factor of 4. Interestingly, the result reveals that, even if several users are experiencing channels with reduced capacity, the system can achieve the same optimal delivery time that would be achievable if all users enjoyed maximal capacity.
The uneven-capacity bottleneck was also studied in the presence of multiple transmit antennas @cite_25 @cite_26 . Reference @cite_25 exploited transmit diversity to ameliorate the impact of the worst-user capacity, and showed that employing @math transmit antennas can allow for a transmission sum-rate that scales with @math . Similarly, the work in @cite_26 considered multiple transmit and multiple receive antennas, and designed topology-dependent cache-placement to ameliorate the worst-user effect.
{ "cite_N": [ "@cite_26", "@cite_25" ], "mid": [ "2805337448", "2598021007" ], "abstract": [ "We study the problem of cache-aided communication for cellular networks with multi-user and multiple antennas at finite signal-to-noise ratio. Users are assumed to have non-symmetric links, modeled by wideband fading channels. We show that the problem can be formulated as a linear program, whose solution provides a joint cache allocation along with pre-fetching and fetching schemes that minimize the duration of the communication in the delivery phase. The suggested scheme uses zero-forcing and cached interference subtraction, and hence, allows each user to be served at the rate of its own channel. Thus, this scheme is better than the previously published schemes that are compromised by the poorest user in the communication group. We also consider a special case of the parameters for which we can derive a closed form solution and formulate the optimal power, rate, and cache optimization. This special case shows that the gain of MIMO coded caching goes beyond the throughput. In particular, it is shown that in this case, the cache is used to balance the users such that fairness and throughput are no longer contradicting. More specifically, in this case, strict fairness is achieved jointly with maximizing the network throughput.", "We consider the content delivery problem in a fading multi-input single-output channel with cache-aided users. We are interested in the scalability of the equivalent content delivery rate when the number of users, @math , is large. Analytical results show that, using coded caching and wireless multicasting, without channel state information at the transmitter, linear scaling of the content delivery rate with respect to @math can be achieved in some different ways. First, if the multicast transmission spans over @math independent sub-channels, e.g., in quasi-static fading if @math , and in block fading or multi-carrier systems if @math , linear scaling can be obtained, when the product of the number of transmit antennas and the number of sub-channels scales logarithmically with @math . Second, even with a fixed number of antennas, we can achieve the linear scaling with a threshold-based user selection requiring only one-bit feedbacks from the users. When CSIT is available, we propose a mixed strategy that combines spatial multiplexing and multicasting. Numerical results show that, by optimizing the power split between spatial multiplexing and multicasting, we can achieve a significant gain of the content delivery rate with moderate cache size." ] }
1908.04036
2968366816
This work identifies the fundamental limits of cache-aided coded multicasting in the presence of the well-known 'worst-user' bottleneck. This stems from the presence of receiving users with uneven channel capacities, which often forces the rate of transmission of each multicasting message to be reduced to that of the slowest user. This bottleneck, which can be detrimental in general wireless broadcast settings, motivates the analysis of coded caching over a standard Single-Input-Single-Output (SISO) Broadcast Channel (BC) with K cache-aided receivers, each with a generally different channel capacity. For this setting, we design a communication algorithm that is based on superposition coding that capitalizes on the realization that the user with the worst channel may not be the real bottleneck of communication. We then proceed to provide a converse that shows the algorithm to be near optimal, identifying the fundamental limits of this setting within a multiplicative factor of 4. Interestingly, the result reveals that, even if several users are experiencing channels with reduced capacity, the system can achieve the same optimal delivery time that would be achievable if all users enjoyed maximal capacity.
In a related line of work, the papers @cite_15 and @cite_30 studied the cache-aided topological interference channel where @math cache-aided transmitters are connected to @math cache-aided receivers, and each transmitter is connected to one receiver via a direct "strong" link and to each of the other receivers via "weak" links. Under the assumption of no channel state information at the transmitters (CSIT), the authors showed how the lack of CSIT can be ameliorated by exploiting the topology of the channel and the multicast nature of the transmissions.
{ "cite_N": [ "@cite_30", "@cite_15" ], "mid": [ "2963615879", "2740920677" ], "abstract": [ "We consider the cache-aided MISO broadcast channel (BC) in which a multi-antenna transmitter serves @math single-antenna receivers, each equipped with a cache memory. The transmitter has access to partial knowledge of the channel state information. For a symmetric setting, in terms of channel strength levels, partial channel knowledge levels and cache sizes, we characterize the generalized degrees of freedom (GDoF) up to a constant multiplicative factor. The achievability scheme exploits the interplay between spatial multiplexing gains and coded-multicasting gain. On the other hand, a cut-set-based argument in conjunction with a GDoF outer bound for a parallel MISO BC under channel uncertainty is used for the converse. We further show that the characterized order-optimal GDoF is also attained in a decentralized setting, where no coordination is required for content placement in the caches.", "This work explores cache-aided interference management in the absence of channel state information at the transmitters (CSIT), focusing on the setting with K transmitter receiver pairs endowed with caches, where each receiver k is connected to transmitter k via a direct link with normalized capacity 1, and to any other transmitter via a cross link with normalized capacity t ≤ 1. In this setting, we explore how a combination of pre-caching at transmitters and receivers, together with interference enhancement techniques, can a) partially counter the lack of CSIT, and b) render the network self-sufficient, in the sense that the transmitters need not receive additional data after pre-caching. Toward this we present new schemes that blindly harness topology and transmitter-and-receiver caching, to create separate streams, each serving many receivers at a time. Key to the approach here is a combination of rate-splitting, interference enhancement and coded caching." ] }
1908.04036
2968366816
This work identifies the fundamental limits of cache-aided coded multicasting in the presence of the well-known 'worst-user' bottleneck. This stems from the presence of receiving users with uneven channel capacities, which often forces the rate of transmission of each multicasting message to be reduced to that of the slowest user. This bottleneck, which can be detrimental in general wireless broadcast settings, motivates the analysis of coded caching over a standard Single-Input-Single-Output (SISO) Broadcast Channel (BC) with K cache-aided receivers, each with a generally different channel capacity. For this setting, we design a communication algorithm that is based on superposition coding that capitalizes on the realization that the user with the worst channel may not be the real bottleneck of communication. We then proceed to provide a converse that shows the algorithm to be near optimal, identifying the fundamental limits of this setting within a multiplicative factor of 4. Interestingly, the result reveals that, even if several users are experiencing channels with reduced capacity, the system can achieve the same optimal delivery time that would be achievable if all users enjoyed maximal capacity.
Recently, significant effort has been made toward understanding the behavior of coded caching in the finite Signal-to-Noise Ratio (SNR) regime with realistic (and thus often uneven) channel qualities. In this direction, the work in @cite_23 showed that a single-stream coded caching message beamformed by an appropriate transmit vector can outperform some existing multi-stream coded caching methods in the low-SNR regime, while references @cite_14 @cite_9 (see also @cite_19 ) revealed the importance of jointly considering caching with multicast beamformer design. Moreover, the work in @cite_17 studied the connection between rate and subpacketization in the multi-antenna environment, accounting for the unevenness naturally brought about by fading.
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_19", "@cite_23", "@cite_17" ], "mid": [ "2767375015", "2885217635", "2921187562", "2963664701", "2944365782" ], "abstract": [ "A single cell downlink scenario is considered where a multiple-antenna base station delivers contents to multiple cache-enabled user terminals. Using the ideas from multi-server coded caching (CC) scheme developed for wired networks, a joint design of CC and general multicast beamforming is proposed to benefit from spatial multiplexing gain, improved interference management and the global CC gain, simultaneously. Utilizing the multiantenna multicasting opportunities provided by the CC technique, the proposed method is shown to perform well over the entire SNR region, including the low SNR regime, unlike the existing schemes based on zero forcing (ZF). Instead of nulling the interference at users not requiring a specific coded message, general multicast beamforming strategies are employed, optimally balancing the detrimental impact of both noise and inter-stream interference from coded messages transmitted in parallel. The proposed scheme is shown to provide the same degrees-of-freedom at high SNR as the state-of-art methods and, in general, to perform significantly better than several base-line schemes including, the joint ZF and CC, max-min fair multicasting with CC, and basic unicasting with multiuser beamforming.", "A single cell downlink scenario is considered where a multiple-antenna base station delivers contents to cache-enabled user terminals. Using the ideas from multi-server coded caching (CC) scheme developed for wired networks, a joint design of CC and general multicast beamforming is considered to benefit from spatial multiplexing gain, improved interference management and the global CC gain, simultaneously. The proposed multicast beamforming strategies utilize the multiantenna multicasting opportunities provided by the CC technique and optimally balance the detrimental impact of both noise and inter-stream interference from coded messages transmitted in parallel. The proposed scheme is shown to provide the same degrees-of-freedom at high SNR as the state-of-art methods and, in general, to perform significantly better than several baseline schemes including, the joint zero forcing and CC, max-min fair multicasting with CC, and basic unicasting with multiuser beamforming,", "We study downlink beamforming in a single-cell network with a multi-antenna base station (BS) serving cache-enabled users. For a given common rate of the files in the system, we first formulate the minimum transmit power with beamforming at the BS as a non-convex optimization problem. This corresponds to a multiple multicast problem, to which a stationary solution can be efficiently obtained through successive convex approximation (SCA). It is observed that the complexity of the problem grows exponentially with the number of subfiles delivered to each user in each time slot, which itself grows exponentially with the number of users in the system. Therefore, we introduce a low-complexity alternative through time-sharing that limits the number of subfiles that can be received by a user in each time slot. 
It is shown through numerical simulations that, the reduced-complexity beamforming scheme has minimal performance gap compared to transmitting all the subfiles jointly, and outperforms the state-of-the-art low-complexity scheme at all SNR and rate values with sufficient spatial degrees of freedom, and in the high SNR high rate regime when the number of spatial degrees of freedom is limited.", "We investigate the potentials of applying the coded caching paradigm in wireless networks. In order to do this, we investigate physical layer schemes for downlink transmission from a multiantenna transmitter to several cache-enabled users. As the baseline scheme, we consider employing coded caching on the top of max–min fair multicasting, which is shown to be far from optimal at high-SNR values. Our first proposed scheme, which is near-optimal in terms of DoF, is the natural extension of multiserver coded caching to Gaussian channels. As we demonstrate, its finite SNR performance is not satisfactory, and thus we propose a new scheme in which the linear combination of messages is implemented in the finite field domain, and the one-shot precoding for the MISO downlink is implemented in the complex field. While this modification results in the same near-optimal DoF performance, we show that this leads to significant performance improvement at finite SNR. Finally, we extend our scheme to the previously considered cache-enabled interference channels, and moreover we provide an ergodic rate analysis of our scheme. Our results convey the important message that although directly translating schemes from the network coding ideas to wireless networks may work well at high-SNR values, careful modifications need to be considered for acceptable finite SNR performance.", "Coded caching can be applied in wireless multi-antenna communications by multicast beamforming coded data chunks to carefully selected user groups and using the existing file fragments in user caches to decode the desired files at each user. However, the number of packets a file should be split into, known as subpacketization, grows exponentially with the network size. We provide a new scheme, which enables the level of subpacketization to be selected freely among a set of predefined values depending on basic network parameters such as antenna and user count. A simple efficiency index is also proposed as a performance indicator at various subpacketization levels. The numerical examples demonstrate that larger subpacketization generally results in better efficiency index and higher symmetric rate, while smaller subpacketization incurs significant loss in the achievable rate. This enables more efficient caching schemes, tailored to the available computational and power resources." ] }
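The finite-SNR discussion above rests on the fact that a multicast stream is decodable only at the rate of its weakest intended receiver, so the beamformer should balance the users rather than serve only the strongest one. The toy example below evaluates this max-min multicast rate for two simple candidate beamformers over randomly drawn channels; the Gaussian channel model, unit noise power, and crude random search are assumptions made purely for illustration and are unrelated to the successive-convex-approximation designs of the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

K, n_tx, power = 4, 4, 10.0          # users, transmit antennas, power budget
H = (rng.standard_normal((K, n_tx)) + 1j * rng.standard_normal((K, n_tx))) / np.sqrt(2)

def multicast_rate(w):
    """Rate of a single multicast stream: limited by the weakest user."""
    w = np.sqrt(power) * w / np.linalg.norm(w)   # enforce the power budget
    snr = np.abs(H @ w) ** 2                     # unit noise power assumed
    return np.log2(1.0 + snr).min()

# Candidate 1: beamform toward the strongest user's channel only.
w_strongest = H[np.argmax(np.linalg.norm(H, axis=1))].conj()

# Candidate 2: the best of many random beamformers (a crude max-min search).
candidates = rng.standard_normal((2000, n_tx)) + 1j * rng.standard_normal((2000, n_tx))
w_random_best = max(candidates, key=multicast_rate)

print(f"matched to strongest user : {multicast_rate(w_strongest):.3f} bits/channel use")
print(f"best random (max-min-ish) : {multicast_rate(w_random_best):.3f} bits/channel use")
```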
1908.04036
2968366816
This work identifies the fundamental limits of cache-aided coded multicasting in the presence of the well-known 'worst-user' bottleneck. This stems from the presence of receiving users with uneven channel capacities, which often forces the rate of transmission of each multicasting message to be reduced to that of the slowest user. This bottleneck, which can be detrimental in general wireless broadcast settings, motivates the analysis of coded caching over a standard Single-Input-Single-Output (SISO) Broadcast Channel (BC) with K cache-aided receivers, each with a generally different channel capacity. For this setting, we design a communication algorithm that is based on superposition coding that capitalizes on the realization that the user with the worst channel may not be the real bottleneck of communication. We then proceed to provide a converse that shows the algorithm to be near optimal, identifying the fundamental limits of this setting within a multiplicative factor of 4. Interestingly, the result reveals that, even if several users are experiencing channels with reduced capacity, the system can achieve the same optimal delivery time that would be achievable if all users enjoyed maximal capacity.
Our work is in the spirit of all the above papers, and it can be seen specifically as an extension of @cite_20 . This reference considered a specific binary topological case, for which it proposed a two-level superposition-based transmission scheme to alleviate the worst-user bottleneck.
{ "cite_N": [ "@cite_20" ], "mid": [ "2963745869" ], "abstract": [ "We explore the performance of coded caching in a SISO BC setting where some users have higher link capacities than others. Focusing on a binary and fixed topological model where strong links have a fixed normalized capacity 1, and where weak links have reduced normalized capacity T < 1, we identify — as a function of the cache size and T — the optimal throughput performance, within a factor of at most 8. The transmission scheme that achieves this performance, employs a simple form of interference enhancement, and exploits the property that weak links attenuate interference, thus allowing for multicasting rates to remain high even when involving weak users. This approach ameliorates the negative effects of uneven topology in multicasting, now allowing all users to achieve the optimal performance associated to T = 1, even if τ is approximately as low as T ≥ 1 − (1 − w)g where g is the coded-caching gain, and where w is the fraction of users that are weak. This leads to the interesting conclusion that for coded multicasting, the weak users need not bring down the performance of all users, but on the contrary to a certain extent, the strong users can lift the performance of the weak users without any penalties on their own performance. Furthermore for smaller ranges of τ, we also see that achieving the near-optimal performance comes with the advantage that the strong users do not suffer any additional delays compared to the case where T = 1." ] }
1908.03864
2968252280
We consider an information theoretic approach to address the problem of identifying fake digital images. We propose an innovative method to formulate the issue of localizing manipulated regions in an image as a deep representation learning problem using the Information Bottleneck (IB), which has recently gained popularity as a framework for interpreting deep neural networks. Tampered images pose a serious predicament since digitized media is a ubiquitous part of our lives. These are facilitated by the easy availability of image editing software and aggravated by recent advances in deep generative models such as GANs. We propose InfoPrint, a computationally efficient solution to the IB formulation using approximate variational inference and compare it to a numerical solution that is computationally expensive. Testing on a number of standard datasets, we demonstrate that InfoPrint outperforms the state-of-the-art and the numerical solution. Additionally, it also has the ability to detect alterations made by inpainting GANs.
Information theory is a powerful framework that is being increasingly adopted to improve various aspects of deep machine learning, e.g., representation learning @cite_36 , generalizability and regularization @cite_38 , and the interpretation of how deep neural networks function @cite_27 @cite_34 . Mutual information plays a key role in many of these methods. InfoGAN @cite_13 showed that maximizing the mutual information between the latent code and the generator's output improved the representations learned by a generative adversarial network (GAN) @cite_24 , allowing them to be more disentangled and interpretable. Since mutual information is hard to compute, InfoGAN maximized a variational lower bound @cite_0 . A similar information-maximization idea was explored in @cite_36 to improve unsupervised representation learning, using the numerical estimator proposed in @cite_31 .
{ "cite_N": [ "@cite_38", "@cite_36", "@cite_24", "@cite_0", "@cite_27", "@cite_31", "@cite_34", "@cite_13" ], "mid": [ "2964209830", "2887997457", "2099471712", "115285041", "2593634001", "2803832867", "2785885194", "2963226019" ], "abstract": [ "", "", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "The maximisation of information transmission over noisy channels is a common, albeit generally computationally difficult problem. We approach the difficulty of computing the mutual information for noisy channels by using a variational approximation. The resulting IM algorithm is analagous to the EM algorithm, yet maximises mutual information, as opposed to likelihood. We apply the method to several practical examples, including linear compression, population encoding and CDMA.", "Despite their great success, there is still no comprehensive theoretical understanding of learning with Deep Neural Networks (DNNs) or their inner organization. Previous work proposed to analyze DNNs in the ; i.e., the plane of the Mutual Information values that each layer preserves on the input and output variables. They suggested that the goal of the network is to optimize the Information Bottleneck (IB) tradeoff between compression and prediction, successively, for each layer. In this work we follow up on this idea and demonstrate the effectiveness of the Information-Plane visualization of DNNs. Our main results are: (i) most of the training epochs in standard DL are spent on ph compression of the input to efficient representation and not on fitting the training labels. (ii) The representation compression phase begins when the training errors becomes small and the Stochastic Gradient Decent (SGD) epochs change from a fast drift to smaller training error into a stochastic relaxation, or random diffusion, constrained by the training error value. (iii) The converged layers lie on or very close to the Information Bottleneck (IB) theoretical bound, and the maps from the input to any hidden layer and from this hidden layer to the output satisfy the IB self-consistent equations. This generalization through noise mechanism is unique to Deep Neural Networks and absent in one layer networks. (iv) The training time is dramatically reduced when adding more hidden layers. Thus the main advantage of the hidden layers is computational. 
This can be explained by the reduced relaxation time, as this it scales super-linearly (exponentially for simple diffusion) with the information compression from the previous layer.", "", "The practical successes of deep neural networks have not been matched by theoretical progress that satisfyingly explains their behavior. In this work, we study the information bottleneck (IB) theory of deep learning, which makes three specific claims: first, that deep networks undergo two distinct phases consisting of an initial fitting phase and a subsequent compression phase; second, that the compression phase is causally related to the excellent generalization performance of deep networks; and third, that the compression phase occurs due to the diffusion-like behavior of stochastic gradient descent. Here we show that none of these claims hold true in the general case. Through a combination of analytical results and simulation, we demonstrate that the information plane trajectory is predominantly a function of the neural nonlinearity employed: double-sided saturating nonlinearities like tanh yield a compression phase as neural activations enter the saturation regime, but linear activation functions and single-sided saturating nonlinearities like the widely used ReLU in fact do not. Moreover, we find that there is no evident causal connection between compression and generalization: networks that do not compress are still capable of generalization, and vice versa. Next, we show that the compression phase, when it exists, does not arise from stochasticity in training by demonstrating that we can replicate the IB findings using full batch gradient descent rather than stochastic gradient descent. Finally, we show that when an input domain consists of a subset of task-relevant and task-irrelevant information, hidden representations do compress the task-irrelevant information, although the overall information about the input may monotonically increase with training time, and that this compression happens concurrently with the fitting process rather than during a subsequent compression period.", "This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound of the mutual information objective that can be optimized efficiently. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing supervised methods. For an up-to-date version of this paper, please see https: arxiv.org abs 1606.03657." ] }
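Since the paragraph above leans on the variational lower bound maximized by InfoGAN, the fragment below shows that term in isolation: an auxiliary network q(c|x) predicts the latent code from the generated sample, and the bound on the mutual information (up to the constant entropy H(c)) is the expected log-likelihood of the true code under q. The toy generator, the categorical code, and all layer sizes are placeholder assumptions; this is neither the InfoPrint model nor a full InfoGAN training loop.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

code_dim, noise_dim, data_dim = 4, 16, 32   # illustrative sizes only

# Toy generator G(z, c) and auxiliary posterior q(c | x).
generator = nn.Sequential(nn.Linear(noise_dim + code_dim, 64), nn.ReLU(),
                          nn.Linear(64, data_dim))
q_net = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(),
                      nn.Linear(64, code_dim))

def mi_lower_bound(batch_size: int = 128) -> torch.Tensor:
    """Variational lower bound on I(c; G(z, c)), up to the constant H(c).

    c is a one-hot categorical code sampled uniformly; q_net outputs logits,
    so the bound reduces to minus the cross-entropy of recovering c from x.
    """
    c_idx = torch.randint(code_dim, (batch_size,))
    c = F.one_hot(c_idx, code_dim).float()
    z = torch.randn(batch_size, noise_dim)
    x = generator(torch.cat([z, c], dim=1))
    logits = q_net(x)
    return -F.cross_entropy(logits, c_idx)   # estimate of E[log q(c | x)]

# Maximizing this bound (jointly with the adversarial loss, omitted here)
# encourages the generator not to discard the latent code c.
loss = -mi_lower_bound()
loss.backward()
```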
1908.03978
2968137284
An accurate pedestrian counting algorithm is critical for eliminating insecurity in congested public scenes. However, counting pedestrians in crowded scenes often suffers from severe perspective distortion. In this paper, building on the straight-line double-region pedestrian counting method, we propose a dynamic region division algorithm to preserve the completeness of the counted objects. Utilizing the object bounding boxes obtained by YoloV3 and the expected division line of the scene, the boundary between the nearby region and the distant one is generated under the premise of retaining whole heads. Furthermore, appropriate learning models are applied to count pedestrians in each obtained region. In the distant region, a novel inception dilated convolutional neural network is proposed to solve the problem of choosing the dilation rate. In the nearby region, YoloV3 is used to detect pedestrians at multiple scales. Accordingly, the total number of pedestrians in each frame is obtained by fusing the results of the nearby and distant regions. A typical subway pedestrian video dataset is chosen for the experiments in this paper. The results demonstrate that the proposed algorithm is superior to existing machine-learning-based methods in overall performance.
Traditional methods used the histogram of oriented gradients (HOG) as pedestrian-level features and a support vector machine as the classifier to detect pedestrians in specific scenes @cite_10 , but these hand-crafted features suffered severely from variance in lighting and scale. Region-based convolutional neural networks (R-CNNs) @cite_17 used features extracted by a CNN and improved detection performance. This method can be summarized as two-stage processing, proposal followed by classification, but it is hard to accelerate. YOLO @cite_3 provided a new one-stage solution for detection and significantly improved the speed. It recast classification as regression over sub-grids and abandoned the proposal step. Following YOLO, some methods focused on supporting multi-scale object detection, such as SSD @cite_7 and YOLOV3 @cite_6 . Although detection methods have achieved tremendous performance and can be used in sparse crowd scenes, it is hard for them to substitute for density-based methods in crowded scenes.
{ "cite_N": [ "@cite_7", "@cite_6", "@cite_3", "@cite_10", "@cite_17" ], "mid": [ "2193145675", "2796347433", "2963037989", "2161969291", "2102605133" ], "abstract": [ "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.", "We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at this https URL", "We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. 
It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn." ] }
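The record above counts pedestrians by detection in the nearby region and by density regression in the distant one, then fuses the two per frame. The following minimal Python sketch illustrates only that fusion step; the function name, the box-to-region rule based on the top edge of the bounding box, and all data are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def fuse_counts(boxes, density_map, y_boundary):
    """boxes: (x1, y1, x2, y2) detections for the nearby region;
    density_map: per-pixel density predicted for the frame;
    y_boundary: row separating the distant region (above) from the nearby one (below)."""
    # Detection-based count: one pedestrian per box whose top edge (roughly the
    # head) falls inside the nearby region.
    nearby = sum(1 for (_, y1, _, _) in boxes if y1 >= y_boundary)
    # Regression-based count: integrate the density map over the distant region.
    distant = float(density_map[:y_boundary, :].sum())
    return nearby + distant

# Dummy data just to show the call.
boxes = [(10, 130, 40, 200), (60, 90, 80, 140)]
density = np.zeros((240, 320))
density[:120, :] = 1e-4
print(fuse_counts(boxes, density, y_boundary=120))
```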
1908.03803
2966887519
During the last decade, the number of devices connected to the Internet by Wi-Fi has grown significantly. A high density of both the client devices and the hot spots posed new challenges related to providing the desired quality of service in the current and emerging scenarios. To cope with the negative effects caused by network densification, modern Wi-Fi is becoming more and more centralized. To improve network efficiency, today many new Wi-Fi deployments are under control of management systems that optimize network parameters in a centralized manner. In the paper, for such a cloud management system, we develop an algorithm which aims at maximizing energy efficiency and also keeps fairness among clients. For that, we design an objective function and solve an optimization problem using the branch and bound approach. To evaluate the efficiency of the developed solution, we implement it in the NS-3 simulator and compare with existing solutions and legacy behavior.
In the modern world, the high density of wireless networks and the strong interference between them make centralized coordination of the networks more and more popular. It allows optimizing network performance and thus increasing total efficiency. While today's wireless networks are mainly optimized to provide high throughput, the growing OPEX of network operators, including the payments for energy consumption, may shift the paradigm in the near future. Because of the very high number of base stations and access points, energy consumption becomes an essential issue for wireless networks. To improve energy efficiency, various approaches can be used, including energy harvesting, improving hardware, network planning, and resource allocation @cite_5 .
{ "cite_N": [ "@cite_5" ], "mid": [ "2327455150" ], "abstract": [ "After about a decade of intense research, spurred by both economic and operational considerations, and by environmental concerns, energy efficiency has now become a key pillar in the design of communication networks. With the advent of the fifth generation of wireless networks, with millions more base stations and billions of connected devices, the need for energy-efficient system design and operation will be even more compelling. This survey provides an overview of energy-efficient wireless communications, reviews seminal and recent contribution to the state-of-the-art, including the papers published in this special issue, and discusses the most relevant research challenges to be addressed in the future." ] }
1908.03803
2966887519
During the last decade, the number of devices connected to the Internet by Wi-Fi has grown significantly. A high density of both the client devices and the hot spots posed new challenges related to providing the desired quality of service in the current and emerging scenarios. To cope with the negative effects caused by network densification, modern Wi-Fi is becoming more and more centralized. To improve network efficiency, today many new Wi-Fi deployments are under control of management systems that optimize network parameters in a centralized manner. In the paper, for such a cloud management system, we develop an algorithm which aims at maximizing energy efficiency and also keeps fairness among clients. For that, we design an objective function and solve an optimization problem using the branch and bound approach. To evaluate the efficiency of the developed solution, we implement it in the NS-3 simulator and compare with existing solutions and legacy behavior.
In @cite_11 , energy efficiency is defined as the amount of data delivered through a link divided by the consumed energy. The authors of this paper consider a terminal with limited energy and compare the energy efficiency of various Automatic Repeat reQuest (ARQ) protocols.
{ "cite_N": [ "@cite_11" ], "mid": [ "2143915643" ], "abstract": [ "When terminals powered by a finite battery source are used for wireless communications, energy constraints are likely to influence the choice of error control protocols. Therefore, we propose the average number of correctly transmitted packets during the lifetime of the battery as a new metric. In particular, we study the go-back-N retransmission protocol operating over a wireless channel using a finite energy source with a flat power profile. We characterize the sensitivity of the total number of correctly transmitted packets to the choice of the output power level. We then generalize our results to arbitrary power profiles through both a recursive technique and Markov analysis. Finally, we compare the performance of go-back-N with an adaptive error control protocol that slows down the transmission rate when the channel is impaired, and document the advantages." ] }
1908.03803
2966887519
During the last decade, the number of devices connected to the Internet by Wi-Fi has grown significantly. A high density of both the client devices and the hot spots posed new challenges related to providing the desired quality of service in the current and emerging scenarios. To cope with the negative effects caused by network densification, modern Wi-Fi is becoming more and more centralized. To improve network efficiency, today many new Wi-Fi deployments are under control of management systems that optimize network parameters in a centralized manner. In the paper, for such a cloud management system, we develop an algorithm which aims at maximizing energy efficiency and also keeps fairness among clients. For that, we design an objective function and solve an optimization problem using the branch and bound approach. To evaluate the efficiency of the developed solution, we implement it in the NS-3 simulator and compare with existing solutions and legacy behavior.
When optimizing energy efficiency, it is essential to add circuit power consumption @math to transmit power. Without taking this component into account, the maximum energy efficiency corresponds to the lowest transmission rate @cite_1 .
{ "cite_N": [ "@cite_1" ], "mid": [ "2154319347" ], "abstract": [ "With explosive growth of high-data-rate applications, more and more energy is consumed in wireless networks to guarantee quality of service. Therefore, energy-efficient communications have been paid increasing attention under the background of limited energy resource and environmental- friendly transmission behaviors. In this article, basic concepts of energy-efficient communications are first introduced and then existing fundamental works and advanced techniques for energy efficiency are summarized, including information-theoretic analysis, OFDMA networks, MIMO techniques, relay transmission, and resource allocation for signaling. Some valuable topics in energy-efficient design are also identified for future research." ] }
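A small numerical illustration of the claim in the record above: if circuit power is ignored, the energy-efficiency ratio is maximized at the lowest transmit power (and hence the lowest rate), whereas adding a circuit-power term moves the optimum to an interior point. The Shannon-type rate model, the gain-to-noise ratio, and the 0.5 W circuit power below are assumptions made for this sketch only.

```python
import numpy as np

G_OVER_N0 = 5.0                      # assumed channel gain-to-noise ratio
P_CIRCUIT = 0.5                      # assumed circuit power, W

p = np.linspace(1e-6, 10.0, 100_000)          # transmit power grid, W
rate = np.log2(1.0 + G_OVER_N0 * p)           # assumed Shannon-type rate, bit/s/Hz

ee_without_circuit = rate / p                 # ignores circuit power
ee_with_circuit = rate / (p + P_CIRCUIT)      # includes circuit power

# Without the circuit term the maximizer sits at the smallest (lowest-rate) power;
# with it, the optimum moves to an interior transmit power.
print("argmax EE, no circuit power  :", p[np.argmax(ee_without_circuit)])
print("argmax EE, with circuit power:", p[np.argmax(ee_with_circuit)])
```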
1908.03803
2966887519
During the last decade, the number of devices connected to the Internet by Wi-Fi has grown significantly. A high density of both the client devices and the hot spots posed new challenges related to providing the desired quality of service in the current and emerging scenarios. To cope with the negative effects caused by network densification, modern Wi-Fi is becoming more and more centralized. To improve network efficiency, today many new Wi-Fi deployments are under control of management systems that optimize network parameters in a centralized manner. In the paper, for such a cloud management system, we develop an algorithm which aims at maximizing energy efficiency and also keeps fairness among clients. For that, we design an objective function and solve an optimization problem using the branch and bound approach. To evaluate the efficiency of the developed solution, we implement it in the NS-3 simulator and compare with existing solutions and legacy behavior.
The papers mentioned above consider only a single wireless link. The definition of energy efficiency has to be extended for systems with multiple transmitters and receivers. In @cite_10 , it is done in the following way: where @math is the overall utility function, @math is the link utility function of link @math , @math is the number of links, @math is the rate of link @math , @math is the average transmit power at link @math . The major disadvantage of this approach is that the utility function represents the sum of energy efficiencies of individual links, while the network operator is interested in the total network energy consumption and energy efficiency, which is different.
{ "cite_N": [ "@cite_10" ], "mid": [ "2120292867" ], "abstract": [ "While the demand for battery capacity on mobile devices has grown with the increase in high-bandwidth multi-media rich applications, battery technology has not kept up with this demand. Therefore power optimization techniques are becoming increasingly important in wireless system design. Power optimization schemes are also important for interference management in wireless systems as interference resulting from aggressive spectral reuse and high power transmission severely limits system performance. Although power optimization plays a pivotal role in both interference management and energy utilization, little research addresses their joint interaction. In this paper, we develop energy-efficient power optimization schemes for interference-limited communications. Both circuit and transmit powers are considered and energy efficiency is emphasized over throughput. We note that the general power optimization problem in the presence of interference is intractable even when ideal user cooperation is assumed. We first study this problem for a simple two-user network with ideal user cooperation and then develop a practical non-cooperative power optimization scheme. Simulation results show that the proposed scheme improves not only energy efficiency but also spectral efficiency in an interference-limited cellular network." ] }
1908.03803
2966887519
During the last decade, the number of devices connected to the Internet by Wi-Fi has grown significantly. A high density of both the client devices and the hot spots posed new challenges related to providing the desired quality of service in the current and emerging scenarios. To cope with the negative effects caused by network densification, modern Wi-Fi is becoming more and more centralized. To improve network efficiency, today many new Wi-Fi deployments are under control of management systems that optimize network parameters in a centralized manner. In the paper, for such a cloud management system, we develop an algorithm which aims at maximizing energy efficiency and also keeps fairness among clients. For that, we design an objective function and solve an optimization problem using the branch and bound approach. To evaluate the efficiency of the developed solution, we implement it in the NS-3 simulator and compare with existing solutions and legacy behavior.
In @cite_3 , the authors consider some other utility functions. In addition to the sum of energy efficiencies, an example of which is described above, they consider the product of energy efficiencies and the so-called Global Energy Efficiency (GEE). Global energy efficiency is defined as the sum of rates divided by the total power consumption of all devices. Fast algorithms are proposed to solve the Sum-EE and Prod-EE maximization problems. For the GEE maximization problem, the optimal solution is only found when interference is negligible compared to the constant background noise.
{ "cite_N": [ "@cite_3" ], "mid": [ "2060821751" ], "abstract": [ "This paper addresses the problem of energy-efficient resource allocation in the downlink of a cellular orthogonal frequency division multiple access system. Three definitions of energy efficiency are considered for system design, accounting for both the radiated and the circuit power. User scheduling and power allocation are optimized across a cluster of coordinated base stations with a constraint on the maximum transmit power (either per subcarrier or per base station). The asymptotic noise-limited regime is discussed as a special case. Results show that the maximization of the energy efficiency is approximately equivalent to the maximization of the spectral efficiency for small values of the maximum transmit power, while there is a wide range of values of the maximum transmit power for which a moderate reduction of the data rate provides large savings in terms of dissipated energy. In addition, the performance gap among the considered resource allocation strategies is reduced as the out-of-cluster interference increases." ] }
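The three utility functions discussed in the record above can be written out directly from their verbal definitions: the sum and the product of per-link energy efficiencies, and the global energy efficiency as total rate over total consumed power. The toy per-link values in the sketch below are invented for illustration.

```python
import numpy as np

# Invented per-link rates (bit/s) and total per-link power consumptions (W).
rates = np.array([10e6, 4e6, 1e6])
powers = np.array([1.0, 0.6, 0.3])

sum_ee = np.sum(rates / powers)        # sum of per-link energy efficiencies
prod_ee = np.prod(rates / powers)      # product of per-link energy efficiencies
gee = rates.sum() / powers.sum()       # global EE: total rate over total power

print(f"Sum-EE : {sum_ee:.3e} bit/J")
print(f"Prod-EE: {prod_ee:.3e}")
print(f"GEE    : {gee:.3e} bit/J")
```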
1908.03803
2966887519
During the last decade, the number of devices connected to the Internet by Wi-Fi has grown significantly. A high density of both the client devices and the hot spots posed new challenges related to providing the desired quality of service in the current and emerging scenarios. To cope with the negative effects caused by network densification, modern Wi-Fi is becoming more and more centralized. To improve network efficiency, today many new Wi-Fi deployments are under control of management systems that optimize network parameters in a centralized manner. In the paper, for such a cloud management system, we develop an algorithm which aims at maximizing energy efficiency and also keeps fairness among clients. For that, we design an objective function and solve an optimization problem using the branch and bound approach. To evaluate the efficiency of the developed solution, we implement it in the NS-3 simulator and compare with existing solutions and legacy behavior.
The GEE maximization problem can be solved with existing mathematical methods based on the so-called polyblock algorithm @cite_7 . However, this approach is known to converge very slowly when one or more variables are close to zero. While modeling real deployments, we often observed such cases. That is why we use another approach, based on the branch-and-bound method, that avoids this slow convergence @cite_2 . Although it has been applied to solve the GEE problem in LTE networks @cite_6 , its applicability to Wi-Fi networks is not straightforward.
{ "cite_N": [ "@cite_6", "@cite_7", "@cite_2" ], "mid": [ "2589615107", "2211148590", "75157487" ], "abstract": [ "The characterization of the global maximum of energy efficiency (EE) problems in wireless networks is a challenging problem due to their nonconvex nature in interference channels. The aim of this paper is to develop a new and general framework to achieve globally optimal solutions. First, the hidden monotonic structure of the most common EE maximization problems is exploited jointly with fractional programming theory to obtain globally optimal solutions with exponential complexity in the number of network links. To overcome the high complexity, we also propose a framework to compute suboptimal power control strategies with affordable complexity. This is achieved by merging fractional programming and sequential optimization. The proposed monotonic framework is used to shed light on the ultimate performance of wireless networks in terms of EE and also to benchmark the performance of the lower-complexity framework based on sequential programming. Numerical evidence is provided to show that the sequential fractional programming framework achieves global optimality in several practical communication scenarios.", "This monograph presents a unified framework for energy efficiency maximization in wireless networks via fractional programming theory. The definition of energy efficiency is introduced, with reference to single-user and multi-user wireless networks, and it is observed how the problem of resource allocation for energy efficiency optimization is naturally cast as a fractional program. An extensive review of the state-of-the-art in energy efficiency optimization by fractional programming is provided, with reference to centralized and distributed resource allocation schemes. A solid background on fractional programming theory is provided. The key-notion of generalized concavity is presented and its strong connection with fractional functions described. A taxonomy of fractional problems is introduced, and for each class of fractional problem, general solution algorithms are described, discussing their complexity and convergence properties. The described theoretical and algorithmic framework is applied to solve energy efficiency maximization problems in practical wireless networks. A general system and signal model is developed which encompasses many relevant special cases, such as one-hop and two-hop heterogeneous networks, multi-cell networks, small-cell networks, device-to-device systems, cognitive radio systems, and hardware-impaired networks, wherein multiple-antennas and multiple subcarriers are possibly employed. Energy-efficient resource allocation algorithms are developed, considering both centralized, cooperative schemes, as well as distributed approaches for self-organizing networks. Finally, some remarks on future lines of research are given, stating some open problems that remain to be studied. It is shown how the described framework is general enough to be extended in these directions, proving useful in tackling future challenges that may arise in the design of energy-efficient future wireless networks.", "This book presents state-of-the-art results and methodologies in modern global optimization, and has been a staple reference for researchers, engineers, advanced students (also in applied mathematics), and practitioners in various fields of engineering. 
The second edition has been brought up to date and continues to develop a coherent and rigorous theory of deterministic global optimization, highlighting the essential role of convex analysis. The text has been revised and expanded to meet the needs of research, education, and applications for many years to come. Updates for this new edition include: Discussion of modern approaches to minimax, fixed point, and equilibrium theorems, and to nonconvex optimization; Increased focus on dealing more efficiently with ill-posed problems of global optimization, particularly those with hard constraints; Important discussions of decomposition methods for specially structured problems; A complete revision of the chapter on nonconvex quadratic programming, in order to encompass the advances made in quadratic optimization since publication of the first edition. Additionally, this new edition contains entirely new chapters devoted to monotonic optimization, polynomial optimization and optimization under equilibrium constraints, including bilevel programming, multiobjective programming, and optimization with variational inequality constraint. From the reviews of the first edition: The book gives a good review of the topic. The text is carefully constructed and well written, the exposition is clear. It leaves a remarkable impression of the concepts, tools and techniques in global optimization. It might also be used as a basis and guideline for lectures on this subject. Students as well as professionals will profitably read and use it. Mathematical Methods of Operations Research, 49:3 (1999)" ] }
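To make the branch-and-bound idea referenced above concrete, here is a hedged, self-contained sketch that maximizes a one-dimensional energy-efficiency-style ratio by interval splitting with a simple over-estimating bound. It is not the algorithm of the cited works, which address multi-link GEE problems; the objective, constants, and tolerance are assumptions chosen only to show the branch, bound, and prune steps.

```python
import heapq
import math

# Assumed one-link objective: f(p) = rate(p) / (p + P_C) on [0, P_MAX].
P_C, GAIN, P_MAX, EPS = 0.5, 5.0, 10.0, 1e-4

def f(p):
    return math.log2(1.0 + GAIN * p) / (p + P_C)

def upper_bound(a, b):
    # The rate term is increasing and the denominator is smallest at a, so this
    # over-estimates f everywhere on [a, b], which is all branch-and-bound needs.
    return math.log2(1.0 + GAIN * b) / (a + P_C)

def maximize():
    best_p, best_val = 0.0, f(0.0)
    heap = [(-upper_bound(0.0, P_MAX), 0.0, P_MAX)]   # max-heap via negated bounds
    while heap:
        neg_ub, a, b = heapq.heappop(heap)
        if -neg_ub <= best_val + EPS:
            continue                                   # prune: cannot beat incumbent
        mid = 0.5 * (a + b)
        if f(mid) > best_val:
            best_p, best_val = mid, f(mid)             # update incumbent
        for lo, hi in ((a, mid), (mid, b)):            # branch
            ub = upper_bound(lo, hi)
            if ub > best_val + EPS:
                heapq.heappush(heap, (-ub, lo, hi))
    return best_p, best_val

print(maximize())
```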
1908.03803
2966887519
During the last decade, the number of devices connected to the Internet by Wi-Fi has grown significantly. A high density of both the client devices and the hot spots posed new challenges related to providing the desired quality of service in the current and emerging scenarios. To cope with the negative effects caused by network densification, modern Wi-Fi is becoming more and more centralized. To improve network efficiency, today many new Wi-Fi deployments are under control of management systems that optimize network parameters in a centralized manner. In the paper, for such a cloud management system, we develop an algorithm which aims at maximizing energy efficiency and also keeps fairness among clients. For that, we design an objective function and solve an optimization problem using the branch and bound approach. To evaluate the efficiency of the developed solution, we implement it in the NS-3 simulator and compare with existing solutions and legacy behavior.
Wi-Fi networks impose additional restrictions on solutions of the described problem. Specifically, since Wi-Fi implements CSMA/CA, regulatory bodies put limits on the sensitivity threshold. An example of a solution to the GEE problem is given in @cite_0 , where an algorithm based on the branch-and-bound technique was proposed to dynamically allocate power in Wi-Fi networks. Even with a constant traffic load, such an algorithm dynamically varies the transmit power and thus obtains higher efficiency. In this paper, we generalize the GEE metric to take both power consumption and fairness into account and develop a global optimization algorithm for green Wi-Fi networks.
{ "cite_N": [ "@cite_0" ], "mid": [ "2886781652" ], "abstract": [ "Ubiquitous densification of wireless networks has brought up the issue of inter-and intra-cell interference. Interference significantly degrades network throughput and leads to unfair channel resource usage, especially in Wi-Fi networks, where even a low interfering signal from a hidden station may cause collisions or block channel access as it is based on carrier sensing. In the paper, we propose a joint power control and channel time scheduling algorithm for such networks, which significantly increases overall network throughput while maintaining fairness. The algorithm is based on branch-and-bound global optimization technique and guarantees that the solution is optimal with user-defined accuracy." ] }
1908.03687
2967382602
The sense of touch is essential for reliable mapping between the environment and a robot which interacts physically with objects. Presumably, an artificial tactile skin would facilitate safe interaction of the robots with the environment. In this work, we present our color-coded tactile sensor, incorporating plastic optical fibers (POF), transparent silicone rubber and an off-the-shelf color camera. Processing electronics are placed away from the sensing surface to make the sensor robust to harsh environments. Contact localization is possible thanks to the lower number of light sources compared to the number of camera POFs. Classical machine learning techniques and a hierarchical classification scheme were used for contact localization. Specifically, we generated the mapping from stimulation to sensation of a robotic perception system using our sensor. We achieved a force sensing range up to 18 N with the force resolution of around 3.6 N and the spatial resolution of 8 mm. The color-coded tactile sensor is suitable for tactile exploration and might enable further innovations in robust tactile sensing.
There are different types of materials used in manufacturing of optical sensors: various polymers including silicone, polyurethane, and thermoplastic elastomers; POFs and hydrogels. Liquid silicone rubber compounds (e.g. Smooth-on Sorta Clear 18 and Techsil RTV27905) are widely used in injection molding to create robust parts. The part quality mainly depends on how well the silicone compounds are mixed during molding. On the other hand, thermoplastic rubbers, as in @cite_21 , provide a better ability to return to their original shape after stretching them to moderate elongations. These can be processed by heating the granules of the thermoplastic elastomer, shaping them under pressure, and then cooling them to solidify. In contrast to silicone rubber and elastomers, polyurethanes can be synthesized by chemical reactions. Polyurethane parts are resistant to wear and tear.
{ "cite_N": [ "@cite_21" ], "mid": [ "2775635818" ], "abstract": [ "Tactile sensing is an important perception mode for robots, but the existing tactile technologies have multiple limitations. What kind of tactile information robots need, and how to use the information, remain open questions. We believe a soft sensor surface and high-resolution sensing of geometry should be important components of a competent tactile sensor. In this paper, we discuss the development of a vision-based optical tactile sensor, GelSight. Unlike the traditional tactile sensors which measure contact force, GelSight basically measures geometry, with very high spatial resolution. The sensor has a contact surface of soft elastomer, and it directly measures its deformation, both vertical and lateral, which corresponds to the exact object shape and the tension on the contact surface. The contact force, and slip can be inferred from the sensor’s deformation as well. Particularly, we focus on the hardware and software that support GelSight’s application on robot hands. This paper reviews the development of GelSight, with the emphasis in the sensing principle and sensor design. We introduce the design of the sensor’s optical system, the algorithm for shape, force and slip measurement, and the hardware designs and fabrication of different sensor versions. We also show the experimental evaluation on the GelSight’s performance on geometry and force measurement. With the high-resolution measurement of shape and contact force, the sensor has successfully assisted multiple robotic tasks, including material perception or recognition and in-hand localization for robot manipulation." ] }
1908.03687
2967382602
The sense of touch is essential for reliable mapping between the environment and a robot which interacts physically with objects. Presumably, an artificial tactile skin would facilitate safe interaction of the robots with the environment. In this work, we present our color-coded tactile sensor, incorporating plastic optical fibers (POF), transparent silicone rubber and an off-the-shelf color camera. Processing electronics are placed away from the sensing surface to make the sensor robust to harsh environments. Contact localization is possible thanks to the lower number of light sources compared to the number of camera POFs. Classical machine learning techniques and a hierarchical classification scheme were used for contact localization. Specifically, we generated the mapping from stimulation to sensation of a robotic perception system using our sensor. We achieved a force sensing range up to 18 N with the force resolution of around 3.6 N and the spatial resolution of 8 mm. The color-coded tactile sensor is suitable for tactile exploration and might enable further innovations in robust tactile sensing.
With the aim of making wearable and biocompatible parts, technological advances in bioengineering led to the emergence of hydrogels @cite_3 . A hydrogel, a rubbery and transparent material composed mostly of water, can also be a good choice for safe physical HRI.
{ "cite_N": [ "@cite_3" ], "mid": [ "2049609871" ], "abstract": [ "Hydrogels have found wide application in biosensors due to their versatile nature. This family of materials is applied in biosensing either to increase the loading capacity compared to two-dimensional surfaces, or to support biospecific hydrogel swelling occurring subsequent to specific recognition of an analyte. This review focuses on various principles underpinning the design of biospecific hydrogels acting through various molecular mechanisms in transducing the recognition event of label-free analytes. Towards this end, we describe several promising hydrogel systems that when combined with the appropriate readout platform and quantitative approach could lead to future real-life applications." ] }
1908.03687
2967382602
The sense of touch is essential for reliable mapping between the environment and a robot which interacts physically with objects. Presumably, an artificial tactile skin would facilitate safe interaction of the robots with the environment. In this work, we present our color-coded tactile sensor, incorporating plastic optical fibers (POF), transparent silicone rubber and an off-the-shelf color camera. Processing electronics are placed away from the sensing surface to make the sensor robust to harsh environments. Contact localization is possible thanks to the lower number of light sources compared to the number of camera POFs. Classical machine learning techniques and a hierarchical classification scheme were used for contact localization. Specifically, we generated the mapping from stimulation to sensation of a robotic perception system using our sensor. We achieved a force sensing range up to 18 N with the force resolution of around 3.6 N and the spatial resolution of 8 mm. The color-coded tactile sensor is suitable for tactile exploration and might enable further innovations in robust tactile sensing.
Using the materials described in Sec , a variety of optical tactile sensors were presented in the literature. The general principle is based on the optical reflection between mediums with different refractive indices. A conventional optical tactile sensor consists of an array of infrared light-emitting diodes (LEDs) and photodetectors. The intensity of the light is usually proportional to the magnitude of the pressure @cite_1 .
{ "cite_N": [ "@cite_1" ], "mid": [ "2775693038" ], "abstract": [ "The aim of this paper is to report the design of a low-cost plastic optical fiber (POF) pressure sensor, embedded in a mattress. We report the design of a multipoint sensor, a cheap alternative to the most common fiber sensors. The sensor is implemented using Arduino board, standard LEDs for optical communication in POF (λ = 645 nm) and a silicon light sensor. The Super ESKA® plastic fibers were used to implement the fiber intensity sensor, arranged in a 4 × 4 matrix. During the breathing cycles, the force transmitted from the lungs to the thorax is in the order of tens of Newtons, and the respiration rate is of one breath every 2–5 s (0.2–0.5 Hz). The sensor has a resolution of force applied on a single point of 2.2–4.5 N on the normalized voltage output, and a bandwidth of 10 Hz, it is then suitable to monitor the respiration movements. Another issue to be addressed is the presence of hysteresis over load cycles. The sensor was loaded cyclically to estimate the drift of the system, and the hysteresis was found to be negligible." ] }
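The proportionality between light intensity and pressure mentioned in the record above suggests a simple calibration: fit a straight line to a few (force, intensity) pairs and invert it for new readings. The numbers below are invented, not measurements from any cited sensor.

```python
import numpy as np

# Invented calibration pairs: applied force (N) vs. normalised light intensity.
force = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
intensity = np.array([0.02, 0.11, 0.19, 0.31, 0.40])

slope, offset = np.polyfit(force, intensity, 1)    # linear model: I = slope*F + offset

new_reading = 0.25                                  # hypothetical new intensity sample
estimated_force = (new_reading - offset) / slope    # invert the calibration
print(f"estimated force: {estimated_force:.2f} N")
```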
1908.03687
2967382602
The sense of touch is essential for reliable mapping between the environment and a robot which interacts physically with objects. Presumably, an artificial tactile skin would facilitate safe interaction of the robots with the environment. In this work, we present our color-coded tactile sensor, incorporating plastic optical fibers (POF), transparent silicone rubber and an off-the-shelf color camera. Processing electronics are placed away from the sensing surface to make the sensor robust to harsh environments. Contact localization is possible thanks to the lower number of light sources compared to the number of camera POFs. Classical machine learning techniques and a hierarchical classification scheme were used for contact localization. Specifically, we generated the mapping from stimulation to sensation of a robotic perception system using our sensor. We achieved a force sensing range up to 18 N with the force resolution of around 3.6 N and the spatial resolution of 8 mm. The color-coded tactile sensor is suitable for tactile exploration and might enable further innovations in robust tactile sensing.
The GelSight tactile sensor @cite_21 uses a thermoplastic elastomer coated with a reflective membrane, highlighted by an LED ring, to capture surface textures with a camera. In @cite_4 , this sensor was benchmarked on a texture recognition problem. Similarly, researchers of the Bristol Robotics Laboratory developed a family of optical tactile sensors that are almost ready for small-scale mass production @cite_9 . Their TacTip sensor uses a commodity image tracker originally used in optical computer mice. It combines an image acquisition system and a digital signal processor capable of processing the images at 2000 Hz @cite_19 . Thanks to the high image processing rate, they can detect the slippage of a grasped object @cite_18 . In @cite_26 , a touch sensor consisting of 41 silicone rubber markers, a light source, and a camera estimates the tangential and normal forces by tracking these markers. Markers with different colors are used in the GelForce sensor @cite_24 .
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_9", "@cite_21", "@cite_24", "@cite_19" ], "mid": [ "2837391606", "2067689171", "2962983231", "2781493652", "2775635818", "2125959037", "2793447234" ], "abstract": [ "Slip detection helps to prevent robotic hands from dropping grasped objects and would thus enable complex object manipulation. Here we present a method of detecting slip with a biomimetic optical tactile sensor-the TacTip-that operates by measuring the positions of internal pins embedded in its compliant skin. We investigate whether local pin movement is a strong signal of slip. Accurate and robust discrimination between static and slipping objects is obtained with a support vector machine (accuracy 99.88 ). We then demonstrate performance on a task in which a slipping object must be caught. For fast reaction times, a modified TacTip is made for high-speed data collection. Performance of the slip detection method is then validated under several test conditions, including varying the speed at which slip onset occurs and using novel shaped objects. The proposed methods should apply to tactile sensors that can detect the local velocities of surface movement. The sensor and slip detection methods are also well-suited for integration onto robotic hands for deploying slip control under manipulation.", "To evaluate our three-axis tactile sensor developed in preceding papers, a tactile sensor is mounted on a robotic finger with 3-degrees of freedom. We develop a dual computer system that possesses two computers to enhance processing speed: one is for tactile information processing and the other controls the robotic finger; these computers are connected to a local area network. Three kinds of experiments are performed to evaluate the robotic finger's basic abilities required for dexterous hands. First, the robotic hand touches and scans flat specimens to evaluate their surface condition. Second, it detects objects with parallelepiped and cylindrical contours. Finally, it manipulates a parallelepiped object put on a table by sliding it. Since the present robotic hand performed the above three tasks, we conclude that it is applicable to the dexterous hand in subsequent studies.", "Vision and touch are two of the important sensing modalities for humans and they offer complementary information for sensing the environment. Robots could also benefit from such multi-modal sensing ability. In this paper, addressing for the first time (to the best of our knowledge) texture recognition from tactile images and vision, we propose a new fusion method named Deep Maximum Covariance Analysis (DMCA) to learn a joint latent space for sharing features through vision and tactile sensing. The features of camera images and tactile data acquired from a GelSight sensor are learned by deep neural networks. But the learned features are of a high dimensionality and are redundant due to the differences between the two sensing modalities, which deteriorates the perception performance. To address this, the learned features are paired using maximum covariance analysis. Results of the algorithm on a newly collected dataset of paired visual and tactile data relating to cloth textures show that a good recognition performance of greater than 90 can be achieved by using the proposed DMCA framework. 
In addition, we find that the perception performance of either vision or tactile sensing can be improved by employing the shared representation space, compared to learning from unimodal data.", "Abstract Tactile sensing is an essential component in human–robot interaction and object manipulation. Soft sensors allow for safe interaction and improved gripping performance. Here we present the TacTip family of sensors: a range of soft optical tactile sensors with various morphologies fabricated through dual-material 3D printing. All of these sensors are inspired by the same biomimetic design principle: transducing deformation of the sensing surface via movement of pins analogous to the function of intermediate ridges within the human fingertip. The performance of the TacTip, TacTip-GR2, TacTip-M2, and TacCylinder sensors is here evaluated and shown to attain submillimeter accuracy on a rolling cylinder task, representing greater than 10-fold super-resolved acuity. A version of the TacTip sensor has also been open-sourced, enabling other laboratories to adopt it as a platform for tactile sensing and manipulation research. These sensors are suitable for real-world applications in tactile perception, ex...", "Tactile sensing is an important perception mode for robots, but the existing tactile technologies have multiple limitations. What kind of tactile information robots need, and how to use the information, remain open questions. We believe a soft sensor surface and high-resolution sensing of geometry should be important components of a competent tactile sensor. In this paper, we discuss the development of a vision-based optical tactile sensor, GelSight. Unlike the traditional tactile sensors which measure contact force, GelSight basically measures geometry, with very high spatial resolution. The sensor has a contact surface of soft elastomer, and it directly measures its deformation, both vertical and lateral, which corresponds to the exact object shape and the tension on the contact surface. The contact force, and slip can be inferred from the sensor’s deformation as well. Particularly, we focus on the hardware and software that support GelSight’s application on robot hands. This paper reviews the development of GelSight, with the emphasis in the sensing principle and sensor design. We introduce the design of the sensor’s optical system, the algorithm for shape, force and slip measurement, and the hardware designs and fabrication of different sensor versions. We also show the experimental evaluation on the GelSight’s performance on geometry and force measurement. With the high-resolution measurement of shape and contact force, the sensor has successfully assisted multiple robotic tasks, including material perception or recognition and in-hand localization for robot manipulation.", "It is believed that the use of haptic sensors to measure the magnitude, direction, and distribution of a force will enable a robotic hand to perform dexterous operations. Therefore, we develop a new type of finger-shaped haptic sensor using GelForce technology. GelForce is a vision-based sensor that can be used to measure the distribution of force vectors, or surface traction fields. The simple structure of the GelForce enables us to develop a compact finger-shaped GelForce for the robotic hand. GelForce that is developed on the basis of an elastic theory can be used to calculate surface traction fields using a conversion equation. 
However, this conversion equation cannot be analytically solved when the elastic body of the sensor has a complicated shape such as the shape of a finger. Therefore, we propose an observational method and construct a prototype of the finger-shaped GelForce. By using this prototype, we evaluate the basic performance of the finger-shaped GelForce. Then, we conduct a field test by performing grasping operations using a robotic hand. The results of this test show that using the observational method, the finger-shaped GelForce can be successfully used in a robotic hand.", "Tactile sensing is required for human-like control with robotic manipulators. Multimodality is an essential component for these tactile sensors, for robots to achieve both the perceptual accuracy required for precise control, as well as the robustness to maintain a stable grasp without causing damage to the object or the robot itself. In this study, we present a cheap, 3D-printed, compliant, dual-modal, optical tactile sensor that is capable of both high (temporal) speed sensing, analogous to pain reception in humans and high (spatial) resolution sensing, analogous to the sensing provided by Merkel cell complexes in the human fingertip. We apply three tasks for testing the sensing capabilities in both modes; first, a depth modulation task, requiring the robot to follow a target trajectory using the high-speed mode; second, a high-resolution perception task, where the sensor perceives angle and radial position relative to an object edge; and third, a tactile exploration task, where the robot uses the high-resolution mode to perceive an edge and subsequently follow the object contour. The robot is capable of modulating contact depth using the high-speed mode, high accuracy in the perception task, and accurate control using the high-resolution mode." ] }
1908.03687
2967382602
The sense of touch is essential for reliable mapping between the environment and a robot which interacts physically with objects. Presumably, an artificial tactile skin would facilitate safe interaction of the robots with the environment. In this work, we present our color-coded tactile sensor, incorporating plastic optical fibers (POF), transparent silicone rubber and an off-the-shelf color camera. Processing electronics are placed away from the sensing surface to make the sensor robust to harsh environments. Contact localization is possible thanks to the lower number of light sources compared to the number of camera POFs. Classical machine learning techniques and a hierarchical classification scheme were used for contact localization. Specifically, we generated the mapping from stimulation to sensation of a robotic perception system using our sensor. We achieved a force sensing range up to 18 N with the force resolution of around 3.6 N and the spatial resolution of 8 mm. The color-coded tactile sensor is suitable for tactile exploration and might enable further innovations in robust tactile sensing.
Researchers embedded an optical tactile sensor into the multi-modal tactile sensing system of an underwater robot gripper @cite_25 . As in @cite_10 and the Optoforce sensor, the sensing principle is based on the reflection of light delivered via POFs. The POFs can be used as force-sensing elements due to the stray light, which is considered a drawback in telecommunications @cite_1 . The deformation of a POF increases the losses of the light propagated inside, as the attenuation coefficient increases. In addition, the elasto-optic metamaterial presented in @cite_2 can change its refractive index due to pure bending. Such POFs are fabricated by the chemical vapor deposition technique. Their design generally relies upon the phenomenon of optical interference @cite_0 .
{ "cite_N": [ "@cite_1", "@cite_0", "@cite_2", "@cite_10", "@cite_25" ], "mid": [ "2775693038", "2157950978", "2807271440", "2070970799", "1999111105" ], "abstract": [ "The aim of this paper is to report the design of a low-cost plastic optical fiber (POF) pressure sensor, embedded in a mattress. We report the design of a multipoint sensor, a cheap alternative to the most common fiber sensors. The sensor is implemented using Arduino board, standard LEDs for optical communication in POF (λ = 645 nm) and a silicon light sensor. The Super ESKA® plastic fibers were used to implement the fiber intensity sensor, arranged in a 4 × 4 matrix. During the breathing cycles, the force transmitted from the lungs to the thorax is in the order of tens of Newtons, and the respiration rate is of one breath every 2–5 s (0.2–0.5 Hz). The sensor has a resolution of force applied on a single point of 2.2–4.5 N on the normalized voltage output, and a bandwidth of 10 Hz, it is then suitable to monitor the respiration movements. Another issue to be addressed is the presence of hysteresis over load cycles. The sensor was loaded cyclically to estimate the drift of the system, and the hysteresis was found to be negligible.", "This paper investigates experimentally the change of the refractive index, due to forces such as pulling and pure bending, in an optical fiber fabricated by the CVD technique. It is found that this phenomenon can be interpreted in terms of a simple model of the fiber, that is, a mechanically homogeneous circular rod. We compare the effect of the refractive-index change and that of a geometrical deformation of the fiber on transmission characteristics. A new method based on photoelasticity is also proposed to measure the curvature distribution of a fiber whose axis is deformed by external forces.", "", "This paper presents a fiber optic based tactile array sensor that can be employed in magnetic resonance environments. In contrast to conventional sensing approaches, such as resistive or capacitive-based sensing methods, which strongly rely on the generation and transmission of electronics signals, here electromagnetically isolated optical fibers were utilized to develop the tactile array sensor. The individual sensing elements of the proposed sensor detect normal forces; fusing the information from the individual elements allows the perception of the shape of probed objects. Applied forces deform a micro-flexure inside each sensor tactel, displacing a miniature mirror which, in turn, modulates the light intensity introduced by a transmitting fiber connected to a light source at its proximal end. For each tactel, the light intensity is read by a receiving fiber connected directly to a 2-D vision sensor. Computer software, such as MATLAB, is used to process the images received by the vision sensor. The calibration process was conducted by relating the applied forces to the number of activated pixels for each image received from a receiving fiber. The proposed approach allows the concurrent acquisition of data from multiple tactile sensor elements using a vision sensor such as a standard video camera. Test results of force responses and shape detection have proven the viability of this sensing concept.", "With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. 
Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. The use of a multi-modal tactile sensory system is motivated, which combines static and dynamic force sensor arrays together with an absolute force measurement system. This publication is focused on the development of a compact sensor interface for a fiber-optic sensor array, as optic measurement principles tend to have a bulky interface. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized data pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of this approach." ] }
1908.03687
2967382602
The sense of touch is essential for reliable mapping between the environment and a robot which interacts physically with objects. Presumably, an artificial tactile skin would facilitate safe interaction of the robots with the environment. In this work, we present our color-coded tactile sensor, incorporating plastic optical fibers (POF), transparent silicone rubber and an off-the-shelf color camera. Processing electronics are placed away from the sensing surface to make the sensor robust to harsh environments. Contact localization is possible thanks to the lower number of light sources compared to the number of camera POFs. Classical machine learning techniques and a hierarchical classification scheme were used for contact localization. Specifically, we generated the mapping from stimulation to sensation of a robotic perception system using our sensor. We achieved a force sensing range up to 18 N with the force resolution of around 3.6 N and the spatial resolution of 8 mm. The color-coded tactile sensor is suitable for tactile exploration and might enable further innovations in robust tactile sensing.
Laboratory prototypes of image-based tactile sensors were reported in @cite_22 and @cite_24 . In these sensing panels, LEDs and photodiodes or a camera were placed against a reflecting planar surface. When the surface deforms, it causes changes in the reflected beams. These sensors use light to detect the deformation of the contact surface, which can be used to estimate the force.
{ "cite_N": [ "@cite_24", "@cite_22" ], "mid": [ "2125959037", "1965890979" ], "abstract": [ "It is believed that the use of haptic sensors to measure the magnitude, direction, and distribution of a force will enable a robotic hand to perform dexterous operations. Therefore, we develop a new type of finger-shaped haptic sensor using GelForce technology. GelForce is a vision-based sensor that can be used to measure the distribution of force vectors, or surface traction fields. The simple structure of the GelForce enables us to develop a compact finger-shaped GelForce for the robotic hand. GelForce that is developed on the basis of an elastic theory can be used to calculate surface traction fields using a conversion equation. However, this conversion equation cannot be analytically solved when the elastic body of the sensor has a complicated shape such as the shape of a finger. Therefore, we propose an observational method and construct a prototype of the finger-shaped GelForce. By using this prototype, we evaluate the basic performance of the finger-shaped GelForce. Then, we conduct a field test by performing grasping operations using a robotic hand. The results of this test show that using the observational method, the finger-shaped GelForce can be successfully used in a robotic hand.", "We are developing a total-internal-reflection-based tactile sensor in which the shape is reconstructed using an optical reflection. This sensor consists of silicone rubber, an image pattern, and a camera. It reconstructs the shape of the sensor surface from an image of a pattern reflected at the inner sensor surface by total internal reflection. In this study, we propose precise real-time reconstruction by employing an optimization method. Furthermore, we propose to use active patterns. Deformation of the reflection image causes reconstruction errors. By controlling the image pattern, the sensor reconstructs the surface deformation more precisely. We implement the proposed optimization and active-pattern-based reconstruction methods in a reflection-based tactile sensor, and perform reconstruction experiments using the system. A precise deformation experiment confirms the linearity and precision of the reconstruction." ] }
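The abstract repeated in these records mentions classical machine learning with a hierarchical classification scheme for contact localization. The sketch below shows one generic way such a two-stage classifier could be arranged (coarse contact region first, then the cell within that region); the feature dimensionality, label layout, random data, and the choice of scikit-learn SVMs are assumptions for illustration rather than the authors' actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 30))                     # stand-in feature vectors (assumed)
region = rng.integers(0, 4, size=400)              # coarse contact-region labels 0..3
cell = region * 4 + rng.integers(0, 4, size=400)   # fine contact-cell labels 0..15

coarse_clf = SVC().fit(X, region)                  # stage 1: which region was touched
fine_clfs = {r: SVC().fit(X[region == r], cell[region == r]) for r in range(4)}

def localize(x):
    """Two-stage lookup: region first, then the cell inside that region."""
    r = int(coarse_clf.predict(x[None, :])[0])
    return int(fine_clfs[r].predict(x[None, :])[0])

print(localize(X[0]))
```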
1908.03645
2966981412
Qualitative relationships describe how increasing or decreasing one property (e.g. altitude) affects another (e.g. temperature). They are an important aspect of natural language question answering and are crucial for building chatbots or voice agents where one may enquire about qualitative relationships. Recently, a dataset about question answering involving qualitative relationships has been proposed, and a few approaches to answer such questions have been explored, at the heart of which lies a semantic parser that converts the natural language input to a suitable logical form. A problem with existing semantic parsers is that they try to directly convert the input sentences to a logical form. Since the output language varies with each application, it forces the semantic parser to learn almost everything from scratch. In this paper, we show that instead of using a semantic parser to produce the logical form, if we apply the generate-validate framework, i.e., generate a natural language description of the logical form and validate whether the natural language description follows from the input text, we get a better scope for transfer learning and our method outperforms the state-of-the-art by a large margin of 7.93 .
Our work is related both to the works on semantic parsing @cite_14 @cite_12 @cite_1 @cite_0 @cite_4 and to question answering using semantic parsing @cite_11 @cite_5 @cite_13 . The problem of QUAREL is quite similar to math word problems @cite_16 @cite_6 in the sense that both are story problems and use semantic parsing to translate the input problem into a suitable representation. Our work is also related to the work in @cite_13 that uses the generate-validate framework to answer questions w.r.t. life cycle text. @cite_13 uses the generate-validate framework to verify "given facts". In particular, it shows how rules can be used to infer new information over raw text without using a semantic parser to create a structured knowledge base. The work in @cite_13 uses a semantic parser to translate the question into one of the predefined forms. In our work, however, we use generate-validate for both question and "given fact" understanding. The work of @cite_9 is the most closely related to ours. @cite_9 proposes two models for QUAREL. One uses a state-of-the-art semantic parser @cite_0 to convert the input problem to the desired logical representation. They call this model QUASP, which obtains an accuracy of $56.1
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_9", "@cite_1", "@cite_6", "@cite_0", "@cite_5", "@cite_16", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2163274265", "", "2901386711", "2252136820", "2251349042", "2757361303", "2252016937", "2105717194", "2964338272", "2227250678", "2077824186" ], "abstract": [ "This paper presents recent work using the CHILL parser acquisition system to automate the construction of a natural-language interface for database queries. CHILL treats parser acquisition as the learning of search-control rules within a logic program representing a shift-reduce parser and uses techniques from Inductive Logic Programming to learn relational control knowledge. Starting with a general framework for constructing a suitable logical form, CHILL is able to train on a corpus comprising sentences paired with database queries and induce parsers that map subsequent sentences directly into executable queries. Experimental results with a complete database-query application for U.S. geography show that CHILL is able to learn parsers that outperform a preexisting, hand-crafted counterpart. These results demonstrate the ability of a corpus-based system to produce more than purely syntactic representations. They also provide direct evidence of the utility of an empirical approach at the level of a complete natural language application.", "", "Many natural language questions require recognizing and reasoning with qualitative relationships (e.g., in science, economics, and medicine), but are challenging to answer with corpus-based methods. Qualitative modeling provides tools that support such reasoning, but the semantic parsing task of mapping questions into those models has formidable challenges. We present QuaRel, a dataset of diverse story questions involving qualitative relationships that characterize these challenges, and techniques that begin to address them. The dataset has 2771 questions relating 19 different types of quantities. For example, \"Jenny observes that the robot vacuum cleaner moves slower on the living room carpet than on the bedroom carpet. Which carpet has more friction?\" We contribute (1) a simple and flexible conceptual framework for representing these kinds of questions; (2) the QuaRel dataset, including logical forms, exemplifying the parsing challenges; and (3) two novel models for this task, built as extensions of type-constrained semantic parsing. The first of these models (called QuaSP+) significantly outperforms off-the-shelf tools on QuaRel. The second (QuaSP+Zero) demonstrates zero-shot capability, i.e., the ability to handle new qualitative relationships without requiring additional training data, something not possible with previous models. This work thus makes inroads into answering complex, qualitative questions that require reasoning, and scaling to new relationships at low cost. The dataset and models are available at this http URL", "In this paper, we train a semantic parser that scales up to Freebase. Instead of relying on annotated logical forms, which is especially expensive to obtain at large scale, we learn from question-answer pairs. The main challenge in this setting is narrowing down the huge number of possible logical predicates for a given question. We tackle this problem in two ways: First, we build a coarse mapping from phrases to predicates using a knowledge base and a large text corpus. Second, we use a bridging operation to generate additional predicates based on neighboring predicates. 
On the dataset of Cai and Yates (2013), despite not having annotated logical forms, our system outperforms their state-of-the-art parser. Additionally, we collected a more realistic and challenging dataset of question-answer pairs and improves over a natural baseline.", "We present an approach for automatically learning to solve algebra word problems. Our algorithm reasons across sentence boundaries to construct and solve a system of linear equations, while simultaneously recovering an alignment of the variables and numbers in these equations to the problem text. The learning algorithm uses varied supervision, including either full equations or just the final answers. We evaluate performance on a newly gathered corpus of algebra word problems, demonstrating that the system can correctly answer almost 70 of the questions in the dataset. This is, to our knowledge, the first learning result for this task.", "", "Machine reading calls for programs that read and understand text, but most current work only attempts to extract facts from redundant web-scale corpora. In this paper, we focus on a new reading comprehension task that requires complex reasoning over a single document. The input is a paragraph describing a biological process, and the goal is to answer questions that require an understanding of the relations between entities and events in the process. To answer the questions, we first predict a rich structure representing the process in the paragraph. Then, we map the question to a formal query, which is executed against the predicted structure. We demonstrate that answering questions via predicted structures substantially improves accuracy over baselines that use shallower representations.", "This paper presents a novel approach to learning to solve simple arithmetic word problems. Our system, ARIS, analyzes each of the sentences in the problem statement to identify the relevant variables and their values. ARIS then maps this information into an equation that represents the problem, and enables its (trivial) solution as shown in Figure 1. The paper analyzes the arithmetic-word problems “genre”, identifying seven categories of verbs used in such problems. ARIS learns to categorize verbs with 81.2 accuracy, and is able to solve 77.7 of the problems in a corpus of standard primary school test questions. We report the first learning results on this task without reliance on predefined templates and make our data publicly available. 1", "", "We consider the problem of learning factored probabilistic CCG grammars for semantic parsing from data containing sentences paired with logical-form meaning representations. Traditional CCG lexicons list lexical items that pair words and phrases with syntactic and semantic content. Such lexicons can be inefficient when words appear repeatedly with closely related lexical content. In this paper, we introduce factored lexicons, which include both lexemes to model word meaning and templates to model systematic variation in word usage. We also present an algorithm for learning factored CCG lexicons, along with a probabilistic parse-selection model. 
Evaluations on benchmark datasets demonstrate that the approach learns highly accurate parsers, whose generalization performance benefits greatly from the lexical factoring.", "This paper presents intial work on a system that bridges from robust, broad-coverage natural language processing to precise semantics and automated reasoning, focusing on solving logic puzzles drawn from sources such as the Law School Admission Test (LSAT) and the analytic section of the Graduate Record Exam (GRE). We highlight key challenges, and discuss the representations and performance of the prototype system." ] }
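A minimal sketch, not the model proposed in the record above, of the kind of qualitative reasoning these questions require: a small table of signed relationships between quantities is used to decide whether a target quantity goes up or down. The relation table and quantity names are assumptions for illustration.

```python
# Signed qualitative relationships between quantities:
# +1 means the two quantities move together, -1 means they move oppositely.
RELATIONS = {
    ("friction", "speed"): -1,        # more friction -> lower speed
    ("friction", "heat"): +1,         # more friction -> more heat
    ("altitude", "temperature"): -1,  # higher altitude -> lower temperature
}

def infer(quantity, direction, target):
    """Return 'more', 'less' or None for the target quantity,
    given that `quantity` changes in `direction` (+1 or -1)."""
    sign = RELATIONS.get((quantity, target))
    if sign is None:
        return None
    return "more" if direction * sign > 0 else "less"

# If one carpet has more friction, what happens to the robot's speed there?
print(infer("friction", +1, "speed"))        # 'less'
print(infer("altitude", +1, "temperature"))  # 'less'
```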
1908.03405
2968351798
Early time series classification (eTSC) is the problem of classifying a time series after as few measurements as possible with the highest possible accuracy. The most critical issue of any eTSC method is to decide when enough data of a time series has been seen to take a decision: waiting for more data points usually makes the classification problem easier but delays the time at which a classification is made; in contrast, earlier classification has to cope with less input data, often leading to inferior accuracy. The state-of-the-art eTSC methods compute a fixed optimal decision time assuming that every time series has the same defined start time (like turning on a machine). However, in many real-life applications measurements start at arbitrary times (like measuring the heartbeats of a patient), implying that the best time for taking a decision varies heavily between time series. We present TEASER, a novel algorithm that models eTSC as a two-tier classification problem: in the first tier, a classifier periodically assesses the incoming time series to compute class probabilities. However, these class probabilities are only used as the output label if a second-tier classifier decides that the predicted label is reliable enough, which can happen after a different number of measurements. In an evaluation using 45 benchmark datasets, TEASER makes predictions two to three times earlier than its competitors while reaching the same or an even higher classification accuracy. We further show TEASER's superior performance using real-life use cases, namely energy monitoring and gait detection.
The techniques used for time series classification (TSC) can be broadly categorized into two classes: whole series-based methods and feature-based methods. Whole series-based methods make use of a point-wise comparison of entire TS, like 1-NN Dynamic Time Warping (DTW) @cite_12 . In contrast, feature-based classifiers rely on comparing features generated from substructures of TS. These approaches can be grouped as either using shapelets or bag-of-patterns (BOP). Shapelets are defined as TS subsequences that are maximally representative of a class @cite_26 @cite_21 . The bag-of-patterns (BOP) model @cite_10 @cite_19 @cite_25 @cite_39 breaks up a TS into a bag of substructures, represents these substructures as discrete features, and finally builds a histogram of feature counts as the basis for classification. The recent Word ExtrAction for time SEries cLassification (WEASEL) @cite_10 also conceptually builds on the bag-of-patterns (BOP) approach and is one of the fastest and most accurate classifiers. In @cite_1 , deep learning networks are applied to TSC; their best performing fully convolutional network (FCN) does not perform significantly differently from the state of the art. @cite_5 presents an overview of deep learning approaches.
{ "cite_N": [ "@cite_26", "@cite_21", "@cite_1", "@cite_39", "@cite_19", "@cite_5", "@cite_10", "@cite_25", "@cite_12" ], "mid": [ "2029438113", "", "2551393996", "", "1968354112", "2892035503", "2581867724", "2141536962", "2099302229" ], "abstract": [ "Classification of time series has been attracting great interest over the past decade. Recent empirical evidence has strongly suggested that the simple nearest neighbor algorithm is very difficult to beat for most time series problems. While this may be considered good news, given the simplicity of implementing the nearest neighbor algorithm, there are some negative consequences of this. First, the nearest neighbor algorithm requires storing and searching the entire dataset, resulting in a time and space complexity that limits its applicability, especially on resource-limited sensors. Second, beyond mere classification accuracy, we often wish to gain some insight into the data. In this work we introduce a new time series primitive, time series shapelets, which addresses these limitations. Informally, shapelets are time series subsequences which are in some sense maximally representative of a class. As we shall show with extensive empirical evaluations in diverse domains, algorithms based on the time series shapelet primitives can be interpretable, more accurate and significantly faster than state-of-the-art classifiers.", "", "We propose a simple but strong baseline for time series classification from scratch with deep neural networks. Our proposed baseline models are pure end-to-end without any heavy preprocessing on the raw data or feature crafting. The proposed Fully Convolutional Network (FCN) achieves premium performance to other state-of-the-art approaches and our exploration of the very deep neural networks with the ResNet structure is also competitive. The global average pooling in our convolutional model enables the exploitation of the Class Activation Map (CAM) to find out the contributing region in the raw data for the specific labels. Our models provides a simple choice for the real world application and a good starting point for the future research. An overall analysis is provided to discuss the generalization capability of our models, learned features, network structures and the classification semantics.", "", "Similarity search is one of the most important and probably best studied methods for data mining. In the context of time series analysis it reaches its limits when it comes to mining raw datasets. The raw time series data may be recorded at variable lengths, be noisy, or are composed of repetitive substructures. These build a foundation for state of the art search algorithms. However, noise has been paid surprisingly little attention to and is assumed to be filtered as part of a preprocessing step carried out by a human. Our Bag-of-SFA-Symbols (BOSS) model combines the extraction of substructures with the tolerance to extraneous and erroneous data using a noise reducing representation of the time series. We show that our BOSS ensemble classifier improves the best published classification accuracies in diverse application areas and on the official UCR classification benchmark datasets by a large margin.", "Time Series Classification (TSC) is an important and challenging problem in data mining. With the increase of time series data availability, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. 
This is surprising as deep learning has seen very successful applications in the last years. DNNs have indeed revolutionized the field of computer vision especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open source deep learning framework to the TSC community where we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR UEA archive) and 12 multivariate time series datasets. By training 8730 deep learning models on 97 time series datasets, we propose the most exhaustive study of DNNs for TSC to date.", "Time series (TS) occur in many scientific and commercial applications, ranging from earth surveillance to industry automation to the smart grids. An important type of TS analysis is classification, which can, for instance, improve energy load forecasting in smart grids by detecting the types of electronic devices based on their energy consumption profiles recorded by automatic sensors. Such sensor-driven applications are very often characterized by (a) very long TS and (b) very large TS datasets needing classification. However, current methods to time series classification (TSC) cannot cope with such data volumes at acceptable accuracy; they are either scalable but offer only inferior classification quality, or they achieve state-of-the-art classification quality but cannot scale to large data volumes. In this paper, we present WEASEL (Word ExtrAction for time SEries cLassification), a novel TSC method which is both fast and accurate. Like other state-of-the-art TSC methods, WEASEL transforms time series into feature vectors, using a sliding-window approach, which are then analyzed through a machine learning classifier. The novelty of WEASEL lies in its specific method for deriving features, resulting in a much smaller yet much more discriminative feature set. On the popular UCR benchmark of 85 TS datasets, WEASEL is more accurate than the best current non-ensemble algorithms at orders-of-magnitude lower classification and training times, and it is almost as accurate as ensemble classifiers, whose computational complexity makes them inapplicable even for mid-size datasets. The outstanding robustness of WEASEL is also confirmed by experiments on two real smart grid datasets, where it out-of-the-box achieves almost the same accuracy as highly tuned, domain-specific methods.", "For more than a decade, time series similarity search has been given a great deal of attention by data mining researchers. As a result, many time series representations and distance measures have been proposed. However, most existing work on time series similarity search relies on shape-based similarity matching. While some of the existing approaches work well for short time series data, they typically fail to produce satisfactory results when the sequence is long. For long sequences, it is more appropriate to consider the similarity based on the higher-level structures. 
In this work, we present a histogram-based representation for time series data, similar to the \"bag of words\" approach that is widely accepted by the text mining and information retrieval communities. We performed extensive experiments and show that our approach outperforms the leading existing methods in clustering, classification, and anomaly detection on dozens of real datasets. We further demonstrate that the representation allows rotation-invariant matching in shape datasets.", "Most time series data mining algorithms use similarity search as a core subroutine, and thus the time taken for similarity search is the bottleneck for virtually all time series data mining algorithms. The difficulty of scaling search to large datasets largely explains why most academic work on time series data mining has plateaued at considering a few millions of time series objects, while much of industry and science sits on billions of time series objects waiting to be explored. In this work we show that by using a combination of four novel ideas we can search and mine truly massive time series for the first time. We demonstrate the following extremely unintuitive fact; in large datasets we can exactly search under DTW much more quickly than the current state-of-the-art Euclidean distance search algorithms. We demonstrate our work on the largest set of time series experiments ever attempted. In particular, the largest dataset we consider is larger than the combined size of all of the time series datasets considered in all data mining papers ever published. We show that our ideas allow us to solve higher-level time series data mining problem such as motif discovery and clustering at scales that would otherwise be untenable. In addition to mining massive datasets, we will show that our ideas also have implications for real-time monitoring of data streams, allowing us to handle much faster arrival rates and or use cheaper and lower powered devices than are currently possible." ] }
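A minimal sketch of the two-tier early-classification loop described in the TEASER abstract above. A simple probability-margin threshold stands in for the paper's learned second-tier classifier; the margin, minimum prefix length and example probabilities are assumptions for illustration.

```python
import numpy as np

def early_classify(prefix_probas, margin=0.4, min_len=3):
    """Sketch of a two-tier early-classification loop.

    prefix_probas: list of class-probability vectors, one per observed
                   prefix length (the first-tier classifier's outputs).
    margin:        assumed reliability threshold standing in for the
                   learned second-tier classifier.
    Returns (predicted_class, prefix_length_used), or (None, length) if
    the stream ends before a prediction is accepted.
    """
    for t, probas in enumerate(prefix_probas, start=1):
        probas = np.asarray(probas, dtype=float)
        top_two = np.sort(probas)[-2:]                  # two largest probabilities
        reliable = (top_two[1] - top_two[0]) >= margin  # confidence margin check
        if t >= min_len and reliable:
            return int(np.argmax(probas)), t            # accept the prediction early
    return None, len(prefix_probas)                     # never confident enough

# First-tier outputs become more peaked as more measurements arrive.
stream = [[0.4, 0.35, 0.25], [0.5, 0.3, 0.2], [0.55, 0.3, 0.15], [0.8, 0.15, 0.05]]
print(early_classify(stream))  # (0, 4): class 0 accepted after 4 measurements
```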
1908.03295
2967131314
We introduce a novel single-shot object detector to ease the foreground-background class imbalance by suppressing easy negatives while increasing positives. To achieve this, we propose an Anchor Promotion Module (APM) which predicts the probability of each anchor being positive and adjusts its initial location and shape to promote both the quality and quantity of positive anchors. In addition, we design an efficient Feature Alignment Module (FAM) to extract aligned features fitting the promoted anchors with the help of both the location and shape transformation information from the APM. We assemble the two proposed modules onto VGG-16 and ResNet-101 backbone networks with an encoder-decoder architecture. Extensive experiments on MS COCO demonstrate that our model performs competitively with alternative methods (40.0 mAP on set) and runs faster (28.6 ).
Cascaded Architecture. Cascaded architectures have been explored extensively for improving classification and refining locations. Viola and Jones @cite_25 trained a series of cascaded weak classifiers to form a strong region classifier for face detection. MR-CNN @cite_2 introduced an iterative bounding box regression by feeding the bounding boxes into R-CNN several times to improve the localization accuracy during inference. More recently, Cai et al. @cite_11 proposed the Cascade R-CNN, which achieves more accurate boxes through a sequence of detectors trained with increasing IoU thresholds. Cheng et al. @cite_12 resampled hard positive detection boxes and applied an R-CNN to rescore these boxes. Different from the above works, which focus on further improving the output detection results of two-stage methods, our framework aims to recognize the positive anchor boxes and promote the anchors for one-stage detection.
{ "cite_N": [ "@cite_11", "@cite_25", "@cite_12", "@cite_2" ], "mid": [ "2964241181", "2164598857", "2962731685", "1932624639" ], "abstract": [ "In object detection, an intersection over union (IoU) threshold is required to define positives and negatives. An object detector, trained with low IoU threshold, e.g. 0.5, usually produces noisy detections. However, detection performance tends to degrade with increasing the IoU thresholds. Two main factors are responsible for this: 1) overfitting during training, due to exponentially vanishing positive samples, and 2) inference-time mismatch between the IoUs for which the detector is optimal and those of the input hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, is proposed to address these problems. It consists of a sequence of detectors trained with increasing IoU thresholds, to be sequentially more selective against close false positives. The detectors are trained stage by stage, leveraging the observation that the output of a detector is a good distribution for training the next higher quality detector. The resampling of progressively improved hypotheses guarantees that all detectors have a positive set of examples of equivalent size, reducing the overfitting problem. The same cascade procedure is applied at inference, enabling a closer match between the hypotheses and the detector quality of each stage. A simple implementation of the Cascade R-CNN is shown to surpass all single-model object detectors on the challenging COCO dataset. Experiments also show that the Cascade R-CNN is widely applicable across detector architectures, achieving consistent gains independently of the baseline detector strength. The code is available at https: github.com zhaoweicai cascade-rcnn.", "This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the \"integral image\" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers. The third contribution is a method for combining increasingly more complex classifiers in a \"cascade\" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection.", "Recent region-based object detectors are usually built with separate classification and localization branches on top of shared feature extraction networks. In this paper, we analyze failure cases of state-of-the-art detectors and observe that most hard false positives result from classification instead of localization. 
We conjecture that: (1) Shared feature representation is not optimal due to the mismatched goals of feature learning for classification and localization; (2) multi-task learning helps, yet optimization of the multi-task loss may result in sub-optimal for individual tasks; (3) large receptive field for different scales leads to redundant context information for small objects. We demonstrate the potential of detector classification power by a simple, effective, and widely-applicable Decoupled Classification Refinement (DCR) network. DCR samples hard false positives from the base classifier in Faster RCNN and trains a RCNN-styled strong classifier. Experiments show new state-of-the-art results on PASCAL VOC and COCO without any bells and whistles.", "We propose an object detection system that relies on a multi-region deep convolutional neural network (CNN) that also encodes semantic segmentation-aware features. The resulting CNN-based representation aims at capturing a diverse set of discriminative appearance factors and exhibits localization sensitivity that is essential for accurate object localization. We exploit the above properties of our recognition module by integrating it on an iterative localization mechanism that alternates between scoring a box proposal and refining its location with a deep CNN regression model. Thanks to the efficient use of our modules, we detect objects with very high localization accuracy. On the detection challenges of PASCAL VOC2007 and PASCAL VOC2012 we achieve mAP of 78.2 and 73.9 correspondingly, surpassing any other published work by a significant margin." ] }
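A minimal sketch of the anchor-labelling step behind the foreground-background imbalance discussed in the record above: each anchor is matched to the ground-truth boxes by IoU and marked positive, negative or ignored. The 0.5/0.4 thresholds and the example boxes are assumptions for illustration, not the paper's settings.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def label_anchors(anchors, gt_boxes, pos_thr=0.5, neg_thr=0.4):
    """Mark each anchor as positive (1), negative (0) or ignored (-1)."""
    labels = []
    for a in anchors:
        best = max((iou(a, g) for g in gt_boxes), default=0.0)
        labels.append(1 if best >= pos_thr else 0 if best < neg_thr else -1)
    return labels

anchors = [(0, 0, 10, 10), (5, 5, 15, 15), (50, 50, 60, 60)]
gt = [(4, 4, 14, 14)]
print(label_anchors(anchors, gt))  # [0, 1, 0]: most anchors end up as negatives
```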
1908.03391
2967874181
Individual identification is essential to animal behavior and ecology research and is of great importance for protecting endangered species. Red pandas, among the world's rarest animals, are currently identified mainly by visual inspection and microelectronic chips, which are costly and inefficient. Motivated by recent advances in computer-vision-based animal identification, in this paper, we propose an automatic framework for identifying individual red pandas based on their face images. We implement the framework by exploring well-established deep learning models with the necessary adaptations for effectively dealing with red panda images. Based on a database of red panda images that we constructed ourselves, we evaluate the effectiveness of the proposed automatic individual red panda identification method. The evaluation results show the promising potential of automatically recognizing individual red pandas from their faces. We are going to release our database and model in the public domain to promote research on automatic animal identification and particularly on techniques for protecting red pandas.
As summarized in Table , automatic individual identification methods have been studied for a number of species, including African penguins @cite_5 , northeast tigers @cite_12 , cattle @cite_11 , lemurs @cite_8 , dairy cows @cite_7 , great white sharks @cite_3 , pandas @cite_0 , primates @cite_18 , pigs @cite_9 , and ringed seals @cite_4 . Different species usually have largely different appearances; however, different individuals of the same species may differ only slightly in appearance, and can be distinguished only by fine-grained details. Almost all of the related studies rely on specific body parts of an animal to determine its identity. For species that have salient characteristics in their appearance (e.g., the spots on the breast of penguins @cite_5 , and the rings on the body of ringed seals @cite_4 ), individual identification can be done by extracting and comparing these salient features. For species whose individuals show only subtle appearance differences, such as pigs @cite_9 , lemurs @cite_8 , and pandas @cite_0 , the most common solution is to focus on the body parts with relatively rich textures and extract discriminative features from those parts.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_7", "@cite_8", "@cite_9", "@cite_3", "@cite_0", "@cite_5", "@cite_12", "@cite_11" ], "mid": [ "2962837037", "2765230784", "2770032835", "2583064257", "2791690647", "2521680879", "", "1489779347", "", "1214023991" ], "abstract": [ "We present a new method of primate face recognition, and evaluate this method on several endangered primates, including golden monkeys, lemurs, and chimpanzees. The three datasets contain a total of 11,637 images of 280 individual primates from 14 species. Primate face recognition performance is evaluated using two existing state-of-the-art open-source systems, (i) FaceNet and (ii) SphereFace, (iii) a lemur face recognition system from literature, and (iv) our new convolutional neural network (CNN) architecture called PrimNet. Three recognition scenarios are considered: verification (1:1 comparison), and both open-set and closed-set identification (1:N search). We demonstrate that PrimNet outperforms all of the other systems in all three scenarios for all primate species tested. Finally, we implement an Android application of this recognition system to be assist primate researchers and conservationists in the wild for individual recognition of primates.", "In order to monitor an animal population and to track individual animals in a non-invasive way, identification of individual animals based on certain distinctive characteristics is necessary. In this study, automatic image-based individual identification of the endangered Saimaa ringed seal (Phoca hispida saimensis) is considered. Ringed seals have a distinctive permanent pelage pattern that is unique to each individual. This can be used as a basis for the identification process. The authors propose a framework that starts with segmentation of the seal from the background and proceeds to various post-processing steps to make the pelage pattern more visible and the identification easier. Finally, two existing species independent individual identification methods are compared with a challenging data set of Saimaa ringed seal images. The results show that the segmentation and proposed post-processing steps increase the identification performance.", "An automatic procedure to identification Holstein dairy cows using tailhead images is proposed.Zernike moments are extracted and used as a shape descriptor of object features.Two groups of feature and different state-of-the-art classifiers are compared.The proposed method aims to precision livestock farming, especially, the individual identification in BCS evaluation system. The implementation of dairy cow identification will be of great significance in precision animal management based on computer vision. In this study, a computer vision technique to identify the individual dairy cows automatically was proposed and evaluated. The tailhead image, which was used as a Region of Interest (ROI), was captured in a dairy farm. Zernike moments were used as descriptors of shape characteristics for the white pattern on the ROI. Two groups of Zernike moments were extracted from the preprocessed image and classified using four alternative classifiers, namely, linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), artificial neural network (ANN) and support vector machines (SVM). The QDA classifier had the highest value, 99.7 , while the SVM classifier had the highest precision, 99.6 . Comprehensively, the QDA and SVM classifiers presented the best performance, with equal F1 score of 0.995. 
These results show that the low-order Zernike moment feature, along with the QDA and SVM algorithms is an effective approach for individual dairy cow identification and has significant applications in precision animal management.", "Long-term research of known individuals is critical for understanding the demographic and evolutionary processes that influence natural populations. Current methods for individual identification of many animals include capture and tagging techniques and or researcher knowledge of natural variation in individual phenotypes. These methods can be costly, time-consuming, and may be impractical for larger-scale, population-level studies. Accordingly, for many animal lineages, long-term research projects are often limited to only a few taxa. Lemurs, a mammalian lineage endemic to Madagascar, are no exception. Long-term data needed to address evolutionary questions are lacking for many species. This is, at least in part, due to difficulties collecting consistent data on known individuals over long periods of time. Here, we present a new method for individual identification of lemurs (LemurFaceID). LemurFaceID is a computer-assisted facial recognition system that can be used to identify individual lemurs based on photographs. LemurFaceID was developed using patch-wise Multiscale Local Binary Pattern features and modified facial image normalization techniques to reduce the effects of facial hair and variation in ambient lighting on identification. We trained and tested our system using images from wild red-bellied lemurs (Eulemur rubriventer) collected in Ranomafana National Park, Madagascar. Across 100 trials, with different partitions of training and test sets, we demonstrate that the LemurFaceID can achieve 98.7 ± 1.81 accuracy (using 2-query image fusion) in correctly identifying individual lemurs. Our results suggest that human facial recognition techniques can be modified for identification of individual lemurs based on variation in facial patterns. LemurFaceID was able to identify individual lemurs based on photographs of wild individuals with a relatively high degree of accuracy. This technology would remove many limitations of traditional methods for individual identification. Once optimized, our system can facilitate long-term research of known individuals by providing a rapid, cost-effective, and accurate method for individual identification.", "Abstract Identification of individual livestock such as pigs and cows has become a pressing issue in recent years as intensification practices continue to be adopted and precise objective measurements are required (e.g. weight). Current best practice involves the use of RFID tags which are time-consuming for the farmer and distressing for the animal to fit. To overcome this, non-invasive biometrics are proposed by using the face of the animal. We test this in a farm environment, on 10 individual pigs using three techniques adopted from the human face recognition literature: Fisherfaces, the VGG-Face pre-trained face convolutional neural network (CNN) model and our own CNN model that we train using an artificially augmented data set. Our results show that accurate individual pig recognition is possible with accuracy rates of 96.7 on 1553 images. Class Activated Mapping using Grad-CAM is used to show the regions that our network uses to discriminate between pigs.", "This paper discusses the automated visual identification of individual great white sharks from dorsal fin imagery. 
We propose a computer vision photo ID system and report recognition results over a database of thousands of unconstrained fin images. To the best of our knowledge this line of work establishes the first fully automated contour-based visual ID system in the field of animal biometrics. The approach put forward appreciates shark fins as textureless, flexible and partially occluded objects with an individually characteristic shape. In order to recover animal identities from an image we first introduce an open contour stroke model, which extends multi-scale region segmentation to achieve robust fin detection. Secondly, we show that combinatorial, scale-space selective fingerprinting can successfully encode fin individuality. We then measure the species-specific distribution of visual individuality along the fin contour via an embedding into a global fin space'. Exploiting this domain, we finally propose a non-linear model for individual animal recognition and combine all approaches into a fine-grained multi-instance framework. We provide a system evaluation, compare results to prior work, and report performance and properties in detail.", "", "African penguins (Spheniscus demersus) carry a pattern of black spots on their chests that does not change from season to season during their adult life. Further, as far as we can tell, no two penguins have exactly the same pattern. We have developed a real-time system that can confidently locate African penguins whose chests are visible within video sequences or still images. An extraction of the chest spot pattern allows the generation of a unique biometrical identifier for each penguin. Using these identifiers an authentication of filmed or photographed African penguins against a population database can be performed. This paper provides a detailed technical description of the developed system and outlines the scope and the conditions of application", "", "The increasing growth of the world trade and growing concerns of food safety by consumers need a cutting-edge animal identification and traceability systems as the simple recording and reading of tags-based systems are only effective in eradication programs of national disease. Animal biometric-based solutions, e.g. muzzle imaging system, offer an effective and secure, and rapid method of addressing the requirements of animal identification and traceability systems. In this paper, we propose a robust and fast cattle identification approach. This approach makes use of Local Binary Pattern (LBP) to extract local invariant features from muzzle print images. We also applied different classifiers including Nearest Neighbor, Naive Bayes, SVM and KNN for cattle identification. The experimental results showed that our approach is superior than existed works as ours achieves 99,5 identification accuracy. In addition, the results proved that our proposed method achieved this high accuracy even if the testing images are rotated in various angels or occluded with different parts of their sizes." ] }
1908.03391
2967874181
Individual identification is essential to animal behavior and ecology research and is of great importance for protecting endangered species. Red pandas, among the world's rarest animals, are currently identified mainly by visual inspection and microelectronic chips, which are costly and inefficient. Motivated by recent advances in computer-vision-based animal identification, in this paper, we propose an automatic framework for identifying individual red pandas based on their face images. We implement the framework by exploring well-established deep learning models with the necessary adaptations for effectively dealing with red panda images. Based on a database of red panda images that we constructed ourselves, we evaluate the effectiveness of the proposed automatic individual red panda identification method. The evaluation results show the promising potential of automatically recognizing individual red pandas from their faces. We are going to release our database and model in the public domain to promote research on automatic animal identification and particularly on techniques for protecting red pandas.
Red pandas obviously belong to those species whose individuals show only subtle appearance differences. Fortunately, their faces have relatively salient textures. According to Table , most methods for species without salient appearance differences are based on learned features. With learning-based models, researchers do not have to manually find out the exact parts that are helpful for identification. Inspired by these works, we build a deep neural network model for identifying individual red pandas based on their face images. Compared with existing animal identification methods, ours is fully automatic. Almost all existing methods are based on pre-cropped pictures of specific body parts, such as the tailhead images of dairy cows @cite_7 and the face images of pigs @cite_9 . In contrast, our method takes the image of a red panda as input, automatically detects its face, extracts features, and matches the features to the ones enrolled in the gallery to determine its identity. In addition, to the best of our knowledge, the research in this paper is the first attempt at image-based automatic individual identification of red pandas.
{ "cite_N": [ "@cite_9", "@cite_7" ], "mid": [ "2791690647", "2770032835" ], "abstract": [ "Abstract Identification of individual livestock such as pigs and cows has become a pressing issue in recent years as intensification practices continue to be adopted and precise objective measurements are required (e.g. weight). Current best practice involves the use of RFID tags which are time-consuming for the farmer and distressing for the animal to fit. To overcome this, non-invasive biometrics are proposed by using the face of the animal. We test this in a farm environment, on 10 individual pigs using three techniques adopted from the human face recognition literature: Fisherfaces, the VGG-Face pre-trained face convolutional neural network (CNN) model and our own CNN model that we train using an artificially augmented data set. Our results show that accurate individual pig recognition is possible with accuracy rates of 96.7 on 1553 images. Class Activated Mapping using Grad-CAM is used to show the regions that our network uses to discriminate between pigs.", "An automatic procedure to identification Holstein dairy cows using tailhead images is proposed.Zernike moments are extracted and used as a shape descriptor of object features.Two groups of feature and different state-of-the-art classifiers are compared.The proposed method aims to precision livestock farming, especially, the individual identification in BCS evaluation system. The implementation of dairy cow identification will be of great significance in precision animal management based on computer vision. In this study, a computer vision technique to identify the individual dairy cows automatically was proposed and evaluated. The tailhead image, which was used as a Region of Interest (ROI), was captured in a dairy farm. Zernike moments were used as descriptors of shape characteristics for the white pattern on the ROI. Two groups of Zernike moments were extracted from the preprocessed image and classified using four alternative classifiers, namely, linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), artificial neural network (ANN) and support vector machines (SVM). The QDA classifier had the highest value, 99.7 , while the SVM classifier had the highest precision, 99.6 . Comprehensively, the QDA and SVM classifiers presented the best performance, with equal F1 score of 0.995. These results show that the low-order Zernike moment feature, along with the QDA and SVM algorithms is an effective approach for individual dairy cow identification and has significant applications in precision animal management." ] }
1908.03440
2967014930
In this paper, we propose a deep reinforcement learning (DRL) solution to the grasping problem using 2.5D images as the only source of information. In particular, we developed a simulated environment where a robot equipped with a vacuum gripper has the aim of reaching blocks with planar surfaces. These blocks can have different dimensions, shapes, positions and orientations. Unity 3D allowed us to simulate a real-world setup, where a depth camera is placed in a fixed position and the stream of images is used by our policy network to learn how to solve the task. We explored different DRL algorithms and problem configurations. The experiments demonstrated the effectiveness of the proposed DRL algorithm applied to grasp tasks guided by visual depth-camera inputs. When using the proper policy, the proposed method estimates a robot tool configuration that reaches the object surface with negligible position and orientation errors. This is, to the best of our knowledge, the first successful attempt to use 2.5D images only as the input of a DRL algorithm to solve the grasping problem by regressing 3D world coordinates.
Deep reinforcement learning has been applied to solve several tasks, such as learning to play video games and robotics problems @cite_4 . In particular, it has been applied to grasp tasks with manipulator robots equipped with grippers, to locomotion tasks and also to humanoid robots @cite_12 @cite_3 . These problems are currently solved in simulated environments. Several works obtained good results, like @cite_10 , which simulates a robot and its working environment to develop a solution based on deep reinforcement learning algorithms for the grasping problem. Another recent work is @cite_17 , which simulates four complex dexterous manipulation tasks based on deep reinforcement learning. It uses a policy gradient method, in particular @cite_18 , in combination with an imitation learning algorithm, @cite_9 , which learns a policy through supervised learning to mimic the demonstrations of an expert.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_9", "@cite_3", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2130801532", "1757796397", "2062122188", "2799034341", "2788575380", "2739330054", "2963411833" ], "abstract": [ "We provide a natural gradient method that represents the steepest descent direction based on the underlying structure of the parameter space. Although gradient methods cannot make large changes in the values of the parameters, we show that the natural gradient is moving toward choosing a greedy optimal action rather than just a better action. These greedy optimal actions are those that would be chosen under one improvement step of policy iteration with approximate, compatible value functions, as defined by [9]. We then show drastic performance improvements in simple MDPs and in the more challenging MDP of Tetris.", "We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.", "Reinforcement learning deals with learning optimal or near optimal policies while interacting with the environment. Application domains with many continuous variables are difficult to solve with existing reinforcement learning methods due to the large search space. In this paper, we use a relational representation to define powerful abstractions that allow us to incorporate domain knowledge and re-use previously learned policies in other similar problems. We also describe how to learn useful actions from human traces using a behavioural cloning approach combined with an exploration phase. Since several conflicting actions may be induced for the same abstract state, reinforcement learning is used to learn an optimal policy over this reduced space. It is shown experimentally how a combination of behavioural cloning and reinforcement learning using a relational representation is powerful enough to learn how to fly an aircraft through different points in space and different turbulence conditions.", "Developing visual perception models for active agents and sensorimotor control in the physical world are cumbersome as existing algorithms are too slow to efficiently learn in real-time and robots are fragile and costly. This has given rise to learning-in-simulation which consequently casts a question on whether the results transfer to real-world. In this paper, we investigate developing real-world perception for active agents, propose Gibson Environment for this purpose, and showcase a set of perceptual tasks learned therein. Gibson is based upon virtualizing real spaces, rather than artificially designed ones, and currently includes over 1400 floor spaces from 572 full buildings. The main characteristics of Gibson are: I. being from the real-world and reflecting its semantic complexity, II. having an internal synthesis mechanism \"Goggles\" enabling deploying the trained models in real-world without needing domain adaptation, III. 
embodiment of agents and making them subject to constraints of physics and space.", "In this paper, we explore deep reinforcement learning algorithms for vision-based robotic grasping. Model-free deep reinforcement learning (RL) has been successfully applied to a range of challenging environments, but the proliferation of algorithms makes it difficult to discern which particular approach would be best suited for a rich, diverse task like grasping. To answer this question, we propose a simulated benchmark for robotic grasping that emphasizes off-policy learning and generalization to unseen objects. Off-policy learning enables utilization of grasping data over a wide variety of objects, and diversity is important to enable the method to generalize to new objects that were not seen during training. We evaluate the benchmark tasks against a variety of Q-function estimation methods, a method previously proposed for robotic grasping with deep neural network models, and a novel approach based on a combination of Monte Carlo return estimation and an off-policy correction. Our results indicate that several simple methods provide a surprisingly strong competitor to popular algorithms such as double Q-learning, and our analysis of stability sheds light on the relative tradeoffs between the algorithms.", "Learning physics-based locomotion skills is a difficult problem, leading to solutions that typically exploit prior knowledge of various forms. In this paper we aim to learn a variety of environment-aware locomotion skills with a limited amount of prior knowledge. We adopt a two-level hierarchical control framework. First, low-level controllers are learned that operate at a fine timescale and which achieve robust walking gaits that satisfy stepping-target and style objectives. Second, high-level controllers are then learned which plan at the timescale of steps by invoking desired step targets for the low-level controller. The high-level controller makes decisions directly based on high-dimensional inputs, including terrain maps or other suitable representations of the surroundings. Both levels of the control policy are trained using deep reinforcement learning. Results are demonstrated on a simulated 3D biped. Low-level controllers are learned for a variety of motion styles and demonstrate robustness with respect to force-based disturbances, terrain variations, and style interpolation. High-level controllers are demonstrated that are capable of following trails through terrains, dribbling a soccer ball towards a target location, and navigating through static or dynamic obstacles.", "" ] }
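A minimal sketch of the imitation-learning component mentioned in the record above, i.e. learning a policy by supervised regression on expert demonstrations. A linear least-squares policy and made-up demonstration data stand in for the neural networks and tasks used in the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up expert demonstrations: states (e.g. features extracted from depth
# images) and the expert's actions (e.g. a target tool configuration).
states = rng.normal(size=(200, 8))
true_w = rng.normal(size=(8, 3))
actions = states @ true_w + 0.01 * rng.normal(size=(200, 3))

# Behavioural cloning = plain supervised learning: fit a policy that maps
# states to the expert's actions (here by linear least squares).
w_hat, *_ = np.linalg.lstsq(states, actions, rcond=None)

def policy(state):
    return state @ w_hat

test_state = rng.normal(size=8)
print(policy(test_state))          # cloned action for an unseen state
print(test_state @ true_w)         # expert action it tries to imitate
```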
1908.03440
2967014930
In this paper, we propose a deep reinforcement learning (DRL) solution to the grasping problem using 2.5D images as the only source of information. In particular, we developed a simulated environment where a robot equipped with a vacuum gripper has the aim of reaching blocks with planar surfaces. These blocks can have different dimensions, shapes, positions and orientations. Unity 3D allowed us to simulate a real-world setup, where a depth camera is placed in a fixed position and the stream of images is used by our policy network to learn how to solve the task. We explored different DRL algorithms and problem configurations. The experiments demonstrated the effectiveness of the proposed DRL algorithm applied to grasp tasks guided by visual depth-camera inputs. When using the proper policy, the proposed method estimates a robot tool configuration that reaches the object surface with negligible position and orientation errors. This is, to the best of our knowledge, the first successful attempt to use 2.5D images only as the input of a DRL algorithm to solve the grasping problem by regressing 3D world coordinates.
@cite_3 is an open-source perceptual and physics simulator. This work can be seen as a bridge between learning from a simulator and transfer learning. Its main goal is to facilitate transferring models trained in a simulated environment to the real world.
{ "cite_N": [ "@cite_3" ], "mid": [ "2799034341" ], "abstract": [ "Developing visual perception models for active agents and sensorimotor control in the physical world are cumbersome as existing algorithms are too slow to efficiently learn in real-time and robots are fragile and costly. This has given rise to learning-in-simulation which consequently casts a question on whether the results transfer to real-world. In this paper, we investigate developing real-world perception for active agents, propose Gibson Environment for this purpose, and showcase a set of perceptual tasks learned therein. Gibson is based upon virtualizing real spaces, rather than artificially designed ones, and currently includes over 1400 floor spaces from 572 full buildings. The main characteristics of Gibson are: I. being from the real-world and reflecting its semantic complexity, II. having an internal synthesis mechanism \"Goggles\" enabling deploying the trained models in real-world without needing domain adaptation, III. embodiment of agents and making them subject to constraints of physics and space." ] }
1908.03440
2967014930
In this paper, we propose a deep reinforcement learning (DRL) solution to the grasping problem using 2.5D images as the only source of information. In particular, we developed a simulated environment where a robot equipped with a vacuum gripper has the aim of reaching blocks with planar surfaces. These blocks can have different dimensions, shapes, positions and orientations. Unity 3D allowed us to simulate a real-world setup, where a depth camera is placed in a fixed position and the stream of images is used by our policy network to learn how to solve the task. We explored different DRL algorithms and problem configurations. The experiments demonstrated the effectiveness of the proposed DRL algorithm applied to grasp tasks guided by visual depth-camera inputs. When using the proper policy, the proposed method estimates a robot tool configuration that reaches the object surface with negligible position and orientation errors. This is, to the best of our knowledge, the first successful attempt to use 2.5D images only as the input of a DRL algorithm to solve the grasping problem by regressing 3D world coordinates.
Although the previous works seem to obtain good results using deep reinforcement learning, many other works do not show such satisfactory results. A significant example is @cite_13 , which shows good results mainly on tasks with vector observations as input, while most of the tasks with visual observations perform poorly.
{ "cite_N": [ "@cite_13" ], "mid": [ "2889987506" ], "abstract": [ "Recent advances in Deep Reinforcement Learning and Robotics have been driven by the presence of increasingly realistic and complex simulation environments. Many of the existing platforms, however, provide either unrealistic visuals, inaccurate physics, low task complexity, or a limited capacity for interaction among artificial agents. Furthermore, many platforms lack the ability to flexibly configure the simulation, hence turning the simulation environment into a black-box from the perspective of the learning system. Here we describe a new open source toolkit for creating and interacting with simulation environments using the Unity platform: Unity ML-Agents Toolkit. By taking advantage of Unity as a simulation platform, the toolkit enables the development of learning environments which are rich in sensory and physical complexity, provide compelling cognitive challenges, and support dynamic multi-agent interaction. We detail the platform design, communication protocol, set of example environments, and variety of training scenarios made possible via the toolkit." ] }
1908.03440
2967014930
In this paper, we propose a deep reinforcement learning (DRL) solution to the grasping problem using 2.5D images as the only source of information. In particular, we developed a simulated environment where a robot equipped with a vacuum gripper has the aim of reaching blocks with planar surfaces. These blocks can have different dimensions, shapes, positions and orientations. Unity 3D allowed us to simulate a real-world setup, where a depth camera is placed in a fixed position and the stream of images is used by our policy network to learn how to solve the task. We explored different DRL algorithms and problem configurations. The experiments demonstrated the effectiveness of the proposed DRL algorithm applied to grasp tasks guided by visual depth camera inputs. When using the proper policy, the proposed method estimates a robot tool configuration that reaches the object surface with negligible position and orientation errors. This is, to the best of our knowledge, the first successful attempt at using 2.5D images only as the input of a DRL algorithm to solve the grasping problem by regressing 3D world coordinates.
Another problem in deep reinforcement learning, and deep learning in general, is hyper-parameter tuning. Deep reinforcement learning problems involve a huge number of hyper-parameters that affect the training process. It is necessary to tune hyper-parameters such as the learning rate, batch size, random seed and architecture of the policy networks, together with defining correct reward functions and normalizing them before feeding the network. In @cite_0 the authors introduce a large comparison of state-of-the-art problems, algorithms and implementations to highlight how deep reinforcement learning is still highly sensitive to hyper-parameter variation.
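As a small illustration of these reproducibility concerns, the sketch below evaluates one fixed hyper-parameter configuration under several random seeds and includes a running reward-normalization helper; train() is a placeholder that only mimics seed-to-seed variance, and the hyper-parameter names, values and noise model are assumptions for illustration, not taken from the cited study.

```python
import random
import statistics

def make_normalizer():
    """Running mean/std reward normalization (Welford), a common choice when
    hand-designed rewards need normalizing before being fed to the network."""
    stats = {"n": 0, "mean": 0.0, "m2": 0.0}
    def normalize(reward):
        stats["n"] += 1
        delta = reward - stats["mean"]
        stats["mean"] += delta / stats["n"]
        stats["m2"] += delta * (reward - stats["mean"])
        std = (stats["m2"] / max(stats["n"] - 1, 1)) ** 0.5
        return (reward - stats["mean"]) / (std + 1e-8)
    return normalize

def train(hparams, seed):
    """Placeholder training run: returns a noisy 'final return' to mimic the
    seed-to-seed variance reported in reproducibility studies."""
    rng = random.Random(seed)
    base = 100.0 * hparams["learning_rate"] * hparams["batch_size"] ** 0.25
    return base + rng.gauss(0.0, 20.0)

if __name__ == "__main__":
    hparams = {"learning_rate": 3e-4, "batch_size": 64, "gamma": 0.99}
    returns = [train(hparams, seed) for seed in range(5)]
    print("mean return over seeds:", round(statistics.mean(returns), 2),
          "std:", round(statistics.pstdev(returns), 2))
    normalize = make_normalizer()
    print("normalized rewards:", [round(normalize(r), 2) for r in returns])
```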
{ "cite_N": [ "@cite_0" ], "mid": [ "2754517384" ], "abstract": [ "In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning (RL). Reproducing existing work and accurately judging the improvements offered by novel methods is vital to sustaining this progress. Unfortunately, reproducing results for state-of-the-art deep RL methods is seldom straightforward. In particular, non-determinism in standard benchmark environments, combined with variance intrinsic to the methods, can make reported results tough to interpret. Without significance metrics and tighter standardization of experimental reporting, it is difficult to determine whether improvements over the prior state-of-the-art are meaningful. In this paper, we investigate challenges posed by reproducibility, proper experimental techniques, and reporting procedures. We illustrate the variability in reported metrics and results when comparing against common baselines and suggest guidelines to make future results in deep RL more reproducible. We aim to spur discussion about how to ensure continued progress in the field by minimizing wasted effort stemming from results that are non-reproducible and easily misinterpreted." ] }
1908.03030
2966691903
We present an approach to accurately estimate high fidelity markerless 3D pose and volumetric reconstruction of human performance using only a small set of camera views ( @math ). Our method utilises a dual loss in a generative adversarial network that can yield improved performance in both reconstruction and pose estimate error. We use a deep prior implicitly learnt by the network trained over a dataset of view-ablated multi-view video footage of a wide range of subjects and actions. Uniquely we use a multi-channel symmetric 3D convolutional encoder-decoder with a dual loss to enforce the learning of a latent embedding that enforces skeletal joint positions and a deep volumetric reconstruction of the performer. An extensive evaluation is performed with state of the art performance reported on three datasets: Human 3.6M, TotalCapture and TotalCaptureOutdoor. The method opens the possibility of high-end volumetric and pose performance capture in on-set and prosumer scenarios where time or cost prohibit a high witness camera count.
Super-resolution: The classical solution to image restoration and super-resolution was to combine multiple data sources (e.g. multiple images obtained at sub-pixel misalignments @cite_6 , or self-similar patches within a single image @cite_31 @cite_33 ), and then incorporate these within a regularisation constraint such as total variation @cite_46 . Microscopy has applied super-resolution for volumetric data via depth of field @cite_34 , and through multi-spectral sensing data @cite_48 via sparse coding, a machine learning-based super-resolution approach that learns the visual characteristics of the supplied training images and then applies the learnt model within an optimisation framework to enhance detail. More recently, as with all computer vision domains, convolutional neural network (CNN) autoencoders have been applied to image @cite_16 @cite_3 and video upscaling @cite_32 , while symmetric autoencoders have effectively learnt an image transformation between clean and synthetically noisy images @cite_41 . Similarly, Dong @cite_12 trained end-to-end networks to model image up-scaling or super-resolution.
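As a rough sketch of the end-to-end CNN up-scaling idea referenced above (and not a reimplementation of any cited method), the following three-layer network upsamples with bicubic interpolation and then restores detail; kernel sizes, channel counts and the pre-upsampling step are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRNet(nn.Module):
    """Minimal super-resolution CNN: feature extraction, non-linear mapping,
    reconstruction, applied after a bicubic upsampling of the input."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.extract = nn.Conv2d(channels, 64, kernel_size=9, padding=4)
        self.map = nn.Conv2d(64, 32, kernel_size=1)
        self.reconstruct = nn.Conv2d(32, channels, kernel_size=5, padding=2)

    def forward(self, lr_image, scale: int = 2):
        # Upsample first (bicubic), then let the network restore detail.
        x = F.interpolate(lr_image, scale_factor=scale, mode="bicubic",
                          align_corners=False)
        x = F.relu(self.extract(x))
        x = F.relu(self.map(x))
        return self.reconstruct(x)

if __name__ == "__main__":
    out = TinySRNet()(torch.rand(1, 3, 32, 32))
    print(out.shape)   # (1, 3, 64, 64)
```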
{ "cite_N": [ "@cite_31", "@cite_33", "@cite_41", "@cite_48", "@cite_32", "@cite_6", "@cite_16", "@cite_3", "@cite_46", "@cite_34", "@cite_12" ], "mid": [ "2534320940", "", "2098477387", "2614746365", "2476548250", "2061398022", "2146337213", "1919542679", "", "2746752588", "2964304707" ], "abstract": [ "Methods for super-resolution can be broadly classified into two families of methods: (i) The classical multi-image super-resolution (combining images obtained at subpixel misalignments), and (ii) Example-Based super-resolution (learning correspondence between low and high resolution image patches from a database). In this paper we propose a unified framework for combining these two families of methods. We further show how this combined approach can be applied to obtain super resolution from as little as a single image (with no database or prior examples). Our approach is based on the observation that patches in a natural image tend to redundantly recur many times inside the image, both within the same scale, as well as across different scales. Recurrence of patches within the same image scale (at subpixel misalignments) gives rise to the classical super-resolution, whereas recurrence of patches across different scales of the same image gives rise to example-based super-resolution. Our approach attempts to recover at each pixel its best possible resolution increase based on its patch redundancy within and across scales.", "", "We present an approach to low-level vision that combines two main ideas: the use of convolutional networks as an image processing architecture and an unsupervised learning procedure that synthesizes training samples from specific noise models. We demonstrate this approach on the challenging problem of natural image denoising. Using a test set with a hundred natural images, we find that convolutional networks provide comparable and in some cases superior performance to state of the art wavelet and Markov random field (MRF) methods. Moreover, we find that a convolutional network offers similar performance in the blind de-noising setting as compared to other techniques in the non-blind setting. We also show how convolutional networks are mathematically related to MRF approaches by presenting a mean field theory for an MRF specially designed for image denoising. Although these approaches are related, convolutional networks avoid computational difficulties in MRF approaches that arise from probabilistic learning and inference. This makes it possible to learn image processing architectures that have a high degree of representational power (we train models with over 15,000 parameters), but whose computational expense is significantly less than that associated with inference in MRF approaches with even hundreds of parameters.", "Most multispectral remote sensors (e.g. QuickBird, IKONOS, and Landsat 7 ETM+) provide low-spatial high-spectral resolution multispectral (MS) or high-spatial low-spectral resolution panchromatic (PAN) images, separately. In order to reconstruct a high-spatial high-spectral resolution multispectral image volume, either the information in MS and PAN images are fused (i.e. pansharpening) or super-resolution reconstruction (SRR) is used with only MS images captured on different dates. Existing methods do not utilize temporal information of MS and high spatial resolution of PAN images together to improve the resolution. 
In this paper, we propose a multiframe SRR algorithm using pansharpened MS images, taking advantage of both temporal and spatial information available in multispectral imagery, in order to exceed spatial resolution of given PAN images. We first apply pansharpening to a set of multispectral images and their corresponding PAN images captured on different dates. Then, we use the pansharpened multispectral images as input to the proposed wavelet-based multiframe SRR method to yield full volumetric SRR. The proposed SRR method is obtained by deriving the subband relations between multitemporal MS volumes. We demonstrate the results on Landsat 7 ETM+ images comparing our method to conventional techniques.", "Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.", "In this paper we propose a new method for upsampling images which is capable of generating sharp edges with reduced input-resolution grid-related artifacts. The method is based on a statistical edge dependency relating certain edge features of two different resolutions, which is generically exhibited by real-world images. While other solutions assume some form of smoothness, we rely on this distinctive edge dependency as our prior knowledge in order to increase image resolution. In addition to this relation we require that intensities are conserved; the output image must be identical to the input image when downsampled to the original resolution. Altogether the method consists of solving a constrained optimization problem, attempting to impose the correct edge relation and conserve local intensities with respect to the low-resolution input image. Results demonstrate the visual importance of having such edge features properly matched, and the method's capability to produce images in which sharp edges are successfully reconstructed.", "We present a novel approach to low-level vision problems that combines sparse coding and deep networks pre-trained with denoising auto-encoder (DA). We propose an alternative training scheme that successfully adapts DA, originally designed for unsupervised feature learning, to the tasks of image denoising and blind inpainting. 
Our method's performance in the image denoising task is comparable to that of KSVD which is a widely used sparse coding technique. More importantly, in blind image inpainting task, the proposed method provides solutions to some complex problems that have not been tackled before. Specifically, we can automatically remove complex patterns like superimposed text from an image, rather than simple patterns like pixels missing at random. Moreover, the proposed method does not need the information regarding the region that requires inpainting to be given a priori. Experimental results demonstrate the effectiveness of the proposed method in the tasks of image denoising and blind inpainting. We also show that our new training scheme for DA is more effective and can improve the performance of unsupervised feature learning.", "Deep learning techniques have been successfully applied in many areas of computer vision, including low-level image restoration problems. For image super-resolution, several models based on deep neural networks have been recently proposed and attained superior performance that overshadows all previous handcrafted models. The question then arises whether large-capacity and data-driven models have become the dominant solution to the ill-posed super-resolution problem. In this paper, we argue that domain expertise represented by the conventional sparse coding model is still valuable, and it can be combined with the key ingredients of deep learning to achieve further improved results. We show that a sparse coding model particularly designed for super-resolution can be incarnated as a neural network, and trained in a cascaded structure from end to end. The interpretation of the network based on sparse coding leads to much more efficient and effective training, as well as a reduced model size. Our model is evaluated on a wide range of images, and shows clear advantage over existing state-of-the-art methods in terms of both restoration accuracy and human subjective quality.", "", "We here report for the first time the synergistic implementation of structured illumination microscopy (SIM) and multifocus microscopy (MFM). This imaging modality is designed to alleviate the prob ...", "Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure. We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets." ] }
1908.03030
2966691903
We present an approach to accurately estimate high fidelity markerless 3D pose and volumetric reconstruction of human performance using only a small set of camera views ( @math ). Our method utilises a dual loss in a generative adversarial network that can yield improved performance in both reconstruction and pose estimate error. We use a deep prior implicitly learnt by the network trained over a dataset of view-ablated multi-view video footage of a wide range of subjects and actions. Uniquely we use a multi-channel symmetric 3D convolutional encoder-decoder with a dual loss to enforce the learning of a latent embedding that enforces skeletal joint positions and a deep volumetric reconstruction of the performer. An extensive evaluation is performed with state of the art performance reported on three datasets: Human 3.6M, TotalCapture and TotalCaptureOutdoor. The method opens the possibility of high-end volumetric and pose performance capture in on-set and prosumer scenarios where time or cost prohibit a high witness camera count.
Bottom-up pose estimation is driven by image parsing to isolate components: Srinivasan @cite_42 used graph-cuts to parse a subset of salient shapes from an image and group these into a model of a person. Ren @cite_39 recursively splits Canny edge contours into segments, classifying each as a putative body part using cues such as parallelism. Ren @cite_1 also used Bag of Visual Words for implicit pose estimation as part of a pose similarity system for dance video retrieval. More recently, studies have begun to leverage the power of convolutional neural networks, following in the wake of the eye-opening results of Krizhevsky @cite_15 on image recognition. In DeepPose, Toshev @cite_24 used a cascade of convolutional neural networks to estimate 2D pose in images. Descriptors learnt by a CNN have also been used in 2D pose estimation from very low-resolution images @cite_52 . Elhayek @cite_50 used MVV with a Convnet to produce 2D pose estimations, while Rhodin @cite_55 minimised an edge energy inspired by volume ray casting to deduce the 3D pose.
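For intuition, the simplest CNN-based formulation mentioned above, direct regression of 2D joint coordinates from an image, can be sketched as below; the small backbone, joint count and sigmoid output are assumptions for illustration and do not reproduce the cascade or any cited architecture.

```python
import torch
import torch.nn as nn

class JointRegressor(nn.Module):
    """Directly regresses normalized (x, y) coordinates for each body joint."""
    def __init__(self, n_joints: int = 16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Output (x, y) per joint, normalized to [0, 1] image coordinates.
        self.head = nn.Linear(128, n_joints * 2)

    def forward(self, image):
        coords = torch.sigmoid(self.head(self.backbone(image)))
        return coords.view(image.shape[0], -1, 2)

if __name__ == "__main__":
    pred = JointRegressor()(torch.rand(4, 3, 224, 224))
    print(pred.shape)   # (4, 16, 2)
```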
{ "cite_N": [ "@cite_55", "@cite_42", "@cite_1", "@cite_52", "@cite_39", "@cite_24", "@cite_50", "@cite_15" ], "mid": [ "2495081533", "2018793343", "2089665312", "1950149599", "2161361693", "", "2079846689", "2163605009" ], "abstract": [ "Markerless motion capture algorithms require a 3D body with properly personalized skeleton dimension and or body shape and appearance to successfully track a person. Unfortunately, many tracking methods consider model personalization a different problem and use manual or semi-automatic model initialization, which greatly reduces applicability. In this paper, we propose a fully automatic algorithm that jointly creates a rigged actor model commonly used for animation – skeleton, volumetric shape, appearance, and optionally a body surface – and estimates the actor’s motion from multi-view video input only. The approach is rigorously designed to work on footage of general outdoor scenes recorded with very few cameras and without background subtraction. Our method uses a new image formation model with analytic visibility and analytically differentiable alignment energy. For reconstruction, 3D body shape is approximated as a Gaussian density field. For pose and shape estimation, we minimize a new edge-based alignment energy inspired by volume ray casting in an absorbing medium. We further propose a new statistical human body model that represents the body surface, volumetric Gaussian density, and variability in skeleton shape. Given any multi-view sequence, our method jointly optimizes the pose and shape parameters of this model fully automatically in a spatiotemporal way.", "Recognizing humans, estimating their pose and segmenting their body parts are key to high-level image understanding. Because humans are highly articulated, the range of deformations they undergo makes this task extremely challenging. Previous methods have focused largely on heuristics or pairwise part models in approaching this problem. We propose a bottom-up parsing of increasingly more complete partial body masks guided by a parse tree. At each level of the parsing process, we evaluate the partial body masks directly via shape matching with exemplars, without regard to how the parses are formed. The body is evaluated as a whole, not the sum of its constituent parses, unlike previous approaches. Multiple image segmentations are included at each of the levels of the parsing, to augment existing parses or to introduce ones. Our method yields both a pose estimate as well as a segmentation of the human. We demonstrate competitive results on this challenging task with relatively few training examples on a dataset of baseball players with wide pose variation. Our method is comparatively simple and could be easily extended to other objects.", "We describe a system for matching human posture (pose) across a large cross-media archive of dance footage spanning nearly 100 years, comprising digitized photographs and videos of rehearsals and performances. This footage presents unique challenges due to its age, quality and diversity. We propose a forest-like pose representation combining visual structure (self-similarity) descriptors over multiple scales, without explicitly detecting limb positions which would be infeasible for our data. We explore two complementary multi-scale representations, applying passage retrieval and latent Dirichlet allocation (LDA) techniques inspired by the text retrieval domain, to the problem of pose matching. 
The result is a robust system capable of quickly searching large cross-media collections for similarity to a visually specified query pose. We evaluate over a cross-section of the UK National Research Centre for Dance's (UK-NRCD), and the Siobhan Davies Replay's (SDR) digital dance archives, using visual queries supplied by dance professionals. We demonstrate significant performance improvements over two base-lines: classical single and multi-scale bag of visual words (BoVW) and spatial pyramid kernel (SPK) matching .", "We address the task of articulated pose estimation from video sequences. We consider an interactive setting where the initial pose is annotated in the first frame. Our system synthesizes a large number of hypothetical scenes with different poses and camera positions by applying geometric deformations to the first frame. We use these synthetic images to generate a custom labeled training set for the video in question. This training data is then used to learn a regressor (for future frames) that predicts joint locations from image data. Notably, our training set is so accurate that nearest-neighbor (NN) matching on low-resolution pixel features works well. As such, we name our underlying representation “tiny synthetic videos”. We present quantitative results the Friends benchmark dataset that suggests our simple approach matches or exceed state-of-the-art.", "The goal of this work is to recover human body configurations from static images. Without assuming a priori knowledge of scale, pose or appearance, this problem is extremely challenging and demands the use of all possible sources of information. We develop a framework which can incorporate arbitrary pairwise constraints between body parts, such as scale compatibility, relative position, symmetry of clothing and smooth contour connections between parts. We detect candidate body parts from bottom-up using parallelism, and use various pairwise configuration constraints to assemble them together into body configurations. To find the most probable configuration, we solve an integer quadratic programming problem with a standard technique using linear approximations. Approximate IQP allows us to incorporate much more information than the traditional dynamic programming and remains computationally efficient. 15 hand-labeled images are used to train the low-level part detector and learn the pairwise constraints. We show test results on a variety of images.", "", "We present a novel method for accurate marker-less capture of articulated skeleton motion of several subjects in general scenes, indoors and outdoors, even from input filmed with as few as two cameras. Our approach unites a discriminative image-based joint detection method with a model-based generative motion tracking algorithm through a combined pose optimization energy. The discriminative part-based pose detection method, implemented using Convolutional Networks (ConvNet), estimates unary potentials for each joint of a kinematic skeleton model. These unary potentials are used to probabilistically extract pose constraints for tracking by using weighted sampling from a pose posterior guided by the model. In the final energy, these constraints are combined with an appearance-based model-to-image similarity term. Poses can be computed very efficiently using iterative local optimization, as ConvNet detection is fast, and our formulation yields a combined pose estimation energy with analytic derivatives. 
In combination, this enables to track full articulated joint angles at state-of-the-art accuracy and temporal stability with a very low number of cameras.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry." ] }
1908.03030
2966691903
We present an approach to accurately estimate high fidelity markerless 3D pose and volumetric reconstruction of human performance using only a small set of camera views ( @math ). Our method utilises a dual loss in a generative adversarial network that can yield improved performance in both reconstruction and pose estimate error. We use a deep prior implicitly learnt by the network trained over a dataset of view-ablated multi-view video footage of a wide range of subjects and actions. Uniquely we use a multi-channel symmetric 3D convolutional encoder-decoder with a dual loss to enforce the learning of a latent embedding that enforces skeletal joint positions and a deep volumetric reconstruction of the performer. An extensive evaluation is performed with state of the art performance reported on three datasets: Human 3.6M, TotalCapture and TotalCaptureOutdoor. The method opens the possibility of high-end volumetric and pose performance capture in on-set and prosumer scenarios where time or cost prohibit a high witness camera count.
More recently, given the success and accuracy of 2D joint estimation @cite_53 , several works lift 2D detections to 3D using learning or geometric reasoning, aiming to recover the missing depth dimension in the images. Sanzari @cite_7 estimates the location of 2D joints before predicting 3D pose using appearance and probable 3D pose of the discovered parts with a hierarchical Bayesian model, while Zhou @cite_2 integrates 2D, 3D and temporal information to account for uncertainties in the data. The challenge of estimating 3D human pose from MVV is currently less explored, generally casting 3D pose estimation as a coordinate regression task, with the target output being the spatial @math coordinates of a joint with respect to a known root node such as the pelvis. Trumble @cite_43 used a flattened MVV-based spherical histogram with a 2D convnet to estimate pose, while Pavlakos @cite_5 used a simple volumetric representation in a 3D convnet for pose estimation, and Wei @cite_18 performed related work in aligning pairs of joints to estimate 3D human pose. Differently, Huang @cite_51 constructed a 4-D mesh of the subject from video reconstruction to estimate the 3D pose.
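The coordinate-regression view described above, lifting detected 2D joints to root-relative 3D joints, can be sketched as a small fully connected network; the layer widths, dropout and joint count are assumptions chosen for illustration rather than the design of any cited method.

```python
import torch
import torch.nn as nn

class LiftingMLP(nn.Module):
    """Maps 2D joint detections to root-relative 3D joint positions."""
    def __init__(self, n_joints: int = 17, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints * 2, hidden), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(hidden, n_joints * 3),
        )

    def forward(self, joints_2d):            # (B, n_joints, 2)
        b, j, _ = joints_2d.shape
        out = self.net(joints_2d.reshape(b, -1)).view(b, j, 3)
        return out - out[:, :1]              # express joints relative to root

if __name__ == "__main__":
    pose3d = LiftingMLP()(torch.rand(8, 17, 2))
    print(pose3d.shape)                      # (8, 17, 3), root at the origin
```

Subtracting the first joint expresses every prediction relative to the root node (e.g. the pelvis), matching the output convention described above.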
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_53", "@cite_43", "@cite_2", "@cite_5", "@cite_51" ], "mid": [ "", "2520324844", "", "2555849634", "2963688992", "2554247908", "2154189068" ], "abstract": [ "", "We introduce a 3D human pose estimation method from single image, based on a hierarchical Bayesian non-parametric model. The proposed model relies on a representation of the idiosyncratic motion of human body parts, which is captured by a subdivision of the human skeleton joints into groups. A dictionary of motion snapshots for each group is generated. The hierarchy ensures to integrate the visual features within the pose dictionary. Given a query image, the learned dictionary is used to estimate the likelihood of the group pose based on its visual features. The full-body pose is reconstructed taking into account the consistency of the connected group poses. The results show that the proposed approach is able to accurately reconstruct the 3D pose of previously unseen subjects.", "", "We propose a human performance capture system employing convolutional neural networks (CNN) to estimate human pose from a volumetric representation of a performer derived from multiple view-point video (MVV).We compare direct CNN pose regression to the performance of an affine invariant pose descriptor learned by a CNN through a classification task. A non-linear manifold embedding is learned between the descriptor and articulated pose spaces, enabling regression of pose from the source MVV. The results are evaluated against ground truth pose data captured using a Vicon marker-based system and demonstrate good generalisation over a range of human poses, providing a system that requires no special suit to be worn by the performer.", "This paper addresses the challenge of 3D full-body human pose estimation from a monocular image sequence. Here, two cases are considered: (i) the image locations of the human joints are provided and (ii) the image locations of joints are unknown. In the former case, a novel approach is introduced that integrates a sparsity-driven 3D geometric prior and temporal smoothness. In the latter case, the former case is extended by treating the image locations of the joints as latent variables to take into account considerable uncertainties in 2D joint locations. A deep fully convolutional network is trained to predict the uncertainty maps of the 2D joint locations. The 3D pose estimates are realized via an Expectation-Maximization algorithm over the entire sequence, where it is shown that the 2D joint location uncertainties can be conveniently marginalized out during inference. Empirical evaluation on the Human3.6M dataset shows that the proposed approaches achieve greater 3D pose estimation accuracy over state-of-the-art baselines. Further, the proposed approach outperforms a publicly available 2D pose estimation baseline on the challenging PennAction dataset.", "This paper addresses the challenge of 3D human pose estimation from a single color image. Despite the general success of the end-to-end learning paradigm, top performing approaches employ a two-step solution consisting of a Convolutional Network (ConvNet) for 2D joint localization and a subsequent optimization step to recover 3D pose. In this paper, we identify the representation of 3D pose as a critical issue with current ConvNet approaches and make two important contributions towards validating the value of end-to-end learning for this task. 
First, we propose a fine discretization of the 3D space around the subject and train a ConvNet to predict per voxel likelihoods for each joint. This creates a natural representation for 3D pose and greatly improves performance over the direct regression of joint coordinates. Second, to further improve upon initial estimates, we employ a coarse-to-fine prediction scheme. This step addresses the large dimensionality increase and enables iterative refinement and repeated processing of the image features. The proposed approach outperforms all state-of-the-art methods on standard benchmarks achieving a relative error reduction greater than 30 on average. Additionally, we investigate using our volumetric representation in a related architecture which is suboptimal compared to our end-to-end approach, but is of practical interest, since it enables training when no image with corresponding 3D groundtruth is available, and allows us to present compelling results for in-the-wild images.", "We present a novel hybrid representation for character animation from 4D Performance Capture (4DPC) data which combines skeletal control with surface motion graphs. 4DPC data are temporally aligned 3D mesh sequence reconstructions of the dynamic surface shape and associated appearance from multiple-view video. The hybrid representation supports the production of novel surface sequences which satisfy constraints from user-specified key-frames or a target skeletal motion. Motion graph path optimisation concatenates fragments of 4DPC data to satisfy the constraints while maintaining plausible surface motion at transitions between sequences. Space-time editing of the mesh sequence using a learned part-based Laplacian surface deformation model is performed to match the target skeletal motion and transition between sequences. The approach is quantitatively evaluated for three 4DPC datasets with a variety of clothing styles. Results for key-frame animation demonstrate production of novel sequences that satisfy constraints on timing and position of less than 1p of the sequence duration and path length. Evaluation of motion-capture-driven animation over a corpus of 130 sequences shows that the synthesised motion accurately matches the target skeletal motion. The combination of skeletal control with the surface motion graph extends the range and style of motion which can be produced while maintaining the natural dynamics of shape and appearance from the captured performance." ] }
1908.03030
2966691903
We present an approach to accurately estimate high fidelity markerless 3D pose and volumetric reconstruction of human performance using only a small set of camera views ( @math ). Our method utilises a dual loss in a generative adversarial network that can yield improved performance in both reconstruction and pose estimate error. We use a deep prior implicitly learnt by the network trained over a dataset of view-ablated multi-view video footage of a wide range of subjects and actions. Uniquely we use a multi-channel symmetric 3D convolutional encoder-decoder with a dual loss to enforce the learning of a latent embedding that enforces skeletal joint positions and a deep volumetric reconstruction of the performer. An extensive evaluation is performed with state of the art performance reported on three datasets: Human 3.6M, TotalCapture and TotalCaptureOutdoor. The method opens the possibility of high-end volumetric and pose performance capture in on-set and prosumer scenarios where time or cost prohibit a high witness camera count.
Since detecting pose for each frame individually leads to incoherent and jittery predictions over a sequence, many approaches exploit temporal information. Andriluka @cite_28 used tracking-by-detection to associate 2D poses detected in each frame individually and used them to retrieve 3D pose, while Tekin @cite_10 used a CNN to first align bounding boxes of successive frames, so that the person in the image is always at the centre of the box, and then extracted 3D HOG features over the spatiotemporal volume from which the 3D pose of the central frame is regressed. Lin @cite_19 performed a multi-stage sequential refinement using LSTMs @cite_8 to predict 3D pose sequences from previously predicted 2D pose representations and 3D poses, while Hossain @cite_49 learns the temporal context of a sequence using a sequence-to-sequence network.
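To illustrate the use of temporal context in its simplest form, the sketch below runs a window of per-frame 2D joint detections through an LSTM and regresses a 3D pose for every frame; the plain LSTM, hidden size and joint count are assumptions for illustration, not the specific recurrent or sequence-to-sequence designs of the cited works.

```python
import torch
import torch.nn as nn

class TemporalLifter(nn.Module):
    """Predicts a 3D pose per frame from a sequence of 2D joint detections."""
    def __init__(self, n_joints: int = 17, hidden: int = 256):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_joints * 2, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_joints * 3)

    def forward(self, joints_2d_seq):        # (B, T, n_joints, 2)
        b, t, j, _ = joints_2d_seq.shape
        feats, _ = self.lstm(joints_2d_seq.reshape(b, t, -1))
        return self.head(feats).view(b, t, j, 3)

if __name__ == "__main__":
    seq3d = TemporalLifter()(torch.rand(2, 30, 17, 2))
    print(seq3d.shape)                        # (2, 30, 17, 3)
```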
{ "cite_N": [ "@cite_8", "@cite_28", "@cite_19", "@cite_49", "@cite_10" ], "mid": [ "", "2080873731", "2963383668", "2769237672", "2963592930" ], "abstract": [ "", "Human pose estimation has made significant progress during the last years. However current datasets are limited in their coverage of the overall pose estimation challenges. Still these serve as the common sources to evaluate, train and compare different models on. In this paper we introduce a novel benchmark \"MPII Human Pose\" that makes a significant advance in terms of diversity and difficulty, a contribution that we feel is required for future developments in human body models. This comprehensive dataset was collected using an established taxonomy of over 800 human activities [1]. The collected images cover a wider variety of human activities than previous datasets including various recreational, occupational and householding activities, and capture people from a wider range of viewpoints. We provide a rich set of labels including positions of body joints, full 3D torso and head orientation, occlusion labels for joints and body parts, and activity labels. For each image we provide adjacent video frames to facilitate the use of motion information. Given these rich annotations we perform a detailed analysis of leading human pose estimation approaches and gaining insights for the success and failures of these methods.", "3D Human articulated pose recovery from monocular image sequences is very challenging due to the diverse appearances, viewpoints, occlusions, and also the human 3D pose is inherently ambiguous from the monocular imagery. It is thus critical to exploit rich spatial and temporal long-range dependencies among body joints for accurate 3D pose sequence prediction. Existing approaches usually manually design some elaborate prior terms and human body kinematic constraints for capturing structures, which are often insufficient to exploit all intrinsic structures and not scalable for all scenarios. In contrast, this paper presents a Recurrent 3D Pose Sequence Machine(RPSM) to automatically learn the image-dependent structural constraint and sequence-dependent temporal context by using a multi-stage sequential refinement. At each stage, our RPSM is composed of three modules to predict the 3D pose sequences based on the previously learned 2D pose representations and 3D poses: (i) a 2D pose module extracting the image-dependent pose representations, (ii) a 3D pose recurrent module regressing 3D poses and (iii) a feature adaption module serving as a bridge between module (i) and (ii) to enable the representation transformation from 2D to 3D domain. These three modules are then assembled into a sequential prediction framework to refine the predicted poses with multiple recurrent stages. Extensive evaluations on the Human3.6M dataset and HumanEva-I dataset show that our RPSM outperforms all state-of-the-art approaches for 3D pose estimation.", "In this work, we address the problem of 3D human pose estimation from a sequence of 2D human poses. Although the recent success of deep networks has led many state-of-the-art methods for 3D pose estimation to train deep networks end-to-end to predict from images directly, the top-performing approaches have shown the effectiveness of dividing the task of 3D pose estimation into two steps: using a state-of-the-art 2D pose estimator to estimate the 2D pose from images and then mapping them into 3D space. 
They also showed that a low-dimensional representation like 2D locations of a set of joints can be discriminative enough to estimate 3D pose with high accuracy. However, estimation of 3D pose for individual frames leads to temporally incoherent estimates due to independent error in each frame causing jitter. Therefore, in this work we utilize the temporal information across a sequence of 2D joint locations to estimate a sequence of 3D poses. We designed a sequence-to-sequence network composed of layer-normalized LSTM units with shortcut connections connecting the input to the output on the decoder side and imposed temporal smoothness constraint during training. We found that the knowledge of temporal consistency improves the best reported result on Human3.6M dataset by approximately (12.2 ) and helps our network to recover temporally consistent 3D poses over a sequence of images even when the 2D pose detector fails.", "We propose an efficient approach to exploiting motion information from consecutive frames of a video sequence to recover the 3D pose of people. Previous approaches typically compute candidate poses in individual frames and then link them in a post-processing step to resolve ambiguities. By contrast, we directly regress from a spatio-temporal volume of bounding boxes to a 3D pose in the central frame. We further show that, for this approach to achieve its full potential, it is essential to compensate for the motion in consecutive frames so that the subject remains centered. This then allows us to effectively overcome ambiguities and improve upon the state-of-the-art by a large margin on the Human3.6m, HumanEva, and KTH Multiview Football 3D human pose estimation benchmarks." ] }
1908.02983
2964677000
Semi-supervised learning, i.e. jointly learning from labeled and unlabeled samples, is an active research topic due to its key role in relaxing human annotation constraints. In the context of image classification, recent advances to learn from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples. We, conversely, propose to learn from unlabeled data by generating soft pseudo-labels using the network predictions. We show that a naive pseudo-labeling overfits to incorrect pseudo-labels due to the so-called confirmation bias and demonstrate that label noise and mixup augmentation are effective regularization techniques for reducing it. The proposed approach achieves state-of-the-art results in CIFAR-10 100 and Mini-ImageNet despite being much simpler than other state-of-the-art methods. These results demonstrate that pseudo-labeling can outperform consistency regularization methods, while the opposite was assumed in previous work. Source code is available at this https URL .
Semi-supervised learning for image classification is an active research topic @cite_29 ; this section focuses on reviewing work closely related to ours, discussing methods that use deep learning with mini-batch optimization over large image collections. Previous works on semi-supervised deep learning differ in whether they use consistency regularization or pseudo-labeling to learn from the unlabeled set @cite_33 , while they all share the use of a cross-entropy loss (or similar) on labeled data.
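The split described above can be made concrete with a short sketch of the two unlabeled-data loss families, each added to the shared cross-entropy term on labeled samples; model and augment are placeholders, and the fixed weighting is an illustrative assumption rather than the formulation of any cited method.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, x_lab, y_lab, x_unlab, augment,
                         mode="consistency", unlab_weight=1.0):
    # Shared supervised term on the labeled batch.
    loss_sup = F.cross_entropy(model(x_lab), y_lab)

    if mode == "consistency":
        # Penalize disagreement between two perturbations of the same sample.
        p1 = F.log_softmax(model(augment(x_unlab)), dim=1)
        p2 = F.softmax(model(augment(x_unlab)), dim=1).detach()
        loss_unsup = F.kl_div(p1, p2, reduction="batchmean")
    else:  # "pseudo_label"
        # Treat the network's own (hard) predictions as training targets.
        with torch.no_grad():
            pseudo = model(x_unlab).argmax(dim=1)
        loss_unsup = F.cross_entropy(model(augment(x_unlab)), pseudo)

    return loss_sup + unlab_weight * loss_unsup
```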
{ "cite_N": [ "@cite_29", "@cite_33" ], "mid": [ "2963956526", "2936604471" ], "abstract": [ "Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data when labels are limited or expensive to obtain. SSL algorithms based on deep neural networks have recently proven successful on standard benchmark tasks. However, we argue that these benchmarks fail to address many issues that these algorithms would face in real-world applications. After creating a unified reimplementation of various widely-used SSL techniques, we test them in a suite of experiments designed to address these issues. We find that the performance of simple baselines which do not use unlabeled data is often underreported, that SSL methods differ in sensitivity to the amount of labeled and unlabeled data, and that performance can degrade substantially when the unlabeled dataset contains out-of-class examples. To help guide SSL research towards real-world applicability, we make our unified reimplemention and evaluation platform publicly available.", "Semi-supervised learning is becoming increasingly important because it can combine data carefully labeled by humans with abundant unlabeled data to train deep neural networks. Classic methods on semi-supervised learning that have focused on transductive learning have not been fully exploited in the inductive framework followed by modern deep learning. The same holds for the manifold assumption---that similar examples should get the same prediction. In this work, we employ a transductive label propagation method that is based on the manifold assumption to make predictions on the entire dataset and use these predictions to generate pseudo-labels for the unlabeled data and train a deep neural network. At the core of the transductive method lies a nearest neighbor graph of the dataset that we create based on the embeddings of the same network.Therefore our learning process iterates between these two steps. We improve performance on several datasets especially in the few labels regime and show that our work is complementary to current state of the art." ] }
1908.02983
2964677000
Semi-supervised learning, i.e. jointly learning from labeled and unlabeled samples, is an active research topic due to its key role in relaxing human annotation constraints. In the context of image classification, recent advances to learn from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples. We, conversely, propose to learn from unlabeled data by generating soft pseudo-labels using the network predictions. We show that a naive pseudo-labeling overfits to incorrect pseudo-labels due to the so-called confirmation bias and demonstrate that label noise and mixup augmentation are effective regularization techniques for reducing it. The proposed approach achieves state-of-the-art results in CIFAR-10 100 and Mini-ImageNet despite being much simpler than other state-of-the-art methods. These results demonstrate that pseudo-labeling can outperform consistency regularization methods, while the opposite was assumed in previous work. Source code is available at this https URL .
Co-training @cite_15 combines several ideas from the previous works, using two (or more) networks trained simultaneously to agree in their predictions (consistency regularization) and disagree in their errors. Here the errors are defined as making different predictions when exposed to adversarial attacks, thus forcing different networks to learn complementary representations for the same samples. Recently, @cite_10 measure the consistency between the current prediction and an additional prediction of the same sample given by an external memory module that keeps track of previous representations of each sample. They additionally introduce an uncertainty weighting of the consistency term to reduce the contribution of uncertain sample predictions given by the memory module. Consistency regularization methods such as the @math -model @cite_1 , mean teachers @cite_34 , and VAT @cite_4 have all been shown to benefit from the recent stochastic weight averaging (SWA) method @cite_18 @cite_38 . SWA averages network parameters from different training epochs to move the SGD solution from the borders of flat loss regions towards their center and improve generalization.
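The two weight-averaging ideas mentioned above can be sketched as follows: an exponential moving average "teacher" in the mean-teacher style, and a running equal-weight average of checkpoints in the SWA style; the decay value and the toy models are assumptions for illustration only.

```python
import copy
import torch

@torch.no_grad()
def update_ema_teacher(student, teacher, decay=0.99):
    """teacher <- decay * teacher + (1 - decay) * student, parameter-wise."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(decay).add_(p_s, alpha=1.0 - decay)

@torch.no_grad()
def update_swa_average(model, swa_model, n_averaged):
    """Running mean of model parameters collected at different epochs."""
    for p_swa, p in zip(swa_model.parameters(), model.parameters()):
        p_swa.add_((p - p_swa) / (n_averaged + 1))
    return n_averaged + 1

if __name__ == "__main__":
    student = torch.nn.Linear(4, 2)
    teacher = copy.deepcopy(student)
    swa_model = copy.deepcopy(student)
    update_ema_teacher(student, teacher)
    n = update_swa_average(student, swa_model, n_averaged=0)
    print("averaged checkpoints:", n)
```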
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_4", "@cite_1", "@cite_15", "@cite_34", "@cite_10" ], "mid": [ "2909869271", "2792287754", "2964159205", "2951970475", "2962804657", "2592691248", "2895771689" ], "abstract": [ "", "Deep neural networks are typically trained by optimizing a loss function with an SGD variant, in conjunction with a decaying learning rate, until convergence. We show that simple averaging of multiple points along the trajectory of SGD, with a cyclical or constant learning rate, leads to better generalization than conventional training. We also show that this Stochastic Weight Averaging (SWA) procedure finds much broader optima than SGD, and approximates the recent Fast Geometric Ensembling (FGE) approach with a single model. Using SWA we achieve notable improvement in test accuracy over conventional SGD training on a range of state-of-the-art residual networks, PyramidNets, DenseNets, and Shake-Shake networks on CIFAR-10, CIFAR-100, and ImageNet. In short, SWA is extremely easy to implement, improves generalization, and has almost no computational overhead.", "We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only “virtually” adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.", "In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44 to 7.05 in SVHN with 500 labels and from 18.63 to 16.55 in CIFAR-10 with 4000 labels, and further to 5.12 and 12.16 by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.", "In this paper, we study the problem of semi-supervised image recognition, which is to learn classifiers using both labeled and unlabeled images. 
We present Deep Co-Training, a deep learning based method inspired by the Co-Training framework. The original Co-Training learns two classifiers on two views which are data from different sources that describe the same instances. To extend this concept to deep learning, Deep Co-Training trains multiple deep neural networks to be the different views and exploits adversarial examples to encourage view difference, in order to prevent the networks from collapsing into each other. As a result, the co-trained networks provide different and complementary information about the data, which is necessary for the Co-Training framework to achieve good results. We test our method on SVHN, CIFAR-10 100 and ImageNet datasets, and our method outperforms the previous state-of-the-art methods by a large margin.", "The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35 on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55 to 6.28 , and on ImageNet 2012 with 10 of the labels from 35.24 to 9.11 .", "We consider the semi-supervised multi-class classification problem of learning from sparse labelled and abundant unlabelled training data. To address this problem, existing semi-supervised deep learning methods often rely on the up-to-date “network-in-training” to formulate the semi-supervised learning objective. This ignores both the discriminative feature representation and the model inference uncertainty revealed by the network in the preceding learning iterations, referred to as the memory of model learning. In this work, we propose a novel Memory-Assisted Deep Neural Network (MA-DNN) capable of exploiting the memory of model learning to enable semi-supervised learning. Specifically, we introduce a memory mechanism into the network training process as an assimilation-accommodation interaction between the network and an external memory module. Experiments demonstrate the advantages of the proposed MA-DNN model over the state-of-the-art semi-supervised deep learning methods on three image classification benchmark datasets: SVHN, CIFAR10, and CIFAR100." ] }
1908.02983
2964677000
Semi-supervised learning, i.e. jointly learning from labeled and unlabeled samples, is an active research topic due to its key role in relaxing human annotation constraints. In the context of image classification, recent advances to learn from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples. We, conversely, propose to learn from unlabeled data by generating soft pseudo-labels using the network predictions. We show that a naive pseudo-labeling overfits to incorrect pseudo-labels due to the so-called confirmation bias and demonstrate that label noise and mixup augmentation are effective regularization techniques for reducing it. The proposed approach achieves state-of-the-art results in CIFAR-10 100 and Mini-ImageNet despite being much simpler than other state-of-the-art methods. These results demonstrate that pseudo-labeling can outperform consistency regularization methods, while the opposite was assumed in previous work. Source code is available at this https URL .
It is important to highlight a widely used practice @cite_27 @cite_1 @cite_34 @cite_15 @cite_39 @cite_33 : a warm-up where labeled samples have a higher (or full) weight at the beginning of training to mitigate the incorrect guidance from unlabeled samples early in training. The authors in @cite_29 also reveal some limitations of current practices in semi-supervised learning, such as low-quality fully-supervised baselines, the absence of comparisons with transfer learning, and excessive hyperparameter tuning on large validation sets (which are not available in realistic semi-supervised settings).
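The warm-up practice described above is often implemented as a ramp-up schedule on the unlabeled-loss weight; the sketch below uses a sigmoid-shaped ramp whose constants are illustrative assumptions, not the schedule of any particular cited work.

```python
import math

def unlabeled_weight(epoch, max_weight=1.0, rampup_epochs=30):
    """Sigmoid-shaped ramp from 0 to max_weight over `rampup_epochs`, so that
    labeled samples dominate the loss early in training."""
    if epoch >= rampup_epochs:
        return max_weight
    t = 1.0 - epoch / rampup_epochs
    return max_weight * math.exp(-5.0 * t * t)

if __name__ == "__main__":
    for epoch in (0, 5, 15, 30, 50):
        print(epoch, round(unlabeled_weight(epoch), 3))
```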
{ "cite_N": [ "@cite_33", "@cite_29", "@cite_1", "@cite_39", "@cite_27", "@cite_15", "@cite_34" ], "mid": [ "2936604471", "2963956526", "2951970475", "2909986471", "2963435192", "2962804657", "2592691248" ], "abstract": [ "Semi-supervised learning is becoming increasingly important because it can combine data carefully labeled by humans with abundant unlabeled data to train deep neural networks. Classic methods on semi-supervised learning that have focused on transductive learning have not been fully exploited in the inductive framework followed by modern deep learning. The same holds for the manifold assumption---that similar examples should get the same prediction. In this work, we employ a transductive label propagation method that is based on the manifold assumption to make predictions on the entire dataset and use these predictions to generate pseudo-labels for the unlabeled data and train a deep neural network. At the core of the transductive method lies a nearest neighbor graph of the dataset that we create based on the embeddings of the same network.Therefore our learning process iterates between these two steps. We improve performance on several datasets especially in the few labels regime and show that our work is complementary to current state of the art.", "Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data when labels are limited or expensive to obtain. SSL algorithms based on deep neural networks have recently proven successful on standard benchmark tasks. However, we argue that these benchmarks fail to address many issues that these algorithms would face in real-world applications. After creating a unified reimplementation of various widely-used SSL techniques, we test them in a suite of experiments designed to address these issues. We find that the performance of simple baselines which do not use unlabeled data is often underreported, that SSL methods differ in sensitivity to the amount of labeled and unlabeled data, and that performance can degrade substantially when the unlabeled dataset contains out-of-class examples. To help guide SSL research towards real-world applicability, we make our unified reimplemention and evaluation platform publicly available.", "In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44 to 7.05 in SVHN with 500 labels and from 18.63 to 16.55 in CIFAR-10 with 4000 labels, and further to 5.12 and 12.16 by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. 
Finally, we demonstrate good tolerance to incorrect labels.", "The recently proposed semi-supervised learning methods exploit consistency loss between different predictions under random perturbations. Typically, a student model is trained to predict consistently with the targets generated by a noisy teacher. However, they ignore the fact that not all training data provide meaningful and reliable information in terms of consistency. For misclassified data, blindly minimizing the consistency loss around them can hinder learning. In this paper, we propose a novel certainty-driven consistency loss (CCL) to dynamically select data samples that have relatively low uncertainty. Specifically, we measure the variance or entropy of multiple predictions under random augmentations and dropout as an estimation of uncertainty. Then, we introduce two approaches, i.e. Filtering CCL and Temperature CCL to guide the student learn more meaningful and certain reliable targets, and hence improve the quality of the gradients backpropagated to the student. Experiments demonstrate the advantages of the proposed method over the state-of-the-art semi-supervised deep learning methods on three benchmark datasets: SVHN, CIFAR10, and CIFAR100. Our method also shows robustness to noisy labels.", "Effective convolutional neural networks are trained on large sets of labeled data. However, creating large labeled datasets is a very costly and time-consuming task. Semi-supervised learning uses unlabeled data to train a model with higher accuracy when there is a limited set of labeled data available. In this paper, we consider the problem of semi-supervised learning with convolutional neural networks. Techniques such as randomized data augmentation, dropout and random max-pooling provide better generalization and stability for classifiers that are trained using gradient descent. Multiple passes of an individual sample through the network might lead to different predictions due to the non-deterministic behavior of these techniques. We propose an unsupervised loss function that takes advantage of the stochastic nature of these methods and minimizes the difference between the predictions of multiple passes of a training sample through the network. We evaluate the proposed method on several benchmark datasets.", "In this paper, we study the problem of semi-supervised image recognition, which is to learn classifiers using both labeled and unlabeled images. We present Deep Co-Training, a deep learning based method inspired by the Co-Training framework. The original Co-Training learns two classifiers on two views which are data from different sources that describe the same instances. To extend this concept to deep learning, Deep Co-Training trains multiple deep neural networks to be the different views and exploits adversarial examples to encourage view difference, in order to prevent the networks from collapsing into each other. As a result, the co-trained networks provide different and complementary information about the data, which is necessary for the Co-Training framework to achieve good results. We test our method on SVHN, CIFAR-10 100 and ImageNet datasets, and our method outperforms the previous state-of-the-art methods by a large margin.", "The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. 
However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35 on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55 to 6.28 , and on ImageNet 2012 with 10 of the labels from 35.24 to 9.11 ." ] }
1908.02949
2966611292
Applications like disaster management and industrial inspection often require experts to enter contaminated places. To circumvent the need for physical presence, it is desirable to generate a fully immersive individual live teleoperation experience. However, standard video-based approaches suffer from a limited degree of immersion and situation awareness due to the restriction to the camera view, which impacts the navigation. In this paper, we present a novel VR-based practical system for immersive robot teleoperation and scene exploration. While being operated through the scene, a robot captures RGB-D data that is streamed to a SLAM-based live multi-client telepresence system. Here, a global 3D model of the already captured scene parts is reconstructed and streamed to the individual remote user clients where the rendering for e.g. head-mounted display devices (HMDs) is performed. We introduce a novel lightweight robot client component which transmits robot-specific data and enables a quick integration into existing robotic systems. This way, in contrast to first-person exploration systems, the operators can explore and navigate in the remote site completely independent of the current position and view of the capturing robot, complementing traditional input devices for teleoperation. We provide a proof-of-concept implementation and demonstrate the capabilities as well as the performance of our system regarding interactive object measurements and bandwidth-efficient data streaming and visualization. Furthermore, we show its benefits over purely video-based teleoperation in a user study revealing a higher degree of situation awareness and a more precise navigation in challenging environments.
The key to an immersive and interactive telepresence experience is the real-time 3D reconstruction of the scene of interest. In particular due to the high computational burden and the huge memory required to process and store large scenes, seminal work on multi-camera telepresence systems @cite_22 @cite_34 @cite_14 @cite_25 @cite_26 @cite_41 , built on the less powerful hardware available at that time, was limited in its capability to capture high-quality 3D models in real time and to immediately transmit them to remote users. More recently, the advent of affordable commodity depth sensors such as the Microsoft Kinect has been exploited for the development of 3D reconstruction approaches working at room scale @cite_24 @cite_11 @cite_15 @cite_19 . Yet the step towards high-quality reconstructions remained highly challenging due to the high sensor noise as well as the temporal inconsistency of the reconstructed data.
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_22", "@cite_41", "@cite_24", "@cite_19", "@cite_15", "@cite_34", "@cite_25", "@cite_11" ], "mid": [ "2139881008", "2068618856", "1572135045", "1971503540", "353406226", "2082930097", "1526459853", "2042418341", "2188632634", "" ], "abstract": [ "Tele-immersion is a new medium that enables a user to share a virtual space with remote participants. The user is immersed in a rendered 3D-world that is transmitted from a remote site. To acquire this 3D description we apply bi- and trinocular stereo techniques. The challenge is to compute dense stereo range data at high frame rates, since participants cannot easily communicate if the processing cycle or network latencies are long. Moreover, new views of the received 3D-world must be as accurate as possible. We address both issues of speed and accuracy and we propose a method for combining motion and stereo in order to increase speed and robustness.", "The presentation of a human figure is an important topic of research with respect to the application of virtual reality technology to support communication and collaboration. In this paper, an approach to transmitting and presenting the figure of a person at a remote location in real time, an implementation of a prototype system based on this approach, and the evaluation of the prototype system using the approach are described. In our approach, images of a person are captured from all around using multiple cameras, transmitted through a network, and displayed on a revolving flat panel display that is capable of presenting different images according to the orientation of the viewing position; the revolving display presents each image so that it is visible exclusively at the orientation from which it was taken, and consequently, an image of the person that can be viewed from various directions is realized. Through the implementation of the prototype system and experiments, it was confirmed that the proposed approach is feasible and that the prototype system functions effectively.", "A new approach to telepresence is presented in which a multitude of stationary cameras are used to acquire both photometric and depth information A virtual environment is constructed by displaying the acquired data from the remote site in accordance with the head position and orientation of a local participant Shown are preliminary results of a depth image of a human subject calculated from closely spaced video camera positions A user wearing a head mounted display walks around this D data that has been inserted into a D model of a simple room Future systems based on this approach may exhibit more natural and intuitive interaction among participants than current D teleconferencing systems", "In this paper we present a framework for immersive virtual environment intended for remote collaboration and training of physical activities. Our multi-camera system performs full-body 3D reconstruction of human user(s) in real time and renders their image in the virtual space allowing remote users to interact. The paper features a short overview of the technology used for the capturing and reconstruction. Some of the applications where we have successfully demonstrated use of the system in combination with the tele-immersive virtual environment are described. 
Finally, we address current drawbacks with regard to data capturing and networking and provide some ideas for future work.", "This paper describes an enhanced telepresence system that offers fully dynamic, real-time 3D scene capture and continuous-viewpoint, head-tracked stereo 3D display without requiring the user to wear any tracking or viewing apparatus. We present a complete software and hardware framework for implementing the system, which is based on an array of commodity Microsoft Kinect^T^Mcolor-plus-depth cameras. Contributions include an algorithm for merging data between multiple depth cameras and techniques for automatic color calibration and preserving stereo quality even with low rendering rates. Also presented is a solution to the problem of interference that occurs between Kinect cameras with overlapping views. Emphasis is placed on a fully GPU-accelerated data processing and rendering pipeline that can apply hole filling, smoothing, data merger, surface generation, and color correction at rates of up to 200 million triangles s on a single PC and graphics board. Also presented is a Kinect-based markerless tracking system that combines 2D eye recognition with depth information to allow head-tracked stereo views to be rendered for a parallax barrier autostereoscopic display. Enhancements in calibration, filtering, and data merger were made to improve image quality over a previous version of the system.", "RoomAlive is a proof-of-concept prototype that transforms any room into an immersive, augmented entertainment experience. Our system enables new interactive projection mapping experiences that dynamically adapts content to any room. Users can touch, shoot, stomp, dodge and steer projected content that seamlessly co-exists with their existing physical environment. The basic building blocks of RoomAlive are projector-depth camera units, which can be combined through a scalable, distributed framework. The projector-depth camera units are individually auto-calibrating, self-localizing, and create a unified model of the room with no user intervention. We investigate the design space of gaming experiences that are possible with RoomAlive and explore methods for dynamically mapping content based on room layout and user position. Finally we showcase four experience prototypes that demonstrate the novel interactive experiences that are possible with RoomAlive and discuss the design challenges of adapting any game to any room.", "This paper presents two novel handheld projector systems for indoor pervasive computing spaces. These projection-based devices are \"aware\" of their environment in ways not demonstrated previously. They offer both spatial awareness, where the system infers location and orientation of the device in 3D space, and geometry awareness, where the system constructs the 3D structure of the world around it, which can encompass the user as well as other physical objects, such as furniture and walls. Previous work in this area has predominantly focused on infrastructure-based spatial-aware handheld projection and interaction. Our prototypes offer greater levels of environment awareness, but achieve this using two opposing approaches; the first infrastructure-based and the other infrastructure-less sensing. We highlight a series of interactions including direct touch, as well as in-air gestures, which leverage the shadow of the user for interaction. 
We describe the technical challenges in realizing these novel systems; and compare them directly by quantifying their location tracking and input sensing capabilities.", "A new visual medium, Virtualized Reality, immerses viewers in a virtual reconstruction of real-world events. The Virtualized Reality world model consists of real images and depth information computed from these images. Stereoscopic reconstructions provide a sense of complete immersion, and users can select their own viewpoints at view time, independent of the actual camera positions used to capture the event.", "Our long-term vision is to provide a better every-day working environment, with high-fidelity scene reconstruction for life-sized 3D telecollaboration. In particular, we want to provide the user with a true sense of presence with our remote collaborator and their real surroundings, and the ability to share and interact with 3D documents. The challenges related to this vision are enormous and involve many technical tradeoffs, particularly in scene reconstruction. In this paper we present a significant step toward our ultimate goal. By assembling the best of available hardware and software technologies in scene reconstruction, rendering, and distributed scene graph software, members of the National Tele-Immersion Initiative (NTII) are able to demonstrate 3D collaborative, tele-presence over Internet2 between colleagues in remote offices.", "" ] }
1908.02949
2966611292
Applications like disaster management and industrial inspection often require experts to enter contaminated places. To circumvent the need for physical presence, it is desirable to generate a fully immersive individual live teleoperation experience. However, standard video-based approaches suffer from a limited degree of immersion and situation awareness due to the restriction to the camera view, which impacts the navigation. In this paper, we present a novel VR-based practical system for immersive robot teleoperation and scene exploration. While being operated through the scene, a robot captures RGB-D data that is streamed to a SLAM-based live multi-client telepresence system. Here, a global 3D model of the already captured scene parts is reconstructed and streamed to the individual remote user clients where the rendering for e.g. head-mounted display devices (HMDs) is performed. We introduce a novel lightweight robot client component which transmits robot-specific data and enables a quick integration into existing robotic systems. This way, in contrast to first-person exploration systems, the operators can explore and navigate in the remote site completely independent of the current position and view of the capturing robot, complementing traditional input devices for teleoperation. We provide a proof-of-concept implementation and demonstrate the capabilities as well as the performance of our system regarding interactive object measurements and bandwidth-efficient data streaming and visualization. Furthermore, we show its benefits over purely video-based teleoperation in a user study revealing a higher degree of situation awareness and a more precise navigation in challenging environments.
Recently, a huge step towards an immersive teleconferencing experience has been achieved with the development of the Holoportation system @cite_38 . This system is built on the Fusion4D framework @cite_6 , which allows accurate 3D reconstruction at real-time rates, as well as real-time data transmission and the coupling to AR/VR technology. However, real-time performance comes at the cost of massive hardware requirements: several high-end GPUs running on multiple desktop computers, most of which have to be installed at the local user's side. Furthermore, only an area of limited size that is surrounded by the involved static cameras can be captured, which makes the framework suitable for teleconferencing but prevents its use for interactive remote exploration of larger live-captured scenes.
{ "cite_N": [ "@cite_38", "@cite_6" ], "mid": [ "2532511219", "2461005315" ], "abstract": [ "We present an end-to-end system for augmented and virtual reality telepresence, called Holoportation. Our system demonstrates high-quality, real-time 3D reconstructions of an entire space, including people, furniture and objects, using a set of new depth cameras. These 3D models can also be transmitted in real-time to remote users. This allows users wearing virtual or augmented reality displays to see, hear and interact with remote participants in 3D, almost as if they were present in the same physical space. From an audio-visual perspective, communicating and interacting with remote users edges closer to face-to-face communication. This paper describes the Holoportation technical system in full, its key interactive capabilities, the application scenarios it enables, and an initial qualitative study of using this new communication medium.", "We contribute a new pipeline for live multi-view performance capture, generating temporally coherent high-quality reconstructions in real-time. Our algorithm supports both incremental reconstruction, improving the surface estimation over time, as well as parameterizing the nonrigid scene motion. Our approach is highly robust to both large frame-to-frame motion and topology changes, allowing us to reconstruct extremely challenging scenes. We demonstrate advantages over related real-time techniques that either deform an online generated template or continually fuse depth data nonrigidly into a single reference model. Finally, we show geometric reconstruction results on par with offline methods which require orders of magnitude more processing time and many more RGBD cameras." ] }
1908.02949
2966611292
Applications like disaster management and industrial inspection often require experts to enter contaminated places. To circumvent the need for physical presence, it is desirable to generate a fully immersive individual live teleoperation experience. However, standard video-based approaches suffer from a limited degree of immersion and situation awareness due to the restriction to the camera view, which impacts the navigation. In this paper, we present a novel VR-based practical system for immersive robot teleoperation and scene exploration. While being operated through the scene, a robot captures RGB-D data that is streamed to a SLAM-based live multi-client telepresence system. Here, a global 3D model of the already captured scene parts is reconstructed and streamed to the individual remote user clients where the rendering for e.g. head-mounted display devices (HMDs) is performed. We introduce a novel lightweight robot client component which transmits robot-specific data and enables a quick integration into existing robotic systems. This way, in contrast to first-person exploration systems, the operators can explore and navigate in the remote site completely independent of the current position and view of the capturing robot, complementing traditional input devices for teleoperation. We provide a proof-of-concept implementation and demonstrate the capabilities as well as the performance of our system regarding interactive object measurements and bandwidth-efficient data streaming and visualization. Furthermore, we show its benefits over purely video-based teleoperation in a user study revealing a higher degree of situation awareness and a more precise navigation in challenging environments.
Towards the goal of exploring larger environments, as required for the inspection of contaminated scenes envisioned in this work, Mossel and Kröter @cite_5 presented a system that allows interactive VR-based exploration of the captured scene. Their system benefits from real-time reconstruction based on current voxel block hashing techniques @cite_20 ; however, it supports only a single exploration client, and its bandwidth requirements have been reported to reach up to 175 MBit/s. Furthermore, the system relies on the direct transmission of the captured data to the rendering client and is not designed to handle network interruptions that force the exploration client to reconnect to the reconstruction client; consequently, scene parts reconstructed during a network outage are lost.
{ "cite_N": [ "@cite_5", "@cite_20" ], "mid": [ "2584684405", "1625949922" ], "abstract": [ "We introduce a novel framework that enables large-scale dense 3D scene reconstruction, data streaming over the network and immersive exploration of the reconstructed environment using virtual reality. The system is operated by two remote entities, where one entity – for instance an autonomous aerial vehicle – captures and reconstructs the environment as well as transmits the data to another entity – such as human observer – that can immersivly explore the 3D scene, decoupled from the view of the capturing entity. The performance evaluation revealed the framework’s capabilities to perform RGB-D data capturing, dense 3D reconstruction, streaming and dynamic scene updating in real time for indoor environments up to a size of 100m2, using either a state-of-the-art mobile computer or a workstation. Thereby, our work provides a foundation for enabling immersive exploration of remotely captured and incrementally reconstructed dense 3D scenes, which has not shown before and opens up new research aspects in future.", "Volumetric methods provide efficient, flexible and simple ways of integrating multiple depth images into a full 3D model. They provide dense and photorealistic 3D reconstructions, and parallelised implementations on GPUs achieve real-time performance on modern graphics hardware. To run such methods on mobile devices, providing users with freedom of movement and instantaneous reconstruction feedback, remains challenging however. In this paper we present a range of modifications to existing volumetric integration methods based on voxel block hashing, considerably improving their performance and making them applicable to tablet computer applications. We present (i) optimisations for the basic data structure, and its allocation and integration; (ii) a highly optimised raycasting pipeline; and (iii) extensions to the camera tracker to incorporate IMU data. In total, our system thus achieves frame rates up 47 Hz on a Nvidia Shield Tablet and 910 Hz on a Nvidia GTX Titan XGPU, or even beyond 1.1 kHz without visualisation." ] }
1908.02949
2966611292
Applications like disaster management and industrial inspection often require experts to enter contaminated places. To circumvent the need for physical presence, it is desirable to generate a fully immersive individual live teleoperation experience. However, standard video-based approaches suffer from a limited degree of immersion and situation awareness due to the restriction to the camera view, which impacts the navigation. In this paper, we present a novel VR-based practical system for immersive robot teleoperation and scene exploration. While being operated through the scene, a robot captures RGB-D data that is streamed to a SLAM-based live multi-client telepresence system. Here, a global 3D model of the already captured scene parts is reconstructed and streamed to the individual remote user clients where the rendering for e.g. head-mounted display devices (HMDs) is performed. We introduce a novel lightweight robot client component which transmits robot-specific data and enables a quick integration into existing robotic systems. This way, in contrast to first-person exploration systems, the operators can explore and navigate in the remote site completely independent of the current position and view of the capturing robot, complementing traditional input devices for teleoperation. We provide a proof-of-concept implementation and demonstrate the capabilities as well as the performance of our system regarding interactive object measurements and bandwidth-efficient data streaming and visualization. Furthermore, we show its benefits over purely video-based teleoperation in a user study revealing a higher degree of situation awareness and a more precise navigation in challenging environments.
The recent approach by Stotko et al. @cite_7 overcomes these problems and allows on-the-fly scene inspection and interaction by an arbitrary number of exploration clients, and hence represents a practical framework for interactive collaboration. Most notably, the system is based on a novel compact Marching Cubes (MC) based voxel block representation maintained on a server. Efficient streaming with low bandwidth requirements is achieved by transmitting MC indices and by reconstructing and storing the models explored by individual exploration clients directly on their hardware. This makes the approach both scalable to many-client exploration and robust to network interruptions, as the consistent model is maintained on the server and the updates are streamed once the connection is re-established.
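The bandwidth saving rests on the observation that the Marching Cubes geometry of a voxel cell is fully determined by the sign configuration of its eight corners, i.e. a single byte per cell. The sketch below (a minimal illustration of that idea, not the actual voxel block layout, compression or update handling of @cite_7 ) computes such indices from a TSDF volume; clients can expand them into triangles with the standard MC lookup tables.

```python
import numpy as np

def mc_cube_indices(tsdf: np.ndarray) -> np.ndarray:
    """Compute one 8-bit Marching Cubes configuration index per voxel cell.

    tsdf: (X, Y, Z) array of signed distance values; a corner counts as
    'inside' when its value is negative. Streaming these bytes instead of
    the reconstructed triangles is what keeps the bandwidth low.
    """
    inside = (tsdf < 0).astype(np.uint8)
    idx = np.zeros(tuple(s - 1 for s in tsdf.shape), dtype=np.uint8)
    # Corner ordering follows the usual MC convention (an assumption here).
    corners = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
               (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
    for bit, (dx, dy, dz) in enumerate(corners):
        idx |= inside[dx:dx + idx.shape[0],
                      dy:dy + idx.shape[1],
                      dz:dz + idx.shape[2]] << bit
    return idx  # one byte per cell; clients expand via the MC triangle table
```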
{ "cite_N": [ "@cite_7" ], "mid": [ "2801778672" ], "abstract": [ "Real-time 3D scene reconstruction from RGB-D sensor data, as well as the exploration of such data in VR AR settings, has seen tremendous progress in recent years. The combination of both these components into telepresence systems, however, comes with significant technical challenges. All approaches proposed so far are extremely demanding on input and output devices, compute resources and transmission bandwidth, and they do not reach the level of immediacy required for applications such as remote collaboration. Here, we introduce what we believe is the first practical client-server system for real-time capture and many-user exploration of static 3D scenes. Our system is based on the observation that interactive frame rates are sufficient for capturing and reconstruction, and real-time performance is only required on the client site to achieve lag-free view updates when rendering the 3D model. Starting from this insight, we extend previous voxel block hashing frameworks by introducing a novel thread-safe GPU hash map data structure that is robust under massively concurrent retrieval, insertion and removal of entries on a thread level. We further propose a novel transmission scheme for volume data that is specifically targeted to Marching Cubes geometry reconstruction and enables a 90 reduction in bandwidth between server and exploration clients. The resulting system poses very moderate requirements on network bandwidth, latency and client-side computation, which enables it to rely entirely on consumer-grade hardware, including mobile devices. We demonstrate that our technique achieves state-of-the-art representation accuracy while providing, for any number of clients, an immersive and fluid lag-free viewing experience even during network outages." ] }
1908.02949
2966611292
Applications like disaster management and industrial inspection often require experts to enter contaminated places. To circumvent the need for physical presence, it is desirable to generate a fully immersive individual live teleoperation experience. However, standard video-based approaches suffer from a limited degree of immersion and situation awareness due to the restriction to the camera view, which impacts the navigation. In this paper, we present a novel VR-based practical system for immersive robot teleoperation and scene exploration. While being operated through the scene, a robot captures RGB-D data that is streamed to a SLAM-based live multi-client telepresence system. Here, a global 3D model of the already captured scene parts is reconstructed and streamed to the individual remote user clients where the rendering for e.g. head-mounted display devices (HMDs) is performed. We introduce a novel lightweight robot client component which transmits robot-specific data and enables a quick integration into existing robotic systems. This way, in contrast to first-person exploration systems, the operators can explore and navigate in the remote site completely independent of the current position and view of the capturing robot, complementing traditional input devices for teleoperation. We provide a proof-of-concept implementation and demonstrate the capabilities as well as the performance of our system regarding interactive object measurements and bandwidth-efficient data streaming and visualization. Furthermore, we show its benefits over purely video-based teleoperation in a user study revealing a higher degree of situation awareness and a more precise navigation in challenging environments.
Schwarz et al. @cite_8 describe the rescue robot Momaro, which is equipped with interfaces for immersive teleoperation using an HMD device and 6D trackers. The immersive display greatly benefited the operators by increasing situational awareness. However, visualization was limited to registered 3D point clouds, which carry no color information. As a result, additional 2D camera images were displayed to the operator to visualize texture. Momaro served as a precursor to the Centauro robot @cite_27 , which extends the Momaro system in several directions, including the immersive display of RGB-D data. However, that system is currently limited to displaying live data without aggregation.
{ "cite_N": [ "@cite_27", "@cite_8" ], "mid": [ "2892180807", "2797072841" ], "abstract": [ "Mobile manipulation tasks are one of the key challenges in the field of search and rescue (SAR) robotics requiring robots with flexible locomotion and manipulation abilities. Since the tasks are mostly unknown in advance, the robot has to adapt to a wide variety of terrains and workspaces during a mission. The centaur-like robot Centauro has a hybrid legged-wheeled base and an anthropomorphic upper body to carry out complex tasks in environments too dangerous for humans. Due to its high number of degrees of freedom, controlling the robot with direct teleoperation approaches is challenging and exhausting. Supervised autonomy approaches are promising to increase quality and speed of control while keeping the flexibility to solve unknown tasks. We developed a set of operator assistance functionalities with different levels of autonomy to control the robot for challenging locomotion and manipulation tasks. The integrated system was evaluated in disaster response scenarios and showed promising performance.", "Robots that solve complex tasks in environments too dangerous for humans to enter are desperately needed, e.g. for search and rescue applications. We describe our mobile manipulation robot Momaro, with which we participated successfully in the DARPA Robotics Challenge. It features a unique locomotion design with four legs ending in steerable wheels, which allows it both to drive omnidirectionally and to step over obstacles or climb. Furthermore, we present advanced communication and teleoperation approaches, which include immersive 3D visualization, and 6D tracking of operator head and arm motions. The proposed system is evaluated in the DARPA Robotics Challenge, the DLR SpaceBot Camp 2015, and lab experiments. We also discuss the lessons learned from the competitions and present initial steps towards autonomous operator assistance functions." ] }
1908.03020
2966062579
We propose a novel method for explaining the predictions of any classifier. In our approach, local explanations are expected to explain both the outcome of a prediction and how that prediction would change if 'things had been different'. Furthermore, we argue that satisfactory explanations cannot be dissociated from a notion and measure of fidelity, as advocated in the early days of neural networks' knowledge extraction. We introduce a definition of fidelity to the underlying classifier for local explanation models which is based on distances to a target decision boundary. A system called CLEAR: Counterfactual Local Explanations via Regression, is introduced and evaluated. CLEAR generates w-counterfactual explanations that state minimum changes necessary to flip a prediction's classification. CLEAR then builds local regression models, using the w-counterfactuals to measure and improve the fidelity of its regressions. By contrast, the popular LIME method, which also uses regression to generate local explanations, neither measures its own fidelity nor generates counterfactuals. CLEAR's regressions are found to have significantly higher fidelity than LIME's, averaging over 45 higher in this paper's four case studies.
Early work seeking to provide explanations for neural networks focused on the extraction of symbolic knowledge from trained networks @cite_18 , either decision trees in the case of feedforward networks @cite_10 or graphs in the case of recurrent networks @cite_5 @cite_3 . More recently, attention has shifted from global to local explanation models due to the very large-scale nature of current deep networks, and has focused on explaining specific network architectures (such as the bottleneck in auto-encoders @cite_9 ) or domain-specific networks such as those used to solve computer vision problems @cite_6 , although some recent approaches continue to advocate the use of rule-based knowledge extraction @cite_15 @cite_2 . The reader is referred to @cite_13 for a recent survey.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_9", "@cite_3", "@cite_6", "@cite_2", "@cite_5", "@cite_15", "@cite_10" ], "mid": [ "2786715987", "2063046703", "2753738274", "2771330107", "2963749936", "2556838012", "2142148616", "2962714378", "2113882472" ], "abstract": [ "In the last years many accurate decision support systems have been constructed as black boxes, that is as systems that hide their internal logic to the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness sometimes at the cost of scarifying accuracy for interpretability. The applications in which black box decision systems can be used are various, and each approach is typically developed to provide a solution for a specific problem and, as a consequence, delineating explicitly or implicitly its own definition of interpretability and explanation. The aim of this paper is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system. Given a problem definition, a black box type, and a desired explanation this survey should help the researcher to find the proposals more useful for his own work. The proposed classification of approaches to open black box models should also be useful for putting the many research open questions in perspective.", "It is becoming increasingly apparent that, without some form of explanation capability, the full potential of trained artificial neural networks (ANNs) may not be realised. This survey gives an overview of techniques developed to redress this situation. Specifically, the survey focuses on mechanisms, procedures, and algorithms designed to insert knowledge into ANNs (knowledge initialisation), extract rules from trained ANNs (rule extraction), and utilise ANNs to refine existing rule bases (rule refinement). The survey also introduces a new taxonomy for classifying the various techniques, discusses their modus operandi, and delineates criteria for evaluating their efficacy.", "Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce beta-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter beta that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that beta-VAE with appropriately tuned beta > 1 qualitatively outperforms VAE (beta = 1), as well as state of the art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. 
Unlike InfoGAN, beta-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter, which can be directly optimised through a hyper parameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.", "Rule extraction from black box models is critical in domains that require model validation before implementation, as can be the case in credit scoring and medical diagnosis. Though already a challenging problem in statistical learning in general, the difficulty is even greater when highly nonlinear, recursive models, such as recurrent neural networks (RNNs), are fit to data. Here, we study the extraction of rules from second-order RNNs trained to recognize the Tomita grammars. We show that production rules can be stably extracted from trained RNNs and that in certain cases, the rules outperform the trained RNNs.", "We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a data set of concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are labeled across a broad range of visual concepts including objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that interpretability is an axis-independent property of the representation space, then we apply the method to compare the latent representations of various networks when trained to solve different classification problems. We further analyze the effect of training iterations, compare networks trained with different initializations, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power.", "Developments in deep learning have seen the use of layerwise unsupervised learning combined with supervised learning for fine-tuning. With this layerwise approach, a deep network can be seen as a more modular system that lends itself well to learning representations. In this paper, we investigate whether such modularity can be useful to the insertion of background knowledge into deep networks, whether it can improve learning performance when it is available, and to the extraction of knowledge from trained deep networks, and whether it can offer a better understanding of the representations learned by such networks. To this end, we use a simple symbolic language—a set of logical rules that we call confidence rules —and show that it is suitable for the representation of quantitative reasoning in deep networks. We show by knowledge extraction that confidence rules can offer a low-cost representation for layerwise networks (or restricted Boltzmann machines). We also show that layerwise extraction can produce an improvement in the accuracy of deep belief networks. Furthermore, the proposed symbolic characterization of deep networks provides a novel method for the insertion of prior knowledge and training of deep networks. 
With the use of this method, a deep neural–symbolic system is proposed and evaluated, with the experimental results indicating that modularity through the use of confidence rules and knowledge insertion can be beneficial to network performance.", "Rule extraction (RE) from recurrent neural networks (RNNs) refers to finding models of the underlying RNN, typically in the form of finite state machines, that mimic the network to a satisfactory degree while having the advantage of being more transparent. RE from RNNs can be argued to allow a deeper and more profound form of analysis of RNNs than other, more or less ad hoc methods. RE may give us understanding of RNNs in the intermediate levels between quite abstract theoretical knowledge of RNNs as a class of computing devices and quantitative performance evaluations of RNN instantiations. The development of techniques for extraction of rules from RNNs has been an active field since the early 1990s. This article reviews the progress of this development and analyzes it in detail. In order to structure the survey and evaluate the techniques, a taxonomy specifically designed for this purpose has been developed. Moreover, important open research issues are identified that, if addressed properly, possibly can give the field a significant push forward.", "", "A significant limitation of neural networks is that the representations they learn are usually incomprehensible to humans. We present a novel algorithm, TREPAN, for extracting comprehensible, symbolic representations from trained neural networks. Our algorithm uses queries to induce a decision tree that approximates the concept represented by a given network. Our experiments demonstrate that TREPAN is able to produce decision trees that maintain a high level of fidelity to their respective networks while being comprehensible and accurate. Unlike previous work in this area, our algorithm is general in its applicability and scales well to large networks and problems with high-dimensional input spaces." ] }
1908.03020
2966062579
We propose a novel method for explaining the predictions of any classifier. In our approach, local explanations are expected to explain both the outcome of a prediction and how that prediction would change if 'things had been different'. Furthermore, we argue that satisfactory explanations cannot be dissociated from a notion and measure of fidelity, as advocated in the early days of neural networks' knowledge extraction. We introduce a definition of fidelity to the underlying classifier for local explanation models which is based on distances to a target decision boundary. A system called CLEAR: Counterfactual Local Explanations via Regression, is introduced and evaluated. CLEAR generates w-counterfactual explanations that state minimum changes necessary to flip a prediction's classification. CLEAR then builds local regression models, using the w-counterfactuals to measure and improve the fidelity of its regressions. By contrast, the popular LIME method, which also uses regression to generate local explanations, neither measures its own fidelity nor generates counterfactuals. CLEAR's regressions are found to have significantly higher fidelity than LIME's, averaging over 45 higher in this paper's four case studies.
More specifically, @cite_14 have proposed LORE – Local Rule-based Explanations, which provides local explanations for binary classification tasks using decision trees. It is model-agnostic, generates local models from synthetic data, and has many other similarities to LIME, but it also generates counterfactual explanations. The authors criticise LIME for producing neighbourhood datasets whose observations are too distant from each other and have too low a density around the instance being explained. By contrast, LORE uses a genetic algorithm to create neighbourhood datasets with a high density around the instance and the decision boundary. The authors claim that their system outperforms LIME and provide fidelity statistics comparing LORE and LIME, where fidelity is defined in terms of how well local models perform in making the same classifications as the underlying machine learning system. However, their fidelity statistics for LIME could be misconstrued; it does not follow from being able to mimic a system's classifications that a local model will also faithfully mimic its counterfactuals (see Section 4).
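For concreteness, the classification-agreement notion of fidelity on which this comparison rests can be computed as in the sketch below; the function and argument names are illustrative assumptions, not an interface from either tool.

```python
import numpy as np

def classification_fidelity(black_box_predict, local_predict, X_neighbourhood) -> float:
    """Fraction of neighbourhood points on which a local surrogate model
    reproduces the black box's predicted class label.

    black_box_predict / local_predict: callables mapping an array of feature
    rows to class labels; X_neighbourhood: synthetic points generated around
    the instance being explained. (All names are illustrative.)
    """
    y_black_box = np.asarray(black_box_predict(X_neighbourhood))
    y_local = np.asarray(local_predict(X_neighbourhood))
    return float(np.mean(y_black_box == y_local))
```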
{ "cite_N": [ "@cite_14" ], "mid": [ "2803532212" ], "abstract": [ "The recent years have witnessed the rise of accurate but obscure decision systems which hide the logic of their internal decision processes to the users. The lack of explanations for the decisions of black box systems is a key ethical issue, and a limitation to the adoption of machine learning components in socially sensitive and safety-critical contexts. Therefore, we need explanations that reveals the reasons why a predictor takes a certain decision. In this paper we focus on the problem of black box outcome explanation, i.e., explaining the reasons of the decision taken on a specific instance. We propose LORE, an agnostic method able to provide interpretable and faithful explanations. LORE first leans a local interpretable predictor on a synthetic neighborhood generated by a genetic algorithm. Then it derives from the logic of the local interpretable predictor a meaningful explanation consisting of: a decision rule, which explains the reasons of the decision; and a set of counterfactual rules, suggesting the changes in the instance's features that lead to a different outcome. Wide experiments show that LORE outperforms existing methods and baselines both in the quality of explanations and in the accuracy in mimicking the black box." ] }
1908.02743
2965212953
Consider a distributed system with @math processors out of which @math can be Byzantine faulty. In the approximate agreement task, each processor @math receives an input value @math and has to decide on an output value @math such that - the output values are in the convex hull of the non-faulty processors' input values, - the output values are within distance @math of each other. Classically, the values are assumed to be from an @math -dimensional Euclidean space, where @math . In this work, we study the task in a discrete setting, where input values with some structure expressible as a graph. Namely, the input values are vertices of a finite graph @math and the goal is to output vertices that are within distance @math of each other in @math , but still remain in the graph-induced convex hull of the input values. For @math , the task reduces to consensus and cannot be solved with a deterministic algorithm in an asynchronous system even with a single crash fault. For any @math , we show that the task is solvable in asynchronous systems when @math is chordal and @math , where @math is the clique number of @math . In addition, we give the first Byzantine-tolerant algorithm for a variant of lattice agreement. For synchronous systems, we show tight resilience bounds for the exact variants of these and related tasks over a large class of combinatorial structures.
The seminal result of @cite_0 showed that consensus cannot be reached in asynchronous systems in the presence of crash faults. @cite_5 showed that it is, however, possible to reach approximate agreement in an asynchronous system even with arbitrary faulty behavior when the values reside on the continuous real line. Subsequently, the one-dimensional approximate agreement problem has been extensively studied @cite_5 @cite_10 @cite_36 @cite_41 . Fekete @cite_36 showed that any algorithm reducing the distance of the values from @math to @math requires @math asynchronous rounds when @math ; in the discrete setting this yields the bound @math for paths of length @math . Recently, @cite_13 introduced the natural multidimensional generalisation of approximate agreement and showed that the @math -dimensional problem is solvable in an asynchronous system with Byzantine faults if and only if @math holds for any given @math .
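To make the flavour of these one-dimensional algorithms concrete, the sketch below shows the trimmed-midpoint update that a processor can apply to the values it has collected in a round; the exact reduction function, message handling and round structure differ across @cite_5 @cite_10 @cite_41 and are simplified here.

```python
def approx_agreement_round(received: list, t: int) -> float:
    """One local update step of one-dimensional approximate agreement.

    received: values collected from distinct processors in this round
              (more than 2*t of them, so trimming is well defined).
    t:        upper bound on the number of faulty processors.

    Discarding the t smallest and t largest values removes any value that
    lies outside the range spanned by the correct processors' values; the
    midpoint of what remains stays inside that range and shrinks the spread.
    """
    assert len(received) > 2 * t, "need more than 2t values to trim safely"
    trimmed = sorted(received)[t:len(received) - t]
    return (trimmed[0] + trimmed[-1]) / 2.0
```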
{ "cite_N": [ "@cite_13", "@cite_41", "@cite_36", "@cite_0", "@cite_5", "@cite_10" ], "mid": [ "1971773339", "2115307136", "1985434631", "2035362408", "2126906505", "1976693492" ], "abstract": [ "Consider a network of @math n processes, where each process inputs a @math d-dimensional vector of reals. All processes can communicate directly with others via reliable FIFO channels. We discuss two problems. The multidimensional Byzantine consensus problem, for synchronous systems, requires processes to decide on a single @math d-dimensional vector @math v?Rd, inside the convex hull of @math d-dimensional vectors that were input by the non-faulty processes. Also, the multidimensional Byzantine approximate agreement (MBAA) problem, for asynchronous systems, requires processes to decide on multiple @math d-dimensional vectors in @math Rd, all within a fixed Euclidean distance @math ∈ of each other, and inside the convex hull of @math d-dimensional vectors that were input by the non-faulty processes. We obtain the following results for the problems above, while tolerating up to @math f Byzantine failures in systems with complete communication graphs: (1) In synchronous systems, @math n>max 3f,(d+1)f is necessary and sufficient to solve the multidimensional consensus problem. (2) In asynchronous systems, @math n>(d+2)f is necessary and sufficient to solve the multidimensional approximate agreement problem. Our sufficiency proofs are constructive, giving explicit protocols for the problems. In particular, for the MBAA problem, we give two protocols with strictly different properties and applications.", "Consider an asynchronous system where each process begins with an arbitrary real value. Given some fixed e>0, an approximate agreement algorithm must have all non-faulty processes decide on values that are at most e from each other and are in the range of the initial values of the non-faulty processes. Previous constructions solved asynchronous approximate agreement only when there were at least 5t+1 processes, t of which may be Byzantine. In this paper we close an open problem raised by in 1983. We present a deterministic optimal resilience approximate agreement algorithm that can tolerate any t Byzantine faults while requiring only 3t+1 processes. The algorithm's rate of convergence and total message complexity are efficiently bounded as a function of the range of the initial values of the non-faulty processes. All previous asynchronous algorithms that are resilient to Byzantine failures may require arbitrarily many messages to be sent.", "This paper examines the Approximate Agreement Problem in an asynchronous failure-by-omission system with deterministic protocols. We give a simple algorithm, and prove that the algorithm is optimal by considering the power of the \"adversary\" scheduler to disrupt processors? views. We show that the adversary need not cause any omissions to achieve its purpose, and therefore no algorithm can do better than simply to operate round-by-round, as our does. We extend these results to asynchronous crash-failure systems. The resulting understanding of the adversary should be applicable to other problems in asynchronous failure-by-omission or crash-failure-systems.", "The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. 
By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.", "This paper considers a variant of the Byzantine Generals problem, in which processes start with arbitrary real values rather than Boolean values or values from some bounded range, and in which approximate, rather than exact, agreement is the desired goal. Algorithms are presented to reach approximate agreement in asynchronous, as well as synchronous systems. The asynchronous agreement algorithm is an interesting contrast to a result of , who show that exact agreement with guaranteed termination is not attainable in an asynchronous system with as few as one faulty process. The algorithms work by successive approximation, with a provable convergence rate that depends on the ratio between the number of faulty processes and the total number of processes. Lower bounds on the convergence rate for algorithms of this form are proved, and the algorithms presented are shown to be optimal.", "This paper introduces some algorithms to solve crash-failure, failure-by-omission and Byzantine failure versions of the Byzantine Generals or consensus problem, where non-faulty processors need only arrive at values that are close together rather than identical. For each failure model and each value ofS, we give at-resilient algorithm usingS rounds of communication. IfS=t+1, exact agreement is obtained. In the algorithms for the failure-by-omission and Byzantine failure models, each processor attempts to identify the faulty processors and corrects values transmited by them to reduce the amount of disagreement. We also prove lower bounds for each model, to show that each of our algorithms has a convergence rate that is asymptotic to the best possible in that model as the number of processors increases." ] }
1908.02743
2965212953
Consider a distributed system with @math processors out of which @math can be Byzantine faulty. In the approximate agreement task, each processor @math receives an input value @math and has to decide on an output value @math such that - the output values are in the convex hull of the non-faulty processors' input values, - the output values are within distance @math of each other. Classically, the values are assumed to be from an @math -dimensional Euclidean space, where @math . In this work, we study the task in a discrete setting, where input values with some structure expressible as a graph. Namely, the input values are vertices of a finite graph @math and the goal is to output vertices that are within distance @math of each other in @math , but still remain in the graph-induced convex hull of the input values. For @math , the task reduces to consensus and cannot be solved with a deterministic algorithm in an asynchronous system even with a single crash fault. For any @math , we show that the task is solvable in asynchronous systems when @math is chordal and @math , where @math is the clique number of @math . In addition, we give the first Byzantine-tolerant algorithm for a variant of lattice agreement. For synchronous systems, we show tight resilience bounds for the exact variants of these and related tasks over a large class of combinatorial structures.
The lattice agreement problem was originally introduced in the context of wait-free algorithms in shared memory models @cite_47 @cite_53 . The problem has recently resurfaced in the context of asynchronous message-passing models with crash faults @cite_28 @cite_56 . These papers consider the problem when the validity condition is given as @math , i.e., the output of a processor must satisfy @math and the feasible set is also determined by the inputs of faulty processors. However, it is not difficult to see that under Byzantine faults this validity condition is not reasonable, as the problem cannot then be solved even with a single faulty processor.
{ "cite_N": [ "@cite_28", "@cite_47", "@cite_56", "@cite_53" ], "mid": [ "2151102492", "2095126777", "2886192190", "1997124359" ], "abstract": [ "Lattice agreement is a key decision problem in distributed systems. In this problem, processes start with input values from a lattice, and must learn (non-trivial) values that form a chain. Unlike consensus, which is impossible in the presence of even a single process failure, lattice agreement has been shown to be decidable in the presence of failures. In this paper, we consider lattice agreement problems in asynchronous, message passing systems. We present an algorithm for the lattice agreement problem that guarantees liveness as long as a majority of the processes are non-faulty. The algorithm has a time complexity of O(N) message delays, where N is the number of processes. We then introduce the generalized lattice agreement problem, where each process receives a (potentially unbounded) sequence of values from an infinite lattice and must learn a sequence of increasing values such that the union of all learnt sequences is a chain and every proposed value is eventually learnt. We present a wait-free algorithm for solving generalized lattice agreement. The algorithm guarantees that every value received by a correct process is learnt in O(N) message delays. We show that this algorithm can be used to implement a class of replicated state machines where (a) commands can be classified as reads and updates, and (b) all update commands commute. This algorithm can be used to realize serializable and linearizable replicated versions of commonly used data types.", "The snapshot object is an important tool for constructing wait-free asynchronous algorithms. We relate the snapshot object to the lattice agreement decision problem. It is shown that any algorithm for solving lattice agreement can be transformed into an implementation of a snapshot object. The overhead cost of this transformation is only a linear number of read and write operations on atomic single-writer multi-reader registers. The transformation uses an unbounded amount of shared memory. We present a deterministic algorithm for lattice agreement that used O(log2 n) operations on 2-processor Test & Set registers, plus O(n) operations on atomic single-writer multi-reader registers. The shared objects are used by the algorithm in a dynamic mode, that is, the identity of the processors that access each of the shared objects is determined dynamically during the execution of the algorithm. By a randomized implementation of 2-processors Test & Set registers from atomic registers, this algorithm implies a randomized algorithm for lattice agreement that uses an expected number of O(n) operations on (dynamic) atomic single-writer multi-reader registers. Combined with our transformation this yields implementations of atomic snapshots with the same complexity.", "This paper studies the lattice agreement problem and the generalized lattice agreement problem in distributed message passing systems. In the lattice agreement problem, given input values from a lattice, processes have to non-trivially decide output values that lie on a chain. We consider the lattice agreement problem in both synchronous and asynchronous systems. 
For synchronous lattice agreement, we present two algorithms which run in @math and @math rounds, respectively, where @math denotes the height of the input sublattice @math , @math is the number of crash failures the system can tolerate, and @math is the number of processes in the system. These algorithms have significantly better round complexity than previously known algorithms. The algorithm by Attiya et al. (attiya1995atomic) takes @math synchronous rounds, and the algorithm by Mavronicolas (mavronicolasabound) takes @math rounds. For asynchronous lattice agreement, we propose an algorithm which has time complexity of @math message delays, which improves on the previously known time complexity of @math message delays. The generalized lattice agreement problem defined by Faleiro et al. (faleiro2012generalized) is a generalization of the lattice agreement problem where it is applied for the replicated state machine. We propose an algorithm which guarantees liveness when a majority of the processes are correct in asynchronous systems. Our algorithm requires @math units of time in the worst case, which is better than @math units of time required by the algorithm of Faleiro et al. (faleiro2012generalized).", "In a shared-memory system, n independent asynchronous processes, with distinct names in the range 0, ..., N-1 , communicate by reading and writing to shared registers. An algorithm is wait-free if a process completes its execution regardless of the behavior of other processes. This paper considers wait-free algorithms whose complexity adjusts to the level of contention in the system: An algorithm is adaptive (to total contention) if its step complexity depends only on the actual number of active processes, k; this number is unknown in advance and may change in different executions of the algorithm. Adaptive algorithms are presented for two important decision problems, lattice agreement and (6k-1)-renaming; the step complexity of both algorithms is O(k log k). An interesting component of the (6k-1)-renaming algorithm is an O(N) algorithm for (2k-1)-renaming; this improves on the best previously known (2k-1)-renaming algorithm, which has O(Nnk) step complexity. The efficient renaming algorithm can be modified into an O(N) implementation of atomic snapshots using dynamic single-writer multi-reader registers. The best known implementations of atomic snapshots have step complexity O(N log N) using static single-writer multi-reader registers, and O(N) using multi-writer multi-reader registers." ] }
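The record above quotes several abstracts on approximate agreement tolerating t faulty processes. As a purely illustrative aid, here is a minimal Python sketch of the classic "discard the t extremes and average" update step used in algorithms of this family; it is not the algorithm of any specific cited paper, and the function name and example values are hypothetical.

```python
def trimmed_mean_step(received_values, t):
    """One update step in the style of classic approximate agreement:
    discard the t smallest and t largest received values (bounding the
    influence of up to t Byzantine senders) and average the rest.
    Assumes len(received_values) > 3 * t so some values survive trimming.
    """
    vals = sorted(received_values)
    trimmed = vals[t:len(vals) - t]
    return sum(trimmed) / len(trimmed)

# Example: 7 processes, at most t = 2 faulty; two outlandish Byzantine
# values cannot drag the update outside the range of the correct inputs.
print(trimmed_mean_step([0.9, 1.0, 1.1, 1.2, 1.0, 100.0, -50.0], t=2))
```

Because at most t of the received values are faulty, the largest and smallest surviving values are each bracketed by some correct value, so the trimmed mean stays inside the range of the correct inputs.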
1908.02743
2965212953
Consider a distributed system with @math processors out of which @math can be Byzantine faulty. In the approximate agreement task, each processor @math receives an input value @math and has to decide on an output value @math such that - the output values are in the convex hull of the non-faulty processors' input values, - the output values are within distance @math of each other. Classically, the values are assumed to be from an @math -dimensional Euclidean space, where @math . In this work, we study the task in a discrete setting, where input values have some structure expressible as a graph. Namely, the input values are vertices of a finite graph @math and the goal is to output vertices that are within distance @math of each other in @math , but still remain in the graph-induced convex hull of the input values. For @math , the task reduces to consensus and cannot be solved with a deterministic algorithm in an asynchronous system even with a single crash fault. For any @math , we show that the task is solvable in asynchronous systems when @math is chordal and @math , where @math is the clique number of @math . In addition, we give the first Byzantine-tolerant algorithm for a variant of lattice agreement. For synchronous systems, we show tight resilience bounds for the exact variants of these and related tasks over a large class of combinatorial structures.
Another class of structured agreement problems in the wait-free asynchronous setting are loop agreement tasks @cite_32 , which generalise @math -set agreement and approximate agreement (e.g., @math -set agreement and one-dimensional approximate agreement). In loop agreement, the set of inputs consists of three distinct vertices on a loop in a 2-dimensional simplicial complex and the outputs are vertices of the complex with certain constraints, whereas rendezvous tasks are a generalisation of loop agreement to higher dimensions @cite_39 . These tasks are part of a large body of work exploring the deep connection between asynchronous computability and combinatorial topology, which has successfully been used to characterise the solvability of various distributed tasks @cite_11 . Gafni and Kuznetsov's @math -reconciliation task @cite_17 achieves geodesic approximate agreement on a graph of system configurations.
{ "cite_N": [ "@cite_17", "@cite_11", "@cite_32", "@cite_39" ], "mid": [ "1522404823", "1526652053", "2011451665", "2162458446" ], "abstract": [ "Objects like queue, swap, and test-and-set allow two processes to reach consensus, and are consequently \"universal\" for a system of two processes. But are there deterministic objects that do not solve 2-process consensus, and nevertheless allow two processes to solve a task that is not otherwise wait-free solvable in read-write shared memory? The answer \"no\" is a simple corollary of the main result of this paper: Let A be a deterministic object such that no protocol solves consensus among n+1 processes using copies of A and read-write registers. If a task T is wait-free solvable by n + 1 processes using read-write shared-memory and copies of A, then T is also wait-free solvable when copies of A are replaced with n-consensus objects. Thus, from the task-solvability perspective, n-consensus is the second strongest object (after (n+1)-consensus) in deterministic shared memory systems of n+1 processes, i.e., there is a distinct gap between n- and (n + 1)-consensus. We derive this result by showing that any (n+1)-process protocol P that uses objects A can be emulated using only n-consensus objects. The resulting emulation is non-blocking and relies on an a priori knowledge of P. The emulation technique is another important contribution of this paper.", "Distributed Computing Through Combinatorial Topology describes techniques for analyzing distributed algorithms based on award winning combinatorial topology research. The authors present a solid theoretical foundation relevant to many real systems reliant on parallelism with unpredictable delays, such as multicore microprocessors, wireless networks, distributed systems, and Internet protocols. Today, a new student or researcher must assemble a collection of scattered conference publications, which are typically terse and commonly use different notations and terminologies. This book provides a self-contained explanation of the mathematics to readers with computer science backgrounds, as well as explaining computer science concepts to readers with backgrounds in applied mathematics. The first section presents mathematical notions and models, including message passing and shared-memory systems, failures, and timing models. The next section presents core concepts in two chapters each: first, proving a simple result that lends itself to examples and pictures that will build up readers' intuition; then generalizing the concept to prove a more sophisticated result. The overall result weaves together and develops the basic concepts of the field, presenting them in a gradual and intuitively appealing way. The book's final section discusses advanced topics typically found in a graduate-level course for those who wish to explore further. Gathers knowledge otherwise spread across research and conference papers using consistent notations and a standard approach to facilitate understanding. Presents unique insights applicable to multiple computing fields, including multicore microprocessors, wireless networks, distributed systems, and Internet protocols. Synthesizes and distills material into a simple, unified presentation with examples, illustrations, and exercises", "Loop agreement is a family of wait-free tasks that includes instances of set agreement and approximate agreement tasks. 
A task G implements task F if one can construct a solution to F from a solution to G, possibly followed by access to a read-write memory. Loop agreement tasks form a lattice under this notion of implementation. This paper presents a classification of loop agreement tasks. Each loop agreement task can be assigned an algebraic signature consisting of a finitely presented group G and a distinguished element g in G. This signature characterizes the task's power to implement other tasks. If F and G are loop agreement tasks with respective signatures 〈F,f〉 and 〈G,g〉, then F implements G if and only if there exists a group homomorphism h : F → G carrying f to g.", "The rendezvous is a type of distributed decision tasks including many well-known tasks such as set agreement, simplex agreement, and approximation agreement. An n-dimensional rendezvous task, n>=1, allows n+2 distinct input values, and each execution produces at most n+2 distinct output values. A rendezvous task is said to implement another if an instance of its solution, followed by a protocol based on shared read-write registers, solves the other. The notion of implementation induces a classification of rendezvous tasks of every dimension: two tasks belong to the same class if they implement each other. Previous work on classifying rendezvous tasks only focused on 1-dimensional ones. This paper solves an open problem by presenting the classification of nice rendezvous of arbitrary dimension. An n-dimensional rendezvous task is said to be nice if the qth reduced homology group of its decision space is trivial for q < n, and free for q=n. Well-known examples are set agreement, simplex agreement, and approximation agreement. Each n-dimensional rendezvous task is assigned an algebraic signature, which consists of the nth homology group of the decision space, as well as a distinguished element in the group. It is shown that an n-dimensional nice rendezvous task implements another if and only if there is a homomorphism from its signature to that of the other. Hence the computational power of a nice rendezvous task is completely characterized by its signature. In each dimension, there are infinitely many classes of rendezvous tasks, and exactly countable classes of nice ones. A representative is explicitly constructed for each class of nice rendezvous tasks." ] }
1908.02571
2964743966
Informing professionals about the latest research results in their field is a particularly important task in the field of health care, since any development in this field directly improves the health status of the patients. Meanwhile, social media is an infrastructure that allows public instant sharing of information; thus, it has recently become popular in medical applications. In this study, we apply Multi Distance Knowledge Graph Embeddings (MDE) to link physicians and surgeons to the latest medical breakthroughs that are shared as the research results on Twitter. Our study shows that, using this method, physicians can be informed about the new findings in their field, given that they have an account dedicated to their profession.
Classic link prediction methods on social media use graph properties of the social network or NLP features of nodes to predict links between entities. For example, @cite_3 is based solely on graph features, and @cite_8 uses a similar technique for social networks in healthcare. Meanwhile, @cite_14 uses common words to cluster and rank nodes and, based on that, predicts closely-ranked nodes to be connected. Another study @cite_10 uses a combination of graph features and keyword matches to train classifiers (SVM, Naive Bayes, etc.) to predict whether a link exists between two nodes.
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_3", "@cite_8" ], "mid": [ "2071018679", "2768375068", "2737056279", "1996564243" ], "abstract": [ "In this paper we address the problem of discovering missing hypertext links in Wikipedia. The method we propose consists of two steps: first, we compute a cluster of highly similar pages around a given page, and then we identify candidate links from those similar pages that might be missing on the given page. The main innovation is in the algorithm that we use for identifying similar pages, LTRank, which ranks pages using co-citation and page title information. Both LTRank and the link discovery method are manually evaluated and show acceptable results, especially given the simplicity of the methods and conservativeness of the evaluation criteria.", "Social network analysis has attracted much attention in recent years. Link prediction is a key research directions within this area. In this research, we study link prediction as a supervised learning task. Along the way, we identify a set of features that are key to the superior performance under the supervised learning setup. The identified features are very easy to compute, and at the same time surprisingly effective in solving the link prediction problem. We also explain the effectiveness of the features from their class density distribution. Then we compare different classes of supervised learning algorithms in terms of their prediction performance using various performance metrics, such as accuracy, precision-recall, F-values, squared error etc. with a 5-fold cross validation. Our results on two practical social network datasets shows that most of the well-known classification algorithms (decision tree, k-nn,multilayer perceptron, SVM, rbf network) can predict link with surpassing performances, but SVM defeats all of them with narrow margin in all different performance measures. Again, ranking of features with popular feature ranking algorithms shows that a small subset of features always plays a significant role in the link prediction job.", "With over 300 million active users, Twitter is among the largest online news and social networking services in existence today. Open access to information on Twitter makes it a valuable source of data for research on social interactions, sentiment analysis, content diffusion, link prediction, and the dynamics behind human collective behaviour in general. Here we use Twitter data to construct co-occurrence language networks based on hashtags and based on all the words in tweets, and we use these networks to study link prediction by means of different methods and evaluation metrics. In addition to using five known methods, we propose two effective weighted similarity measures, and we compare the obtained outcomes in dependence on the selected semantic context of topics on Twitter. We find that hashtag networks yield to a large degree equal results as all-word networks, thus supporting the claim that hashtags alone robustly capture the semantic context of tweets, and as such are useful and suitable for studying the content and categorization. We also introduce ranking diagrams as an efficient tool for the comparison of the performance of different link prediction algorithms across multiple datasets. 
Our research indicates that successful link prediction algorithms work well in correctly foretelling highly probable links even if the information about a network structure is incomplete, and they do so even if the semantic context is rationalized to hashtags.", "Prediction is one of the most attractive aspects in data mining. Link prediction has recently attracted the attention of many researchers as an effective technique to be used in graph based models in general and in particular for social network analysis due to the recent popularity of the field. Link prediction helps to understand associations between nodes in social communities. Existing link prediction-related approaches described in the literature are limited to predict links that are anticipated to exist in the future. To the best of our knowledge, none of the previous works in this area has explored the prediction of links that could disappear in the future. We argue that the latter set of links are important to know about; they are at least equally important as and do complement the positive link prediction process in order to plan better for the future. In this paper, we propose a link prediction model which is capable of predicting both links that might exist and links that may disappear in the future. The model has been successfully applied in two different though very related domains, namely health care and gene expression networks. The former application concentrates on physicians and their interactions while the second application covers genes and their interactions. We have tested our model using different classifiers and the reported results are encouraging. Finally, we compare our approach with the internal links approach and we reached the conclusion that our approach performs very well in both bipartite and non-bipartite graphs." ] }
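The related-work paragraph of record 1908.02571 above describes classic link prediction: compute graph features for a candidate pair and feed them to a classifier. Below is a minimal, self-contained Python sketch of two such features (common neighbours and the Jaccard coefficient) over an adjacency-set dictionary; the toy graph and function names are illustrative assumptions, not code from any cited paper.

```python
def common_neighbors(adj, u, v):
    # Number of nodes adjacent to both u and v.
    return len(adj[u] & adj[v])

def jaccard(adj, u, v):
    # Overlap of the two neighbourhoods, normalised by their union.
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

# Tiny illustrative graph as an adjacency-set dictionary.
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}

# Score all non-adjacent pairs; in a supervised setup these scores would
# become features for a classifier (SVM, Naive Bayes, ...), as described above.
nodes = sorted(adj)
for i, u in enumerate(nodes):
    for v in nodes[i + 1:]:
        if v not in adj[u]:
            print(u, v, common_neighbors(adj, u, v), round(jaccard(adj, u, v), 2))
```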
1908.02571
2964743966
Informing professionals about the latest research results in their field is a particularly important task in the field of health care, since any development in this field directly improves the health status of the patients. Meanwhile, social media is an infrastructure that allows public instant sharing of information; thus, it has recently become popular in medical applications. In this study, we apply Multi Distance Knowledge Graph Embeddings (MDE) to link physicians and surgeons to the latest medical breakthroughs that are shared as the research results on Twitter. Our study shows that, using this method, physicians can be informed about the new findings in their field, given that they have an account dedicated to their profession.
TransE @cite_13 is an embedding model that is popular because of its simplicity and efficiency. It represents a relation in a KG as a translation between the vectors representing the entities it connects. The score function describing these vectors in TransE is:
{ "cite_N": [ "@cite_13" ], "mid": [ "2127795553" ], "abstract": [ "We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples." ] }
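The related-work note in the record above ends with a colon but does not show the TransE score function itself. For reference, a standard way to write the translation-based TransE score (consistent with the @cite_13 abstract, though the exact notation used in the original passage is an assumption) is:

```latex
% TransE models a relation r as a translation, so a true triple (h, r, t)
% should satisfy e_h + e_r \approx e_t; the score is the negated distance.
f(h, r, t) = -\,\lVert \mathbf{e}_h + \mathbf{e}_r - \mathbf{e}_t \rVert_{p},
\qquad p \in \{1, 2\}
```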
1908.02675
2966014518
Consensus is one of the most fundamental distributed computing problems. In particular, it serves as a building block in many replication based fault-tolerant systems and in particular in multiple recent blockchain solutions. Depending on its exact variant and other environmental assumptions, solving consensus requires multiple communication rounds. Yet, there are known optimistic protocols that guarantee termination in a single communication round under favorable conditions. In this paper we present a generic optimizer that can turn any consensus protocol into an optimized protocol that terminates in a single communication round whenever all nodes start with the same predetermined value and no Byzantine failures occur (although node crashes are allowed). This is regardless of the network timing assumptions and additional oracle capabilities assumed by the base consensus protocol being optimized. In the case of benign failures, our optimizer works whenever the number of faulty nodes @math . For Byzantine behavior, our optimizer's resiliency depends on the validity variant sought. In the case of classical validity, it can accommodate @math Byzantine failures. With the more recent external validity function assumption, it works whenever @math . Either way, our optimizer only relies on oral messages, thereby imposing very light-weight crypto requirements.
The first work to explore one-communication-round consensus in the benign failure model is @cite_10 . The basic protocol in @cite_10 requires @math . That protocol is also extended to support a preferred value, which improves the resiliency requirement to @math , similar to our work. The main contribution of this paper compared to @cite_10 is our exploration of this problem under Byzantine failures and the fact that we present a single generic optimizer for both failure models.
{ "cite_N": [ "@cite_10" ], "mid": [ "1516351775" ], "abstract": [ "This paper presents a very simple consensus protocol that converges in a single communication step in favorable circumstances. Those situations occur when \"enough\" processes propose the same value. (\"Enough\" means \"at least (n-f)\" where f is the maximum number of processes that can crash in a set of n processes.) The protocol requires f < n/3. It is shown that this requirement is necessary. Moreover, if all the processes that propose a value do propose the same value, the protocol always terminates in one communication step. It is also shown that additional assumptions can help weaken the f < n/3 requirement to f < n/2." ] }
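The record above (and the quoted @cite_10 abstract) describe a one-step fast path: if the first replies all carry the same (preferred) value, decide immediately, otherwise fall back to a full consensus protocol. The Python sketch below illustrates only that decision rule under crash faults; the function names, the fallback hook and the concrete numbers are illustrative assumptions, not the cited or proposed protocol.

```python
def fast_path_decide(received, n, f, preferred, fallback):
    """Decide after one message exchange if the first n - f reports all carry
    the preferred value; otherwise defer to an underlying consensus protocol.
    `received` holds the values heard from n - f distinct processes.
    """
    assert len(received) >= n - f
    if all(v == preferred for v in received):
        return preferred            # one-step decision in the favourable case
    return fallback(received)       # e.g. invoke the base consensus protocol

# Illustrative use: 4 processes, 1 may crash, preferred value 0; the fallback
# here is a stand-in, not a real consensus protocol.
decision = fast_path_decide([0, 0, 0], n=4, f=1, preferred=0,
                            fallback=lambda vs: max(vs, key=vs.count))
print(decision)
```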
1908.02675
2966014518
Consensus is one of the most fundamental distributed computing problems. In particular, it serves as a building block in many replication based fault-tolerant systems and in particular in multiple recent blockchain solutions. Depending on its exact variant and other environmental assumptions, solving consensus requires multiple communication rounds. Yet, there are known optimistic protocols that guarantee termination in a single communication round under favorable conditions. In this paper we present a generic optimizer that can turn any consensus protocol into an optimized protocol that terminates in a single communication round whenever all nodes start with the same predetermined value and no Byzantine failures occur (although node crashes are allowed). This is regardless of the network timing assumptions and additional oracle capabilities assumed by the base consensus protocol being optimized. In the case of benign failures, our optimizer works whenever the number of faulty nodes @math . For Byzantine behavior, our optimizer's resiliency depends on the validity variant sought. In the case of classical validity, it can accommodate @math Byzantine failures. With the more recent external validity function assumption, it works whenever @math . Either way, our optimizer only relies on oral messages, thereby imposing very light-weight crypto requirements.
The work of @cite_18 explored simple Byzantine consensus protocols that can terminate in a single communication round whenever all nodes start with the same value and certain failures do not manifest. Yet, the probabilistic protocol of @cite_18 required @math , while their deterministic protocol needed @math . In contrast, our optimizer, when instantiated for Byzantine failures, can withstand up to @math with the classical validity definition (and @math with external validity, which was not explored in @cite_18 ). This is due to biasing the consensus towards a preferred value. The price we pay compared to @cite_18 is that if all nodes start with the non-preferred value and the respective failures do not manifest, the protocol in @cite_18 terminates in a single communication step while our optimizer has to invoke the full protocol.
{ "cite_N": [ "@cite_18" ], "mid": [ "2030698973" ], "abstract": [ "This paper is on the consensus problem in asynchronous distributed systems where (up to f) processes (among n) can exhibit a Byzantine behavior, i.e., can deviate arbitrarily from their specification. One way to solve the consensus problem in such a context consists of enriching the system with additional oracles that are powerful enough to cope with the uncertainty and unpredictability created by the combined effect of Byzantine behavior and asynchrony. This paper presents two kinds of Byzantine asynchronous consensus protocols using two types of oracles, namely, a common coin that provides processes with random values and a failure detector oracle. Both allow the processes to decide in one communication step in favorable circumstances. The first is a randomized protocol for an oblivious scheduler model that assumes n > 6f. The second one is a failure detector-based protocol that assumes n > tif. These protocols are designed to be particularly simple and efficient in terms of communication steps, the number of messages they generate in each step, and the size of messages. So, although they are not optimal in the number of Byzantine processes that can be tolerated, they are particularly efficient when we consider the number of communication steps they require to decide and the number and size of the messages they use. In that sense, they are practically appealing." ] }
1908.02675
2966014518
Consensus is one of the most fundamental distributed computing problems. In particular, it serves as a building block in many replication based fault-tolerant systems and in particular in multiple recent blockchain solutions. Depending on its exact variant and other environmental assumptions, solving consensus requires multiple communication rounds. Yet, there are known optimistic protocols that guarantee termination in a single communication round under favorable conditions. In this paper we present a generic optimizer that can turn any consensus protocol into an optimized protocol that terminates in a single communication round whenever all nodes start with the same predetermined value and no Byzantine failures occur (although node crashes are allowed). This is regardless of the network timing assumptions and additional oracle capabilities assumed by the base consensus protocol being optimized. In the case of benign failures, our optimizer works whenever the number of faulty nodes @math . For Byzantine behavior, our optimizer's resiliency depends on the validity variant sought. In the case of classical validity, it can accommodate @math Byzantine failures. With the more recent external validity function assumption, it works whenever @math . Either way, our optimizer only relies on oral messages, thereby imposing very light-weight crypto requirements.
Traditional deterministic Byzantine consensus protocols, most notably PBFT @cite_15 , require at least three communication rounds to terminate. Multiple works that reduce this number have been published, each presenting a unique optimization. The Q/U work presented a client-driven protocol @cite_17 which enables termination in two communication rounds when favorable conditions are met. Yet, its resiliency requirement is @math , compared to our @math for classical validity and @math for external validity. The HQ work improved the resiliency of Q/U to @math , yet does not perform well under high network load @cite_2 . Also, our optimizer is generic whereas Q/U and HQ are specialized solutions, each tailored to its intricate protocol.
{ "cite_N": [ "@cite_15", "@cite_2", "@cite_17" ], "mid": [ "2126087831", "2129467152", "2147524598" ], "abstract": [ "This paper describes a new replication algorithm that is able to tolerate Byzantine faults. We believe that Byzantine-fault-tolerant algorithms will be increasingly important in the future because malicious attacks and software errors are increasingly common and can cause faulty nodes to exhibit arbitrary behavior. Whereas previous algorithms assumed a synchronous system or were too slow to be used in practice, the algorithm described in this paper is practical: it works in asynchronous environments like the Internet and incorporates several important optimizations that improve the response time of previous algorithms by more than an order of magnitude. We implemented a Byzantine-fault-tolerant NFS service using our algorithm and measured its performance. The results show that our service is only 3% slower than a standard unreplicated NFS.", "There are currently two approaches to providing Byzantine-fault-tolerant state machine replication: a replica-based approach, e.g., BFT, that uses communication between replicas to agree on a proposed ordering of requests, and a quorum-based approach, such as Q/U, in which clients contact replicas directly to optimistically execute operations. Both approaches have shortcomings: the quadratic cost of inter-replica communication is unnecessary when there is no contention, and Q/U requires a large number of replicas and performs poorly under contention. We present HQ, a hybrid Byzantine-fault-tolerant state machine replication protocol that overcomes these problems. HQ employs a lightweight quorum-based protocol when there is no contention, but uses BFT to resolve contention when it arises. Furthermore, HQ uses only 3f + 1 replicas to tolerate f faults, providing optimal resilience to node failures. We implemented a prototype of HQ, and we compare its performance to BFT and Q/U analytically and experimentally. Additionally, in this work we use a new implementation of BFT designed to scale as the number of faults increases. Our results show that both HQ and our new implementation of BFT scale as f increases; additionally our hybrid approach of using BFT to handle contention works well.", "A fault-scalable service can be configured to tolerate increasing numbers of faults without significant decreases in performance. The Query/Update (Q/U) protocol is a new tool that enables construction of fault-scalable Byzantine fault-tolerant services. The optimistic quorum-based nature of the Q/U protocol allows it to provide better throughput and fault-scalability than replicated state machines using agreement-based protocols. A prototype service built using the Q/U protocol outperforms the same service built using a popular replicated state machine implementation at all system sizes in experiments that permit an optimistic execution. Moreover, the performance of the Q/U protocol decreases by only 36% as the number of Byzantine faults tolerated increases from one to five, whereas the performance of the replicated state machine decreases by 83%." ] }
1908.02675
2966014518
Consensus is one of the most fundamental distributed computing problems. In particular, it serves as a building block in many replication based fault-tolerant systems and in particular in multiple recent blockchain solutions. Depending on its exact variant and other environmental assumptions, solving consensus requires multiple communication rounds. Yet, there are known optimistic protocols that guarantee termination in a single communication round under favorable conditions. In this paper we present a generic optimizer that can turn any consensus protocol into an optimized protocol that terminates in a single communication round whenever all nodes start with the same predetermined value and no Byzantine failures occur (although node crashes are allowed). This is regardless of the network timing assumptions and additional oracle capabilities assumed by the base consensus protocol being optimized. In the case of benign failures, our optimizer works whenever the number of faulty nodes @math . For Byzantine behavior, our optimizer's resiliency depends on the validity variant sought. In the case of classical validity, it can accommodate @math Byzantine failures. With the more recent external validity function assumption, it works whenever @math . Either way, our optimizer only relies on oral messages, thereby imposing very light-weight crypto requirements.
The Fast Byzantine Consensus (FaB) protocol was the first to implement Byzantine consensus that terminates in two communication phases in the normal case while requiring @math @cite_29 . The normal case in @cite_29 is defined as when there is a unique correct leader, all correct acceptors agree on its identity, and the system is in a period of synchrony. This protocol translates into a @math -phase state machine replication protocol. Another variant can accommodate @math , where @math is the upper bound on the number of non-leaders suffering Byzantine failures.
{ "cite_N": [ "@cite_29" ], "mid": [ "2058322902" ], "abstract": [ "We present the first protocol that reaches asynchronous Byzantine consensus in two communication steps in the common case. We prove that our protocol is optimal in terms of both number of communication steps and number of processes for two-step consensus. The protocol can be used to build a replicated state machine that requires only three communication steps per request in the common case. Further, we show a parameterized version of the protocol that is safe despite f Byzantine failures and, in the common case, guarantees two-step execution despite some number t of failures (t ≤ f). We show that this parameterized two-step consensus protocol is also optimal in terms of both number of communication steps and number of processes." ] }
1908.02675
2966014518
Consensus is one of the most fundamental distributed computing problems. In particular, it serves as a building block in many replication based fault-tolerant systems and in particular in multiple recent blockchain solutions. Depending on its exact variant and other environmental assumptions, solving consensus requires multiple communication rounds. Yet, there are known optimistic protocols that guarantee termination in a single communication round under favorable conditions. In this paper we present a generic optimizer that can turn any consensus protocol into an optimized protocol that terminates in a single communication round whenever all nodes start with the same predetermined value and no Byzantine failures occur (although node crashes are allowed). This is regardless of the network timing assumptions and additional oracle capabilities assumed by the base consensus protocol being optimized. In the case of benign failures, our optimizer works whenever the number of faulty nodes @math . For Byzantine behavior, our optimizer's resiliency depends on the validity variant sought. In the case of classical validity, it can accommodate @math Byzantine failures. With the more recent external validity function assumption, it works whenever @math . Either way, our optimizer only relies on oral messages, thereby imposing very light-weight crypto requirements.
Zyzzyva is a client-driven protocol @cite_24 which terminates after @math communication rounds (including the communication between the client and the replicas) whenever the client receives identical replies from all @math replicas. Our optimizer obtains termination in a single communication round among the replicas even when up to @math of them may crash or be slow. This is achieved by relying on all-to-all communication, and by ensuring fast termination only when the preferred value is included in the first @math replies. Also, our optimizer is generic while Zyzzyva and FaB are specialized solutions.
{ "cite_N": [ "@cite_24" ], "mid": [ "2139359217" ], "abstract": [ "We present Zyzzyva, a protocol that uses speculation to reduce the cost and simplify the design of Byzantine fault tolerant state machine replication. In Zyzzyva, replicas respond to a client's request without first running an expensive three-phase commit protocol to reach agreement on the order in which the request must be processed. Instead, they optimistically adopt the order proposed by the primary and respond immediately to the client. Replicas can thus become temporarily inconsistent with one another, but clients detect inconsistencies, help correct replicas converge on a single total ordering of requests, and only rely on responses that are consistent with this total order. This approach allows Zyzzyva to reduce replication overheads to near their theoretical minimal." ] }
1908.02675
2966014518
Consensus is one of the most fundamental distributed computing problems. In particular, it serves as a building block in many replication based fault-tolerant systems and in particular in multiple recent blockchain solutions. Depending on its exact variant and other environmental assumptions, solving consensus requires multiple communication rounds. Yet, there are known optimistic protocols that guarantee termination in a single communication round under favorable conditions. In this paper we present a generic optimizer that can turn any consensus protocol into an optimized protocol that terminates in a single communication round whenever all nodes start with the same predetermined value and no Byzantine failures occur (although node crashes are allowed). This is regardless of the network timing assumptions and additional oracle capabilities assumed by the base consensus protocol being optimized. In the case of benign failures, our optimizer works whenever the number of faulty nodes @math . For Byzantine behavior, our optimizer's resiliency depends on the validity variant sought. In the case of classical validity, it can accommodate @math Byzantine failures. With the more recent external validity function assumption, it works whenever @math . Either way, our optimizer only relies on oral messages, thereby imposing very light-weight crypto requirements.
The condition-based approach for solving consensus identifies various sets of input values that enable solving consensus fast @cite_7 . This is done by treating the set of input values held by all processes as an input vector to the problem. Specifically, the work in @cite_11 showed that when the possible input vectors correspond to error-correcting codes, consensus is solvable in a single communication round regardless of synchrony assumptions.
{ "cite_N": [ "@cite_7", "@cite_11" ], "mid": [ "2021482754", "2139686511" ], "abstract": [ "This article introduces and explores the condition-based approach to solve the consensus problem in asynchronous systems. The approach studies conditions that identify sets of input vectors for which it is possible to solve consensus despite the occurrence of up to f process crashes. The first main result defines acceptable conditions and shows that these are exactly the conditions for which a consensus protocol exists. Two examples of realistic acceptable conditions are presented, and proved to be maximal, in the sense that they cannot be extended and remain acceptable. The second main result is a generic consensus shared-memory protocol for any acceptable condition. The protocol always guarantees agreement and validity, and terminates (at least) when the inputs satisfy the condition with which the protocol has been instantiated, or when there are no crashes. An efficient version of the protocol is then designed for the message passing model that works when f < n/2, and it is shown that no such protocol exists when f ≥ n/2. It is also shown how the protocol's safety can be traded for its liveness.", "The condition-based approach identifies sets of input vectors, called conditions, for which it is possible to design an asynchronous protocol solving a distributed problem despite process crashes. This paper establishes a direct correlation between distributed agreement problems and error-correcting codes. In particular, crash failures in distributed agreement problems correspond to erasure failures in error-correcting codes and Byzantine and value domain faults correspond to corruption errors. This correlation is exemplified by concentrating on two well-known agreement problems, namely, consensus and interactive consistency, in the context of the condition-based approach. Specifically, the paper presents the following results: first, it shows that the conditions that allow interactive consistency to be solved despite fc crashes and fc value domain faults correspond exactly to the set of error-correcting codes capable of recovering from fc erasures and fc corruptions. Second, the paper proves that consensus can be solved despite fc crash failures if the condition corresponds to a code whose Hamming distance is fc + 1 and Byzantine consensus can be solved despite fb Byzantine faults if the Hamming distance of the code is 2 fb + 1. Finally, the paper uses the above relations to establish several results in distributed agreement that are derived from known results in error-correcting codes and vice versa." ] }
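The record above relates one-step consensus to the condition-based approach and error-correcting codes (@cite_11). A small worked illustration of that correspondence, under the quoted theorem that a condition with Hamming distance f+1 suffices to solve consensus despite f crashes, is the unanimous-input (repetition) condition; the notation below is an assumption chosen for this illustration only.

```latex
% Repetition condition: only unanimous input vectors are allowed.
C = \{\, (v, v, \dots, v) \mid v \in V \,\}, \qquad
d_H\big((v,\dots,v),\,(w,\dots,w)\big) = n \quad \text{for } v \neq w .
% With at most f crashed entries (erasures) and n >= f + 1, the surviving
% n - f entries already determine the unique codeword, so every process can
% decide v after a single exchange -- matching the unanimous-input fast path
% discussed in the records above.
```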
1908.02484
2965761271
Fitting model parameters to a set of noisy data points is a common problem in computer vision. In this work, we fit the 6D camera pose to a set of noisy correspondences between the 2D input image and a known 3D environment. We estimate these correspondences from the image using a neural network. Since the correspondences often contain outliers, we utilize a robust estimator such as Random Sample Consensus (RANSAC) or Differentiable RANSAC (DSAC) to fit the pose parameters. When the problem domain, e.g. the space of all 2D-3D correspondences, is large or ambiguous, a single network does not cover the domain well. Mixture of Experts (MoE) is a popular strategy to divide a problem domain among an ensemble of specialized networks, so called experts, where a gating network decides which expert is responsible for a given input. In this work, we introduce Expert Sample Consensus (ESAC), which integrates DSAC in a MoE. Our main technical contribution is an efficient method to train ESAC jointly and end-to-end. We demonstrate experimentally that ESAC handles two real-world problems better than competing methods, i.e. scalability and ambiguity. We apply ESAC to fitting simple geometric models to synthetic images, and to camera re-localization for difficult, real datasets.
In contrast, Mixture of Experts (MoE) @cite_40 employs a divide-and-conquer strategy where each base-learner, called an expert, specializes in one part of the problem domain. An additional gating network assesses the relevance of each expert for a given input, and predicts an associated weight. The ensemble prediction is a weighted average of the experts' outputs. MoE has been trained by minimizing the expected training loss @cite_40 , maximizing the likelihood under a Gaussian mixture model interpretation @cite_40 , or using the expectation-maximization (EM) algorithm @cite_41 .
{ "cite_N": [ "@cite_41", "@cite_40" ], "mid": [ "2131320820", "2150884987" ], "abstract": [ "The human brain can be described as containing a number of functional regions. These regions, as well as the connections between them, play a key role in information processing in the brain. However, most existing multi-voxel pattern analysis approaches either treat multiple regions as one large uniform region or several independent regions, ignoring the connections between them. In this paper we propose to model such connections in an Hidden Conditional Random Field (HCRF) framework, where the classifier of one region of interest (ROI) makes predictions based on not only its voxels but also the predictions from ROIs that it connects to. Furthermore, we propose a structural learning method in the HCRF framework to automatically uncover the connections between ROIs. We illustrate this approach with fMRI data acquired while human subjects viewed images of different natural scene categories and show that our model can improve the top-level (the classifier combining information from all ROIs) and ROI-level prediction accuracy, as well as uncover some meaningful connections between ROIs.", "We present a new supervised learning procedure for systems composed of many separate networks, each of which learns to handle a subset of the complete set of training cases. The new procedure can be viewed either as a modular version of a multilayer supervised network, or as an associative version of competitive learning. It therefore provides a new link between these two apparently different approaches. We demonstrate that the learning procedure divides up a vowel discrimination task into appropriate subtasks, each of which can be solved by a very simple expert network." ] }
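The record above summarises Mixture of Experts: a gating network weights the experts' outputs into one prediction. Below is a minimal NumPy sketch of that forward pass; the dimensions, the linear "experts", and the softmax gating are illustrative assumptions, not the ESAC architecture or any cited implementation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D score vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)

# Three toy "experts", each a linear map from a 4-D input to a 2-D output,
# plus a linear gating network producing one relevance score per expert.
experts = [rng.normal(size=(2, 4)) for _ in range(3)]
gating = rng.normal(size=(3, 4))

x = rng.normal(size=4)

weights = softmax(gating @ x)                      # relevance of each expert
outputs = np.stack([W @ x for W in experts])       # per-expert predictions
prediction = (weights[:, None] * outputs).sum(0)   # gated weighted average

print(weights, prediction)
```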
1908.02484
2965761271
Fitting model parameters to a set of noisy data points is a common problem in computer vision. In this work, we fit the 6D camera pose to a set of noisy correspondences between the 2D input image and a known 3D environment. We estimate these correspondences from the image using a neural network. Since the correspondences often contain outliers, we utilize a robust estimator such as Random Sample Consensus (RANSAC) or Differentiable RANSAC (DSAC) to fit the pose parameters. When the problem domain, e.g. the space of all 2D-3D correspondences, is large or ambiguous, a single network does not cover the domain well. Mixture of Experts (MoE) is a popular strategy to divide a problem domain among an ensemble of specialized networks, so called experts, where a gating network decides which expert is responsible for a given input. In this work, we introduce Expert Sample Consensus (ESAC), which integrates DSAC in a MoE. Our main technical contribution is an efficient method to train ESAC jointly and end-to-end. We demonstrate experimentally that ESAC handles two real-world problems better than competing methods, i.e. scalability and ambiguity. We apply ESAC to fitting simple geometric models to synthetic images, and to camera re-localization for difficult, real datasets.
Scene coordinate regression methods @cite_58 @cite_10 @cite_2 @cite_43 @cite_28 @cite_44 @cite_4 @cite_29 @cite_13 @cite_1 also estimate 2D-3D correspondences between image and environment but do so densely for each pixel of the input image. This circumvents the need for a feature detector with the aforementioned drawbacks of feature-based methods. Brachmann et al. @cite_33 combine a neural network for scene coordinate regression with a differentiable RANSAC for an end-to-end trainable camera re-localization pipeline. Brachmann and Rother @cite_0 improve the pipeline's initialization and differentiable pose optimization to achieve state-of-the-art results for indoor camera re-localization from single RGB images. We build on and extend @cite_33 @cite_1 by combining them with our ESAC framework. Thereby, we are able to address two real-world problems: scalability and ambiguity in camera re-localization. Some scene coordinate regression methods use an ensemble of base learners, namely random forests @cite_58 @cite_2 @cite_43 @cite_28 @cite_44 @cite_29 @cite_13 . Guzman-Rivera et al. @cite_45 train the random forest in a boosting-like manner to diversify its predictions. Massiceti et al. @cite_59 map an ensemble of decision trees to an ensemble of neural networks. However, in none of these methods do the base-learners specialize in parts of the problem domain.
{ "cite_N": [ "@cite_13", "@cite_4", "@cite_33", "@cite_28", "@cite_29", "@cite_1", "@cite_44", "@cite_43", "@cite_0", "@cite_45", "@cite_59", "@cite_2", "@cite_58", "@cite_10" ], "mid": [ "", "", "2556455135", "", "", "", "", "", "2963856988", "2795645133", "2963053725", "", "1989476314", "" ], "abstract": [ "", "", "RANSAC is an important algorithm in robust optimization and a central building block for many computer vision applications. In recent years, traditionally hand-crafted pipelines have been replaced by deep learning pipelines, which can be trained in an end-to-end fashion. However, RANSAC has so far not been used as part of such deep learning pipelines, because its hypothesis selection procedure is non-differentiable. In this work, we present two different ways to overcome this limitation. The most promising approach is inspired by reinforcement learning, namely to replace the deterministic hypothesis selection by a probabilistic selection for which we can derive the expected loss w.r.t. to all learnable parameters. We call this approach DSAC, the differentiable counterpart of RANSAC. We apply DSAC to the problem of camera localization, where deep learning has so far failed to improve on traditional approaches. We demonstrate that by directly minimizing the expected loss of the output camera poses, robustly estimated by RANSAC, we achieve an increase in accuracy. In the future, any deep learning pipeline can use DSAC as a robust optimization component.", "", "", "", "", "", "Popular research areas like autonomous driving and augmented reality have renewed the interest in image-based camera localization. In this work, we address the task of predicting the 6D camera pose from a single RGB image in a given 3D environment. With the advent of neural networks, previous works have either learned the entire camera localization process, or multiple components of a camera localization pipeline. Our key contribution is to demonstrate and explain that learning a single component of this pipeline is sufficient. This component is a fully convolutional neural network for densely regressing so-called scene coordinates, defining the correspondence between the input image and the 3D scene space. The neural network is prepended to a new end-to-end trainable pipeline. Our system is efficient, highly accurate, robust in training, and exhibits outstanding generalization capabilities. It exceeds state-of-the-art consistently on indoor and outdoor datasets. Interestingly, our approach surpasses existing techniques even without utilizing a 3D model of the scene during training, since the network is able to discover 3D scene geometry automatically, solely from single-view constraints.", "Maps are a key component in image-based camera localization and visual SLAM systems: they are used to establish geometric constraints between images, correct drift in relative pose estimation, and relocalize cameras after lost tracking. The exact definitions of maps, however, are often application-specific and hand-crafted for different scenarios (e.g. 3D landmarks, lines, planes, bags of visual words). We propose to represent maps as a deep neural net called MapNet, which enables learning a data-driven map representation. Unlike prior work on learning maps, MapNet exploits cheap and ubiquitous sensory inputs like visual odometry and GPS in addition to images and fuses them together for camera localization. 
Geometric constraints expressed by these inputs, which have traditionally been used in bundle adjustment or pose-graph optimization, are formulated as loss terms in MapNet training and also used during inference. In addition to directly improving localization accuracy, this allows us to update the MapNet (i.e., maps) in a self-supervised manner using additional unlabeled video sequences from the scene. We also propose a novel parameterization for camera rotation which is better suited for deep-learning based camera pose regression. Experimental results on both the indoor 7-Scenes dataset and the outdoor Oxford RobotCar dataset show significant performance improvement over prior work. The MapNet project webpage is https://goo.gl/mRB3Au.", "This work addresses the task of camera localization in a known 3D scene given a single input RGB image. State-of-the-art approaches accomplish this in two steps: firstly, regressing for every pixel in the image its 3D scene coordinate and subsequently, using these coordinates to estimate the final 6D camera pose via RANSAC. To solve the first step, Random Forests (RFs) are typically used. On the other hand, Neural Networks (NNs) reign in many dense regression tasks, but are not test-time efficient. We ask the question: which of the two is best for camera localization? To address this, we make two method contributions: (1) a test-time efficient NN architecture which we term a ForestNet that is derived and initialized from a RF, and (2) a new fully-differentiable robust averaging technique for regression ensembles which can be trained end-to-end with a NN. Our experimental findings show that for scene coordinate regression, traditional NN architectures are superior to test-time efficient RFs and ForestNets, however, this does not translate to final 6D camera pose accuracy where RFs and ForestNets perform slightly better. To summarize, our best method, a ForestNet with a robust average, which has an equivalent fast and lightweight RF, improves over the state-of-the-art for camera localization on the 7-Scenes dataset [1]. While this work focuses on scene coordinate regression for camera localization, our innovations may also be applied to other continuous regression tasks.", "", "We address the problem of inferring the pose of an RGB-D camera relative to a known 3D scene, given only a single acquired image. Our approach employs a regression forest that is capable of inferring an estimate of each pixel's correspondence to 3D points in the scene's world coordinate frame. The forest uses only simple depth and RGB pixel comparison features, and does not require the computation of feature descriptors. The forest is trained to be capable of predicting correspondences at any pixel, so no interest point detectors are required. The camera pose is inferred using a robust optimization scheme. This starts with an initial set of hypothesized camera poses, constructed by applying the forest at a small fraction of image pixels. Preemptive RANSAC then iterates sampling more pixels at which to evaluate the forest, counting inliers, and refining the hypothesized poses. We evaluate on several varied scenes captured with an RGB-D camera and observe that the proposed technique achieves highly accurate relocalization and substantially outperforms two state of the art baselines.", "" ] }
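The record above discusses differentiable RANSAC (@cite_33), where deterministic hypothesis selection is replaced by a probabilistic, softmax-based selection so an expected loss can be derived. The NumPy sketch below illustrates that idea on 2-D line fitting with a soft inlier score; the data, the scoring constants, and the sampling scheme are illustrative assumptions, not the DSAC or ESAC implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy points on the line y = 2x + 1, plus a few gross outliers.
x = rng.uniform(-1.0, 1.0, 40)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.05, 40)
y[:6] += rng.uniform(2.0, 4.0, 6)
pts = np.stack([x, y], axis=1)

def fit_line(p, q):
    # Slope/intercept hypothesis from a minimal set of two points.
    a = (q[1] - p[1]) / (q[0] - p[0] + 1e-9)
    return a, p[1] - a * p[0]

def soft_inlier_score(a, b, pts, tau=0.1, beta=50.0):
    # Sigmoid-relaxed count of points whose residual is below tau.
    r = np.abs(pts[:, 1] - (a * pts[:, 0] + b))
    z = np.clip(beta * (r - tau), -50.0, 50.0)
    return float((1.0 / (1.0 + np.exp(z))).sum())

# Sample hypotheses from random minimal sets and score them.
hyps, scores = [], []
for _ in range(32):
    i, j = rng.choice(len(pts), size=2, replace=False)
    h = fit_line(pts[i], pts[j])
    hyps.append(h)
    scores.append(soft_inlier_score(*h, pts))

# Probabilistic selection: a softmax over scores replaces the hard argmax,
# which is the step that makes expected-loss (end-to-end) training possible.
scores = np.array(scores)
p = np.exp(scores - scores.max())
p /= p.sum()
a, b = hyps[int(rng.choice(len(hyps), p=p))]
print(round(a, 2), round(b, 2))
```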
1908.02484
2965761271
Fitting model parameters to a set of noisy data points is a common problem in computer vision. In this work, we fit the 6D camera pose to a set of noisy correspondences between the 2D input image and a known 3D environment. We estimate these correspondences from the image using a neural network. Since the correspondences often contain outliers, we utilize a robust estimator such as Random Sample Consensus (RANSAC) or Differentiable RANSAC (DSAC) to fit the pose parameters. When the problem domain, e.g. the space of all 2D-3D correspondences, is large or ambiguous, a single network does not cover the domain well. Mixture of Experts (MoE) is a popular strategy to divide a problem domain among an ensemble of specialized networks, so called experts, where a gating network decides which expert is responsible for a given input. In this work, we introduce Expert Sample Consensus (ESAC), which integrates DSAC in a MoE. Our main technical contribution is an efficient method to train ESAC jointly and end-to-end. We demonstrate experimentally that ESAC handles two real-world problems better than competing methods, i.e. scalability and ambiguity. We apply ESAC to fitting simple geometric models to synthetic images, and to camera re-localization for difficult, real datasets.
In @cite_65 , Brachmann et al. train a joint classification-regression forest for camera re-localization. The forest classifies which part of the environment an input belongs to, and regresses relative scene coordinates for this part. More recently, image retrieval and relative pose regression have been combined in one system for good accuracy in @cite_51 . Both works, @cite_65 and @cite_51 , bear some resemblance to our strategy but utilize one large model without the benefit of efficient, conditional computation. Also, their models cannot be trained in an end-to-end fashion.
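The methods discussed above ultimately fit a 6D camera pose to predicted 2D-3D scene-coordinate correspondences with a RANSAC-style solver. The following is a minimal illustrative sketch of that final pose-fitting step using OpenCV's PnP-RANSAC solver; it is not the authors' pipeline, and the correspondence arrays are random placeholders standing in for the output of a scene-coordinate regressor.

```python
import numpy as np
import cv2

# Placeholder 2D-3D correspondences; in DSAC/ESAC-style pipelines these would
# come from a neural network regressing a 3D scene coordinate per pixel.
rng = np.random.default_rng(0)
scene_coords_3d = rng.uniform(-1.0, 1.0, size=(500, 3)).astype(np.float64)   # (N, 3)
pixel_coords_2d = rng.uniform(0.0, 640.0, size=(500, 2)).astype(np.float64)  # (N, 2)

# Simple pinhole intrinsics (assumed values for this sketch).
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume no lens distortion

# Robust pose fitting: RANSAC solves PnP on minimal subsets and keeps the
# hypothesis with the largest inlier set under a reprojection-error test.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    scene_coords_3d, pixel_coords_2d, K, dist_coeffs,
    iterationsCount=256, reprojectionError=8.0)

if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the estimated camera pose
    n_inliers = 0 if inliers is None else len(inliers)
    print("estimated translation:", tvec.ravel(), "inliers:", n_inliers)
```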
{ "cite_N": [ "@cite_51", "@cite_65" ], "mid": [ "2963210849", "2894971516" ], "abstract": [ "We seek to predict the 6 degree-of-freedom (6DoF) pose of a query photograph with respect to a large indoor 3D map. The contributions of this work are three-fold. First, we develop a new large-scale visual localization method targeted for indoor environments. The method proceeds along three steps: (i) efficient retrieval of candidate poses that ensures scalability to large-scale environments, (ii) pose estimation using dense matching rather than local features to deal with texture less indoor scenes, and (iii) pose verification by virtual view synthesis to cope with significant changes in viewpoint, scene layout, and occluders. Second, we collect a new dataset with reference 6DoF poses for large-scale indoor localization. Query photographs are captured by mobile phones at a different time than the reference 3D map, thus presenting a realistic indoor localization scenario. Third, we demonstrate that our method significantly outperforms current state-of-the-art indoor localization approaches on this new challenging data.", "We present an approach to robust estimation of fundamental matrices from noisy data contaminated by outliers. The problem is cast as a series of weighted homogeneous least-squares problems, where robust weights are estimated using deep networks. The presented formulation acts directly on putative correspondences and thus fits into standard 3D vision pipelines that perform feature extraction, matching, and model fitting. The approach can be trained end-to-end and yields computationally efficient robust estimators. Our experiments indicate that the presented approach is able to train robust estimators that outperform classic approaches on real data by a significant margin." ] }
1908.02402
2965998974
This paper proposes a novel end-to-end architecture for task-oriented dialogue systems. It is based on a simple and practical yet very effective sequence-to-sequence approach, where language understanding and state tracking tasks are modeled jointly with a structured copy-augmented sequential decoder and a multi-label decoder for each slot. The policy engine and language generation tasks are modeled jointly following that. The copy-augmented sequential decoder deals with new or unknown values in the conversation, while the multi-label decoder combined with the sequential decoder ensures the explicit assignment of values to slots. On the generation part, slot binary classifiers are used to improve performance. This architecture is scalable to real-world scenarios and is shown through an empirical evaluation to achieve state-of-the-art performance on both the Cambridge Restaurant dataset and the Stanford in-car assistant dataset The code is available at this https URL
Our work is related to end-to-end task-oriented dialogue systems in general (among others: BingNAACL18, Jason17, Lowe18, msr_challenge, BingGoogle17, Pawel18, bordes2016learning, HoriWHWHRHKJZA16, wen2016network, serban2016building) and to those that extend the Seq2Seq @cite_8 architecture in particular. Belief tracking, which is necessary to form KB queries, is not explicitly performed in the latter works. To compensate, some of these works adopt a copy mechanism that allows copying information retrieved from the KB into the generated response, while others adopt Memory Networks to memorize the retrieved KB entities and the words appearing in the dialogue history. These models scale linearly with the size of the KB and need to be retrained at each update of the KB. Both issues make these approaches less practical in real-world applications.
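The copy mechanisms mentioned above are commonly realized by mixing a vocabulary distribution with an attention-derived distribution over source (dialogue-history or KB) tokens. The snippet below is a minimal, self-contained sketch of that mixing step in PyTorch; it is not the architecture of any specific cited paper, and the tensor shapes and names are illustrative.

```python
import torch
import torch.nn.functional as F

batch, src_len, vocab = 2, 6, 50

# Pretend decoder outputs for one step: a vocabulary distribution, attention
# weights over the source tokens, and a scalar "generate vs. copy" gate.
vocab_logits = torch.randn(batch, vocab)
attn_scores = torch.randn(batch, src_len)
p_gen = torch.sigmoid(torch.randn(batch, 1))   # probability of generating vs copying

vocab_dist = F.softmax(vocab_logits, dim=-1)   # (batch, vocab)
copy_dist = F.softmax(attn_scores, dim=-1)     # (batch, src_len)

# Source token ids (e.g. KB entity tokens appearing in the dialogue history).
src_ids = torch.randint(0, vocab, (batch, src_len))

# Scatter the copy probabilities onto the vocabulary axis and mix the two.
final_dist = p_gen * vocab_dist
final_dist = final_dist.scatter_add(1, src_ids, (1.0 - p_gen) * copy_dist)

next_token = final_dist.argmax(dim=-1)         # greedy pick, for the sketch only
print(next_token)
```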
{ "cite_N": [ "@cite_8" ], "mid": [ "2130942839" ], "abstract": [ "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier." ] }
1908.02239
2964358797
A surge in artificial intelligence and autonomous technologies have increased the demand toward enhanced edge-processing capabilities. Computational complexity and size of state-of-the-art Deep Neural Networks (DNNs) are rising exponentially with diverse network models and larger datasets. This growth limits the performance scaling and energy-efficiency of both distributed and embedded inference platforms. Embedded designs at the edge are constrained by energy and speed limitations of available processor substrates and processor to memory communication required to fetch the model coefficients. While many hardware accelerator and network deployment frameworks have been in development, a framework is needed to allow the variety of existing architectures, and those in development, to be expressed in critical parts of the flow that perform various optimization steps. Moreover, premature architecture-blind network selection and optimization diminish the effectiveness of schedule optimizations and hardware-specific mappings. In this paper, we address these issues by creating a cross-layer software-hardware design framework that encompasses network training and model compression that is aware of and tuned to the underlying hardware architecture. This approach leverages the available degrees of DNN structure and sparsity to create a converged network that can be partitioned and efficiently scheduled on the target hardware platform, minimizing data movement, and improving the overall throughput and energy. To further streamline the design, we leverage the high-level, flexible SoC generator platform based on RISC-V ROCC framework. This integration allows seamless extensions of the RISC-V instruction set and Chisel-based rapid generator design. Utilizing this approach, we implemented a silicon prototype in a 16 nm TSMC process node achieving record processing efficiency of up to 18 TOPS W.
The concept of pruning neural networks and exploiting the resulting sparsity has been explored lately, either on general-purpose processors @cite_29 @cite_12 @cite_48 @cite_49 or on dedicated accelerators. Both static pruning, in which the layer weights are compressed, and dynamic pruning, with zero-detection of the input activation values, have been explored. In both approaches, the unstructured sparse matrix resulting from pruning the weights limits the speedup and energy saving achievable from the applied compression technique, due to the random memory accesses that are required. Although Scalpel @cite_48 takes the underlying hardware platform into account, it only achieves on average a 1.25x speedup on GPU with the cuSPARSE library, while structured pruning achieves 4x on the same platform @cite_34 @cite_39 . On the other hand, considering customized ASIC designs, EIE @cite_36 achieves a 5.12x speedup with respect to the GPU, whereas the APU design presented here reaches up to an 80x speedup for a typical fully connected layer.
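As a concrete illustration of the static-pruning idea discussed above (a generic sketch, not the APU design or Scalpel), the snippet prunes a dense weight matrix by magnitude and stores the survivors in CSR form, the kind of unstructured sparse format whose irregular memory accesses limit the realizable speedup on general-purpose hardware.

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)

# Dense weights of a toy fully-connected layer and an input activation vector.
W = rng.standard_normal((512, 1024)).astype(np.float32)
x = rng.standard_normal(1024).astype(np.float32)

# Static magnitude pruning: zero out the 90% smallest-magnitude weights.
sparsity = 0.9
threshold = np.quantile(np.abs(W), sparsity)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0).astype(np.float32)

# Unstructured sparse storage (CSR): only ~10% of the values remain, but the
# column indices are irregular, which is what hurts SIMD/GPU efficiency.
W_csr = csr_matrix(W_pruned)
print("stored nonzeros:", W_csr.nnz, "of", W.size)

y_dense = W @ x        # dense matvec on the original weights
y_sparse = W_csr @ x   # sparse matvec on the pruned weights
print("max abs diff vs dense pruned matvec:", np.max(np.abs(W_pruned @ x - y_sparse)))
```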
{ "cite_N": [ "@cite_36", "@cite_48", "@cite_29", "@cite_39", "@cite_49", "@cite_34", "@cite_12" ], "mid": [ "2285660444", "2657126969", "2767785892", "2806364818", "2119144962", "2773134966", "2964080840" ], "abstract": [ "State-of-the-art deep neural networks (DNNs) have hundreds of millions of connections and are both computationally and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources and power budgets. While custom hardware helps the computation, fetching weights from DRAM is two orders of magnitude more expensive than ALU operations, and dominates the required power. Previously proposed 'Deep Compression' makes it possible to fit large DNNs (AlexNet and VGGNet) fully in on-chip SRAM. This compression is achieved by pruning the redundant connections and having multiple connections share the same weight. We propose an energy efficient inference engine (EIE) that performs inference on this compressed network model and accelerates the resulting sparse matrix-vector multiplication with weight sharing. Going from DRAM to SRAM gives EIE 120× energy saving; Exploiting sparsity saves 10×; Weight sharing gives 8×; Skipping zero activations from ReLU saves another 3×. Evaluated on nine DNN benchmarks, EIE is 189× and 13× faster when compared to CPU and GPU implementations of the same DNN without compression. EIE has a processing power of 102 GOPS working directly on a compressed network, corresponding to 3 TOPS on an uncompressed network, and processes FC layers of AlexNet at 1.88×104 frames sec with a power dissipation of only 600mW. It is 24,000× and 3,400× more energy efficient than a CPU and GPU respectively. Compared with DaDianNao, EIE has 2.9×, 19× and 3× better throughput, energy efficiency and area efficiency.", "As the size of Deep Neural Networks (DNNs) continues to grow to increase accuracy and solve more complex problems, their energy footprint also scales. Weight pruning reduces DNN model size and the computation by removing redundant weights. However, we implemented weight pruning for several popular networks on a variety of hardware platforms and observed surprising results. For many networks, the network sparsity caused by weight pruning will actually hurt the overall performance despite large reductions in the model size and required multiply-accumulate operations. Also, encoding the sparse format of pruned networks incurs additional storage space overhead. To overcome these challenges, we propose Scalpel that customizes DNN pruning to the underlying hardware by matching the pruned network structure to the data-parallel hardware organization. Scalpel consists of two techniques: SIMD-aware weight pruning and node pruning. For low-parallelism hardware (e.g., microcontroller), SIMD-aware weight pruning maintains weights in aligned fixed-size groups to fully utilize the SIMD units. For high-parallelism hardware (e.g., GPU), node pruning removes redundant nodes, not redundant weights, thereby reducing computation without sacrificing the dense matrix format. For hardware with moderate parallelism (e.g., desktop CPU), SIMD-aware weight pruning and node pruning are synergistically applied together. Across the microcontroller, CPU and GPU, Scalpel achieves mean speedups of 3.54x, 2.61x, and 1.25x while reducing the model sizes by 88 , 82 , and 53 . 
In comparison, traditional weight pruning achieves mean speedups of 1.90x, 1.06x, 0.41x across the three platforms.", "Recurrent Neural Networks (RNNs) are used in state-of-the-art models in domains such as speech recognition, machine translation, and language modelling. Sparsity is a technique to reduce compute and memory requirements of deep learning models. Sparse RNNs are easier to deploy on devices and high-end server processors. Even though sparse operations need less compute and memory relative to their dense counterparts, the speed-up observed by using sparse operations is less than expected on different hardware platforms. In order to address this issue, we investigate two different approaches to induce block sparsity in RNNs: pruning blocks of weights in a layer and using group lasso regularization with pruning to create blocks of weights with zeros. Using these techniques, we can create block-sparse RNNs with sparsity ranging from 80 to 90 with a small loss in accuracy. This technique allows us to reduce the model size by roughly 10x. Additionally, we can prune a larger dense network to recover this loss in accuracy while maintaining high block sparsity and reducing the overall parameter count. Our technique works with a variety of block sizes up to 32x32. Block-sparse RNNs eliminate overheads related to data storage and irregular memory accesses while increasing hardware efficiency compared to unstructured sparsity.", "Deep neural networks (DNNs) have become the state-of-the-art technique for machine learning tasks in various applications. However, due to their size and the computational complexity, large DNNs are not readily deployable on edge devices in real-time. To manage complexity and accelerate computation, network compression techniques based on pruning and quantization have been proposed and shown to be effective in reducing network size. However, such network compression can result in irregular matrix structures that are mismatched with modern hardware-accelerated platforms, such as graphics processing units (GPUs) designed to perform the DNN matrix multiplications in a structured (block-based) way. We propose MPDCompress, a DNN compression algorithm based on matrix permutation decomposition via random mask generation. In-training application of the masks molds the synaptic weight connection matrix to a sub-graph separation format. Aided by the random permutations, a hardware-desirable block matrix is generated, allowing for a more efficient implementation and compression of the network. To show versatility, we empirically verify MPDCompress on several network models, compression rates, and image datasets. On the LeNet 300-100 model (MNIST dataset), Deep MNIST, and CIFAR10, we achieve 10 X network compression with less than 1 accuracy loss compared to non-compressed accuracy performance. On AlexNet for the full ImageNet ILSVRC-2012 dataset, we achieve 8 X network compression with less than 1 accuracy loss, with top-5 and top-1 accuracies of 79.6 and 56.4 , respectively. Finally, we observe that the algorithm can offer inference speedups across various hardware platforms, with 4 X faster operation achieved on several mobile GPUs.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. 
To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.", "Deep Neural Networks (DNNs) are the key to the state-of-the-art machine vision, sensor fusion and audio video signal processing. Unfortunately, their computation complexity and tight resource constraints on the Edge make them hard to leverage on mobile, embedded and IoT devices. Due to great diversity of Edge devices, DNN designers have to take into account the hardware platform and application requirements during network training. In this work we introduce pruning via matrix pivoting as a way to improve network pruning by compromising between the design flexibility of architecture-oblivious and performance efficiency of architecture-aware pruning, the two dominant techniques for obtaining resource-efficient DNNs. We also describe local and global network optimization techniques for efficient implementation of the resulting pruned networks. In combination, the proposed pruning and implementation result in close to linear speed up with the reduction of network coefficients during pruning.", "Deep Neural Networks (DNNs) have emerged as the method of choice for solving a wide range of machine learning tasks. The enormous computational demand posed by DNNs is a key challenge for computing system designers and has most commonly been addressed through the design of DNN accelerators. However, these specialized accelerators utilize large quantities of multiply-accumulate units and on-chip memory and are prohibitive in area and cost constrained systems such as wearable devices and IoT sensors. In this work, we take a complementary approach and improve the performance of DNNs on general-purpose processor (GPP) cores. We do so by exploiting a key attribute of DNNs, viz. sparsity or the prevalence of zero values. We propose Sparsity-aware Core Extensions (SparCE) - a set of low-overhead micro-architectural and ISA extensions that dynamically detect whether an operand (e.g., the result of a load instruction) is zero and subsequently skip a set of future instructions that use it. To maximize performance benefits, SparCE ensures that the instructions to be skipped are prevented from even being fetched, as squashing instructions comes with a penalty (e.g., a pipeline stall). 
SparCE consists of 2 key micro-architectural enhancements. First, a Sparsity Register File (SpRF) is utilized to track registers that are zero. Next, a Sparsity-Aware Skip Address (SASA) Table is used to indicate instruction sequences that can be skipped, and to specify conditions on SpRF registers that trigger instruction skipping. When an instruction is fetched, SparCE dynamically pre-identifies whether the following instruction(s) can be skipped, and if so appropriately modifies the program counter, thereby skipping the redundant instructions and improving performance. We model SparCE using the gem5 architectural simulator, and evaluate our approach on 6 state-of-the-art image-recognition DNNs in the context of both training and inference using the Caffe deep learning framework. On a scalar microprocessor, SparCE achieves 1.11×-1.96× speedups across both convolution and fully-connected layers that exhibit 10-90 percent sparsity. These speedups translate to 19-31 percent reduction in execution time at the overall application-level. We also evaluate SparCE on a 4-way SIMD ARMv8 processor using the OpenBLAS library, and demonstrate that SparCE achieves 8-15 percent reduction in the application-level execution time." ] }
1908.01950
2966170251
The importance of wild video based image set recognition is becoming monotonically increasing. However, the contents of these collected videos are often complicated, and how to efficiently perform set modeling and feature extraction is a big challenge for set-based classification algorithms. In recent years, some proposed image set classification methods have made a considerable advance by modeling the original image set with covariance matrix, linear subspace, or Gaussian distribution. As a matter of fact, most of them just adopt a single geometric model to describe each given image set, which may lose some other useful information for classification. To tackle this problem, we propose a novel algorithm to model each image set from a multi-geometric perspective. Specifically, the covariance matrix, linear subspace, and Gaussian distribution are applied for set representation simultaneously. In order to fuse these multiple heterogeneous Riemannian manifoldvalued features, the well-equipped Riemannian kernel functions are first utilized to map them into high dimensional Hilbert spaces. Then, a multi-kernel metric learning framework is devised to embed the learned hybrid kernels into a lower dimensional common subspace for classification. We conduct experiments on four widely used datasets corresponding to four different classification tasks: video-based face recognition, set-based object categorization, video-based emotion recognition, and dynamic scene classification, to evaluate the classification performance of the proposed algorithm. Extensive experimental results justify its superiority over the state-of-the-art.
In image set classification, the covariance matrix, the linear subspace, and the Gaussian distribution are three commonly used Riemannian manifold-valued descriptors for image set description. The advantages of the covariance matrix are its simplicity and its flexibility in capturing the variations within the set @cite_1 @cite_50 @cite_24 , while the popularity of the linear subspace stems both from its lower computational cost and from its ability to accommodate the effects of various intra-set variations @cite_37 @cite_19 . In comparison, the strength of the Gaussian distribution is that it can describe the variations of the set data by estimating their first-order and second-order statistics simultaneously @cite_44 @cite_3 . The increasing attention paid to image set classification problems based on these three descriptors manifests in three main factors, which are presented as follows.
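To make the three set models concrete, the following sketch (illustrative only, with assumed feature dimensions) computes a covariance matrix, an orthonormal linear-subspace basis, and Gaussian parameters for a toy image set, together with a Log-Euclidean distance between two covariance descriptors of the kind used to build Riemannian kernels.

```python
import numpy as np

def set_descriptors(X, subspace_dim=5, eps=1e-6):
    """X: (n_frames, d) matrix of vectorized frames from one image set."""
    mu = X.mean(axis=0)                                      # Gaussian mean
    C = np.cov(X, rowvar=False) + eps * np.eye(X.shape[1])   # SPD covariance
    # Linear subspace: leading right singular vectors of the centered data.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    U = Vt[:subspace_dim].T                                  # (d, k) orthonormal basis
    return mu, C, U

def spd_log(C):
    """Matrix logarithm of a symmetric positive-definite matrix (eigendecomposition)."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def log_euclidean_dist(C1, C2):
    """Log-Euclidean distance, the metric underlying common SPD kernels."""
    return np.linalg.norm(spd_log(C1) - spd_log(C2), ord='fro')

rng = np.random.default_rng(0)
X1 = rng.standard_normal((40, 20))   # toy image set 1: 40 frames, 20-dim features
X2 = rng.standard_normal((60, 20))   # toy image set 2

mu1, C1, U1 = set_descriptors(X1)
mu2, C2, U2 = set_descriptors(X2)
print("Log-Euclidean distance between covariance descriptors:",
      log_euclidean_dist(C1, C2))
```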
{ "cite_N": [ "@cite_37", "@cite_1", "@cite_3", "@cite_24", "@cite_19", "@cite_44", "@cite_50" ], "mid": [ "2126017757", "2144093206", "1928812244", "", "1922045146", "2155608052", "2962772276" ], "abstract": [ "In this paper we propose a discriminant learning framework for problems in which data consist of linear subspaces instead of vectors. By treating subspaces as basic elements, we can make learning algorithms adapt naturally to the problems with linear invariant structures. We propose a unifying view on the subspace-based learning method by formulating the problems on the Grassmann manifold, which is the set of fixed-dimensional linear subspaces of a Euclidean space. Previous methods on the problem typically adopt an inconsistent strategy: feature extraction is performed in the Euclidean space while non-Euclidean distances are used. In our approach, we treat each sub-space as a point in the Grassmann space, and perform feature extraction and classification in the same space. We show feasibility of the approach by using the Grassmann kernel functions such as the Projection kernel and the Binet-Cauchy kernel. Experiments with real image databases show that the proposed method performs well compared with state-of-the-art algorithms.", "We propose a novel discriminative learning approach to image set classification by modeling the image set with its natural second-order statistic, i.e. covariance matrix. Since nonsingular covariance matrices, a.k.a. symmetric positive definite (SPD) matrices, lie on a Riemannian manifold, classical learning algorithms cannot be directly utilized to classify points on the manifold. By exploring an efficient metric for the SPD matrices, i.e., Log-Euclidean Distance (LED), we derive a kernel function that explicitly maps the covariance matrix from the Riemannian manifold to a Euclidean space. With this explicit mapping, any learning method devoted to vector space can be exploited in either its linear or kernel formulation. Linear Discriminant Analysis (LDA) and Partial Least Squares (PLS) are considered in this paper for their feasibility for our specific problem. We further investigate the conventional linear subspace based set modeling technique and cast it in a unified framework with our covariance matrix based modeling. The proposed method is evaluated on two tasks: face recognition and object categorization. Extensive experimental results show not only the superiority of our method over state-of-the-art ones in both accuracy and efficiency, but also its stability to two real challenges: noisy set data and varying set size.", "This paper presents a method named Discriminant Analysis on Riemannian manifold of Gaussian distributions (DARG) to solve the problem of face recognition with image sets. Our goal is to capture the underlying data distribution in each set and thus facilitate more robust classification. To this end, we represent image set as Gaussian Mixture Model (GMM) comprising a number of Gaussian components with prior probabilities and seek to discriminate Gaussian components from different classes. In the light of information geometry, the Gaussians lie on a specific Riemannian manifold. To encode such Riemannian geometry properly, we investigate several distances between Gaussians and further derive a series of provably positive definite probabilistic kernels. Through these kernels, a weighted Kernel Discriminant Analysis is finally devised which treats the Gaussians in GMMs as samples and their prior probabilities as sample weights. 
The proposed method is evaluated by face identification and verification tasks on four most challenging and largest databases, YouTube Celebrities, COX, YouTube Face DB and Point-and-Shoot Challenge, to demonstrate its superiority over the state-of-the-art.", "", "In video based face recognition, great success has been made by representing videos as linear subspaces, which typically lie in a special type of non-Euclidean space known as Grassmann manifold. To leverage the kernel-based methods developed for Euclidean space, several recent methods have been proposed to embed the Grassmann manifold into a high dimensional Hilbert space by exploiting the well established Project Metric, which can approximate the Riemannian geometry of Grassmann manifold. Nevertheless, they inevitably introduce the drawbacks from traditional kernel-based methods such as implicit map and high computational cost to the Grassmann manifold. To overcome such limitations, we propose a novel method to learn the Projection Metric directly on Grassmann manifold rather than in Hilbert space. From the perspective of manifold learning, our method can be regarded as performing a geometry-aware dimensionality reduction from the original Grassmann manifold to a lower-dimensional, more discriminative Grassmann manifold where more favorable classification can be achieved. Experiments on several real-world video face datasets demonstrate that the proposed method yields competitive performance compared with the state-of-the-art algorithms.", "Face recognition on large-scale video in the wild is becoming increasingly important due to the ubiquity of video data captured by surveillance cameras, handheld devices, Internet uploads, and other sources. By treating each video as one image set, set-based methods recently have made great success in the field of video-based face recognition. In the wild world, videos often contain extremely complex data variations and thus pose a big challenge of set modeling for set-based methods. In this paper, we propose a novel Hybrid Euclidean-and-Riemannian Metric Learning (HERML) method to fuse multiple statistics of image set. Specifically, we represent each image set simultaneously by mean, covariance matrix and Gaussian distribution, which generally complement each other in the aspect of set modeling. However, it is not trivial to fuse them since mean, covariance matrix and Gaussian model typically lie in multiple heterogeneous spaces equipped with Euclidean or Riemannian metric. Therefore, we first implicitly map the original statistics into high dimensional Hilbert spaces by exploiting Euclidean and Riemannian kernels. With a LogDet divergence based objective function, the hybrid kernels are then fused by our hybrid metric learning framework, which can efficiently perform the fusing procedure on large-scale videos. The proposed method is evaluated on four public and challenging large-scale video face datasets. Extensive experimental results demonstrate that our method has a clear superiority over the state-of-the-art set-based methods for large-scale video-based face recognition. 
HighlightsRepresent image set by mean, covariance and Gaussian for discriminant information.Heterogeneous Euclidean and Riemannian kernels are exploited and fused clearly.Clear superiority over state-of-the-art set-based methods is achieved in testing.", "Representing images and videos with Symmetric Positive Definite (SPD) matrices, and considering the Riemannian geometry of the resulting space, has been shown to yield high discriminative power in many visual recognition tasks. Unfortunately, computation on the Riemannian manifold of SPD matrices –especially of high-dimensional ones– comes at a high cost that limits the applicability of existing techniques. In this paper, we introduce algorithms able to handle high-dimensional SPD matrices by constructing a lower-dimensional SPD manifold. To this end, we propose to model the mapping from the high-dimensional SPD manifold to the low-dimensional one with an orthonormal projection. This lets us formulate dimensionality reduction as the problem of finding a projection that yields a low-dimensional manifold either with maximum discriminative power in the supervised scenario, or with maximum variance of the data in the unsupervised one. We show that learning can be expressed as an optimization problem on a Grassmann manifold and discuss fast solutions for special cases. Our evaluation on several classification tasks evidences that our approach leads to a significant accuracy gain over state-of-the-art methods." ] }
1908.01950
2966170251
The importance of wild video based image set recognition is becoming monotonically increasing. However, the contents of these collected videos are often complicated, and how to efficiently perform set modeling and feature extraction is a big challenge for set-based classification algorithms. In recent years, some proposed image set classification methods have made a considerable advance by modeling the original image set with covariance matrix, linear subspace, or Gaussian distribution. As a matter of fact, most of them just adopt a single geometric model to describe each given image set, which may lose some other useful information for classification. To tackle this problem, we propose a novel algorithm to model each image set from a multi-geometric perspective. Specifically, the covariance matrix, linear subspace, and Gaussian distribution are applied for set representation simultaneously. In order to fuse these multiple heterogeneous Riemannian manifoldvalued features, the well-equipped Riemannian kernel functions are first utilized to map them into high dimensional Hilbert spaces. Then, a multi-kernel metric learning framework is devised to embed the learned hybrid kernels into a lower dimensional common subspace for classification. We conduct experiments on four widely used datasets corresponding to four different classification tasks: video-based face recognition, set-based object categorization, video-based emotion recognition, and dynamic scene classification, to evaluate the classification performance of the proposed algorithm. Extensive experimental results justify its superiority over the state-of-the-art.
Manifold Dimensionality Reduction Based Image Set Classification: To circumvent the above problem, some algorithms that jointly perform linear mapping and metric learning directly on the original Riemannian manifold have been suggested recently @cite_50 @cite_19 @cite_53 , so that a discriminative lower-dimensional manifold can be obtained. Harandi et al. @cite_50 produce a lower-dimensional SPD manifold with an orthogonal mapping, obtained by devising a discriminative metric learning framework with respect to the original high-dimensional data. To reduce the computational complexity, Huang et al. @cite_53 put forward a novel Log-Euclidean metric learning algorithm that forms a desirable SPD manifold by directly embedding the tangent space of the original SPD manifold into a lower-dimensional one. Similarly, Huang et al. @cite_19 try to learn lower-dimensional and more discriminative Grassmannian-valued feature representations for the original high-dimensional Grassmann manifold under a devised projection metric learning framework. Thanks to the advantage of fully considering the manifold geometry, the above algorithms show good classification performance. Yet they also share an inherent design limitation: the mapping, although defined and learned on the non-linear Riemannian geometry, is itself linear, which seems unreasonable.
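The geometry-aware dimensionality reduction described above learns an orthonormal projection W that maps a high-dimensional SPD matrix C to a lower-dimensional SPD matrix W^T C W. The snippet below only illustrates that mapping with a random orthonormal W obtained from a QR decomposition; the cited methods learn W discriminatively, which is not reproduced here, and the dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 400, 30                      # original and reduced SPD dimensions (assumed)

# A toy high-dimensional SPD matrix (e.g. a region covariance descriptor).
A = rng.standard_normal((d, d))
C = A @ A.T + 1e-3 * np.eye(d)

# Orthonormal projection W (d x m). Here it is random; dimensionality-reduction
# methods on the SPD manifold learn it by optimizing a discriminative objective.
W, _ = np.linalg.qr(rng.standard_normal((d, m)))

C_low = W.T @ C @ W                 # still symmetric positive definite, but m x m
eigvals = np.linalg.eigvalsh(C_low)
print("reduced size:", C_low.shape, "min eigenvalue:", eigvals.min())
```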
{ "cite_N": [ "@cite_19", "@cite_53", "@cite_50" ], "mid": [ "1922045146", "", "2962772276" ], "abstract": [ "In video based face recognition, great success has been made by representing videos as linear subspaces, which typically lie in a special type of non-Euclidean space known as Grassmann manifold. To leverage the kernel-based methods developed for Euclidean space, several recent methods have been proposed to embed the Grassmann manifold into a high dimensional Hilbert space by exploiting the well established Project Metric, which can approximate the Riemannian geometry of Grassmann manifold. Nevertheless, they inevitably introduce the drawbacks from traditional kernel-based methods such as implicit map and high computational cost to the Grassmann manifold. To overcome such limitations, we propose a novel method to learn the Projection Metric directly on Grassmann manifold rather than in Hilbert space. From the perspective of manifold learning, our method can be regarded as performing a geometry-aware dimensionality reduction from the original Grassmann manifold to a lower-dimensional, more discriminative Grassmann manifold where more favorable classification can be achieved. Experiments on several real-world video face datasets demonstrate that the proposed method yields competitive performance compared with the state-of-the-art algorithms.", "", "Representing images and videos with Symmetric Positive Definite (SPD) matrices, and considering the Riemannian geometry of the resulting space, has been shown to yield high discriminative power in many visual recognition tasks. Unfortunately, computation on the Riemannian manifold of SPD matrices –especially of high-dimensional ones– comes at a high cost that limits the applicability of existing techniques. In this paper, we introduce algorithms able to handle high-dimensional SPD matrices by constructing a lower-dimensional SPD manifold. To this end, we propose to model the mapping from the high-dimensional SPD manifold to the low-dimensional one with an orthonormal projection. This lets us formulate dimensionality reduction as the problem of finding a projection that yields a low-dimensional manifold either with maximum discriminative power in the supervised scenario, or with maximum variance of the data in the unsupervised one. We show that learning can be expressed as an optimization problem on a Grassmann manifold and discuss fast solutions for special cases. Our evaluation on several classification tasks evidences that our approach leads to a significant accuracy gain over state-of-the-art methods." ] }
1908.01841
2966573247
Neural dialogue models, despite their successes, still suffer from lack of relevance, diversity, and in many cases coherence in their generated responses. These issues have been attributed to reasons including (1) short-range model architectures that capture limited temporal dependencies, (2) limitations of the maximum likelihood training objective, (3) the concave entropy profile of dialogue datasets resulting into short and generic responses, and (4) out-of-vocabulary problem leading to generation of a large number of @math tokens. Autoregressive transformer models such as GPT-2, although trained with the maximum likelihood objective, do not suffer from the out-of-vocabulary problem and have demonstrated an excellent ability to capture long-range structures in language modeling tasks. In this paper, we examine the use of autoregressive transformer models for multi-turn dialogue response generation. In our experiments, we employ small and medium GPT-2 models (with publicly available pretrained language model parameters) on the open-domain Movie Triples dataset and the closed-domain Ubuntu Dialogue dataset. The models (with and without pretraining) achieve significant improvements over the baselines for multi-turn dialogue response generation. They also produce state-of-the-art performance on the two datasets based on several metrics, including BLEU, ROGUE, and distinct n-gram.
There has been an ongoing effort to drastically improve the performance of dialogue response generation models, especially in multi-turn scenarios. In particular, effort has been made to improve the performance of RNN-based models by exploring alternative frameworks such as variational auto-encoding @cite_15 and generative adversarial networks @cite_35 that simultaneously encourage response relevance and diversity. Despite the improvements provided by these models, the quality of model-generated responses is still much below the human level. Recent work on autoregressive transformer-based language models @cite_9 @cite_34 @cite_2 @cite_21 has, however, shown an impressive ability to exploit long temporal dependencies in textual data. In this work, we investigate the effectiveness of the long temporal memory capability of autoregressive transformer-based models for multi-turn dialogue modeling. For our experiments, we adopted the GPT-2 autoregressive transformer architecture @cite_2 due to its large sequence length (1024). To the best of our knowledge, there has been no previous work on using autoregressive transformer-based models for dialogue modeling.
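A minimal sketch of the kind of setup investigated here, using the publicly available pretrained GPT-2 weights through the HuggingFace transformers library; concatenating dialogue turns with the end-of-text token as a separator is an assumption of this sketch, not necessarily the paper's exact preprocessing, and no fine-tuning on dialogue data is shown.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")    # small GPT-2
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Multi-turn context flattened into a single sequence. Using the EOS token as
# a turn separator is an assumption made for this illustration.
turns = [
    "i cannot get the wireless driver to load .",
    "which kernel version are you running ?",
    "3.13 , installed from the default repositories .",
]
context = tokenizer.eos_token.join(turns) + tokenizer.eos_token

input_ids = tokenizer(context, return_tensors="pt").input_ids
output_ids = model.generate(
    input_ids,
    max_length=input_ids.shape[1] + 40,   # generate up to 40 new tokens
    do_sample=True, top_k=50, top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
response = tokenizer.decode(output_ids[0, input_ids.shape[1]:],
                            skip_special_tokens=True)
print(response)
```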
{ "cite_N": [ "@cite_35", "@cite_9", "@cite_21", "@cite_2", "@cite_15", "@cite_34" ], "mid": [ "2806935606", "", "2950813464", "", "2418993857", "2911109671" ], "abstract": [ "We propose an adversarial learning approach for generating multi-turn dialogue responses. Our proposed framework, hredGAN, is based on conditional generative adversarial networks (GANs). The GAN's generator is a modified hierarchical recurrent encoder-decoder network (HRED) and the discriminator is a word-level bidirectional RNN that shares context and word embeddings with the generator. During inference, noise samples conditioned on the dialogue history are used to perturb the generator's latent space to generate several possible responses. The final response is the one ranked best by the discriminator. The hredGAN shows improved performance over existing methods: (1) it generalizes better than networks trained using only the log-likelihood criterion, and (2) it generates longer, more informative and more diverse responses with high utterance and topic relevance even with limited training data. This improvement is demonstrated on the Movie triples and Ubuntu dialogue datasets using both automatic and human evaluations.", "", "With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, XLNet outperforms BERT on 20 tasks, often by a large margin, and achieves state-of-the-art results on 18 tasks including question answering, natural language inference, sentiment analysis, and document ranking.", "", "We introduce the multiresolution recurrent neural network, which extends the sequence-to-sequence framework to model natural language generation as two parallel discrete stochastic processes: a sequence of high-level coarse tokens, and a sequence of natural language tokens. There are many ways to estimate or learn the high-level coarse tokens, but we argue that a simple extraction procedure is sufficient to capture a wealth of high-level discourse semantics. Such procedure allows training the multiresolution recurrent neural network by maximizing the exact joint log-likelihood over both sequences. In contrast to the standard log- likelihood objective w.r.t. natural language tokens (word perplexity), optimizing the joint log-likelihood biases the model towards modeling high-level abstractions. We apply the proposed model to the task of dialogue response generation in two challenging domains: the Ubuntu technical support domain, and Twitter conversations. On Ubuntu, the model outperforms competing approaches by a substantial margin, achieving state-of-the-art results according to both automatic evaluation metrics and a human evaluation study. On Twitter, the model appears to generate more relevant and on-topic responses according to automatic evaluation metrics. 
Finally, our experiments demonstrate that the proposed model is more adept at overcoming the sparsity of natural language and is better able to capture long-term structure.", "Transformers have a potential of learning longer-term dependency, but are limited by a fixed-length context in the setting of language modeling. We propose a novel neural architecture Transformer-XL that enables learning dependency beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves the context fragmentation problem. As a result, Transformer-XL learns dependency that is 80 longer than RNNs and 450 longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+ times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results of bpc perplexity to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably coherent, novel text articles with thousands of tokens. Our code, pretrained models, and hyperparameters are available in both Tensorflow and PyTorch." ] }
1908.01714
2966789542
In their seminal work on systemic risk in financial markets, Eisenberg and Noe proposed and studied a model with @math firms embedded into a network of debt relations. We analyze this model from a game-theoretic point of view. Every firm is a rational agent in a directed graph that has an incentive to allocate payments in order to clear as much of its debt as possible. Each edge is weighted and describes a liability between the firms. We consider several variants of the game that differ in the permissible payment strategies. We study the existence and computational complexity of pure Nash and strong equilibria, and we provide bounds on the (strong) prices of anarchy and stability for a natural notion of social welfare. Our results highlight the power of financial regulation -- if payments of insolvent firms can be centrally assigned, a socially optimal strong equilibrium can be found in polynomial time. In contrast, worst-case strong equilibria can be a factor of @math away from optimal, and, in general, computing a best response is an NP-hard problem. For less permissible sets of strategies, we show that pure equilibria might not exist, and deciding their existence as well as computing them if they exist constitute NP-hard problems.
To our knowledge, strategic aspects are currently reflected only in models of network formation @cite_10 @cite_9 . A three-period economy is assumed in which firms can invest in risky assets. To do so, they strategically decide to borrow funds from outside investors as well as from other firms. Thereby a network of financial cross-holdings is endogenously formed as each firm maximizes its expected profit. The results show that risk-seeking firms tend to over-connect, leading to stronger contagion and systemic risk compared to the socially optimal risk-sharing allocation. Note that, in this case, strategic aspects only play a role in the formation of inter-bank relations, whereas the clearing mechanism is assumed to follow the same process as in @cite_3 .
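For reference, the clearing mechanism of @cite_3 that the network-formation models above build on can be computed by a simple fixed-point iteration: each firm pays the minimum of its total obligations and what it can cover from external assets plus incoming payments. Below is a small self-contained sketch with made-up liabilities and external assets.

```python
import numpy as np

# L[i, j] = nominal liability of firm i towards firm j (toy numbers).
L = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0],
              [1.0, 1.0, 0.0]])
e = np.array([0.5, 0.5, 2.0])        # external assets of each firm

p_bar = L.sum(axis=1)                # total obligations of each firm
with np.errstate(invalid="ignore", divide="ignore"):
    Pi = np.where(p_bar[:, None] > 0, L / p_bar[:, None], 0.0)  # relative liabilities

# Fixed-point iteration for the clearing payment vector:
#   p_i = min( p_bar_i , e_i + sum_j Pi[j, i] * p_j )
p = p_bar.copy()
for _ in range(1000):
    p_new = np.minimum(p_bar, e + Pi.T @ p)
    if np.max(np.abs(p_new - p)) < 1e-12:
        break
    p = p_new

print("clearing payments:", p)
print("defaulting firms:", np.where(p < p_bar - 1e-9)[0])
```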
{ "cite_N": [ "@cite_9", "@cite_10", "@cite_3" ], "mid": [ "2160164079", "100186914", "2162337502" ], "abstract": [ "We provide a framework to study the formation of financial networks and investigate the interplay between banks' lending incentives and the emergence of systemic risk. We show that under natural contracting assumptions, banks fail to internalize the implications of their lending decisions for the banks with whom they are not directly contracting, thus establishing the presence of a financial network externality in the process of network formation. We then illustrate how the presence of this externality can function as a channel for the emergence of systemic risk. In particular, we show that (i) banks may \"overlend\" in equilibrium, creating channels over which idiosyncratic shocks can translate into systemic crises via financial contagion; and (ii) they may not spread their lending sufficiently among the set of potential borrowers, creating insufficiently connected financial networks that are excessively prone to contagious defaults. Finally, we show that banks' private incentives may lead to the formation of financial networks that are overly susceptible to systemic meltdowns with some small probability.", "I develop a model of the financial sector in which endogenous intermediation among debt financed banks generates excessive systemic risk. Financial institutions have incentives to capture intermediation spreads through strategic borrowing and lending decisions. By doing so, they tilt the division of surplus along an intermediation chain in their favor, while at the same time reducing aggregate surplus. I show that a core-periphery network -- few highly interconnected and many sparsely connected banks -- endogenously emerges in my model. The network is inefficient relative to a constrained efficient benchmark since banks who make risky investments \"overconnect\", exposing themselves to excessive counterparty risk, while banks who mainly provide funding end up with too few connections. The predictions of the model are consistent with empirical evidence in the literature.", "We consider default by firms that are part of a single clearing mechanism. The obligations of all firms within the system are determined simultaneously in a fashion consistent with the priority of debt claims and the limited liability of equity. We first show, via a fixed-point argument, that there always exists a \"clearing payment vector\" that clears the obligations of the members of the clearing system; under mild regularity conditions, this clearing vector is unique. Next, we develop an algorithm that both clears the financial system in a computationally efficient fashion and provides information on the systemic risk faced by the individual system firms. Finally, we produce qualitative comparative statics for financial systems. These comparative statics imply that, in contrast to single-firm results, even unsystematic, nondissipative shocks to the system will lower the total value of the system and may lower the value of the equity of some of the individual system firms." ] }
1908.01714
2966789542
In their seminal work on systemic risk in financial markets, Eisenberg and Noe proposed and studied a model with @math firms embedded into a network of debt relations. We analyze this model from a game-theoretic point of view. Every firm is a rational agent in a directed graph that has an incentive to allocate payments in order to clear as much of its debt as possible. Each edge is weighted and describes a liability between the firms. We consider several variants of the game that differ in the permissible payment strategies. We study the existence and computational complexity of pure Nash and strong equilibria, and we provide bounds on the (strong) prices of anarchy and stability for a natural notion of social welfare. Our results highlight the power of financial regulation -- if payments of insolvent firms can be centrally assigned, a socially optimal strong equilibrium can be found in polynomial time. In contrast, worst-case strong equilibria can be a factor of @math away from optimal, and, in general, computing a best response is an NP-hard problem. For less permissible sets of strategies, we show that pure equilibria might not exist, and deciding their existence as well as computing them if they exist constitute NP-hard problems.
On a more technical level, our game-theoretic approach is related to a number of existing game-theoretic models based on flows in networks. In cooperative game theory, there are several notions of flow games based on a directed flow network. Existing variants include games in which the edges are players @cite_2 @cite_8 @cite_22 @cite_0 @cite_7 @cite_20 , or in which each player owns a source-sink pair @cite_1 @cite_12 . The total value of a coalition @math is the profit from a maximum (multi-commodity) flow that can be routed through the network if only the players in @math are present. There is a rich set of results on structural characterizations and on the computability of solutions in the core, as well as of other solution concepts for cooperative games. In contrast to our work, these games are non-strategic. Instead, here we consider each player as a single node with a strategic decision about flow allocation.
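As a small illustration of the cooperative flow games referenced above (edges as players), the sketch below enumerates coalitions of edge players on a toy network and evaluates each coalition by the maximum s-t flow it can route, using networkx; the graph and capacities are made up for this example.

```python
from itertools import chain, combinations
import networkx as nx

# Toy flow network; each edge is a player owning its capacity.
edges = [("s", "a", 2.0), ("s", "b", 2.0), ("a", "t", 1.0),
         ("b", "t", 2.0), ("a", "b", 1.0)]

def coalition_value(coalition):
    """v(S): maximum s-t flow routable using only the edges owned by coalition S."""
    G = nx.DiGraph()
    G.add_nodes_from(["s", "t"])
    for (u, v, cap) in coalition:
        G.add_edge(u, v, capacity=cap)
    if not nx.has_path(G, "s", "t"):
        return 0.0
    value, _ = nx.maximum_flow(G, "s", "t")
    return value

def all_coalitions(players):
    return chain.from_iterable(combinations(players, r) for r in range(len(players) + 1))

for S in all_coalitions(edges):
    print([f"{u}->{v}" for (u, v, _) in S], "value:", coalition_value(S))
```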
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_8", "@cite_1", "@cite_0", "@cite_2", "@cite_12", "@cite_20" ], "mid": [ "2028847614", "", "2064699170", "2125537511", "", "2076900408", "2034600319", "2005690414" ], "abstract": [ "A class of characteristic function games arising from maximum flow problems is introduced and is shown to coincide with the class of totally balanced games. The proof relies on the max flow-min cut theorem of Ford and Fulkerson and on the observation that the class of totally balanced games is the span of the additive games with the minimum operation.", "", "A class of multiperson mathematical optimization problems is considered and is shown to generate cooperative games with nonempty cores. The class includes, but is not restricted to, numerous versions of network flow problems. It was shown by Owen that for games generated by linear programming optimization problems, optimal dual solutions correspond to points in the core. We identify a special class of network flow problems for which the converse is true, i.e., every point in the core corresponds to an optimal dual solution.", "If the Internet is the next great subject for Theoretical Computer Science to model and illuminate mathematically, then Game Theory, and Mathematical Economics more generally, are likely to prove useful tools. In this talk I survey some opportunities and challenges in this important frontier.", "", "A cooperative game in characteristic-function form is obtained by allowing a number of individuals to esercise partial control over the constraints of a (generally nonlinear) mathematical programming problem, either directly or through committee voting. Conditions are imposed on the functions defining the programming problem and the control system which suffice to make the game totally balanced. This assures a nonempty core and hence a stable allocation of the full value of the programming problem among the controlling palyers. In the linear case the core is closely related to the solutions of the dual problem. Applications are made to a variety of economic models, including the transferable utility trading economies of Shapley and Shubik and a multishipper one-commodity transshipment model with convex cost functions and concave revenue functions. Dropping the assumption of transferable utility leads to a class of controlled multiobjective or ‘Pareto programming’ problems, which again yield totally balanced games.", "In citepapa, Papadimitriou formalized the notion of routing stability in BGP as the following coalitional game theoretic problem: Given a network with a multicommodity flow satisfying node capacity and demand constraints, the payoff of a node is the total flow originated or terminated at it. A payoff allocation is in the core if and only if there is no subset of nodes that can increase their payoff by seceding from the network. We answer one of the open problems in citepapa by proving that for any network, the core is non-empty in both the transferable (where the nodes can compensate each other with side payments) and the non-transferable case. In the transferable case we show that such an allocation can be computed in polynomial time. We also generalize this result to the case where a strictly concave utility function is associated with each commodity.", "Preference aggregation is used in a variety of multiagent applications, and as a result, voting theory has become an important topic in multiagent system research. 
However, power indices (which reflect how much \"real power\" a voter has in a weighted voting system) have received relatively little attention, although they have long been studied in political science and economics. We consider a particular multiagent domain, a threshold network flow game. Agents control the edges of a graph; a coalition wins if it can send a flow that exceeds a given threshold from a source vertex to a target vertex. The relative power of each edge agent reflects its significance in enabling such a flow, and in real-world networks could be used, for example, to allocate resources for maintaining parts of the network. We examine the computational complexity of calculating two prominent power indices, the Banzhaf index and the Shapley-Shubik index, in this network flow domain. We also consider the complexity of calculating the core in this domain. The core can be used to allocate, in a stable manner, the gains of the coalition that is established. We show that calculating the Shapley-Shubik index in this network flow domain is NP-hard, and that calculating the Banzhaf index is #P-complete. Despite these negative results, we show that for some restricted network flow domains there exists a polynomial algorithm for calculating agents' Banzhaf power indices. We also show that computing the core in this game can be performed in polynomial time." ] }
1908.01623
2966715342
Temporal point processes are widely used for sequential data modeling. In this paper, we focus on the problem of modeling sequential event propagation in a graph, such as retweeting by social network users or news transmission between websites. Given a collection of event propagation sequences, conventional point process models consider only the event history, i.e., they embed the event history into a vector and ignore the latent graph structure. We propose a Graph Biased Temporal Point Process (GBTPP) that leverages structural information from graph representation learning, where the direct influence between nodes and the indirect influence from the event history are modeled separately. Moreover, the learned node embedding vector is also integrated into the embedded event history as side information. Experiments on a synthetic dataset and two real-world datasets show the efficacy of our model compared to conventional methods and the state-of-the-art.
First, conventional varying-order Markov models @cite_12 treat this problem as a discrete-time sequence prediction task: given the observed sequence of history states, the next event type is predicted as the most likely state the transition process will evolve into at the next step. An obvious limitation of the Markov model family is the assumption that state transitions proceed in unit time steps, so these models can neither capture the temporal dependency of continuous time nor predict the exact time of the next event. Moreover, Markov models cannot handle long-range dependencies on history events when the event sequence is long, because the size of the state space grows exponentially with the number of time steps considered in the model. It is worth mentioning that semi-Markov models @cite_3 can model continuous time intervals between two states to some extent, by assuming the intervals follow some simple distributions, but they still suffer from the state-space explosion problem when dealing with long time dependencies.
{ "cite_N": [ "@cite_3", "@cite_12" ], "mid": [ "1485657187", "2103960658" ], "abstract": [ "Preface. Part I: Extensions of Basic Models. 1. The Solidarity of Markov Renewal Processes R. Pyke. 2. A Generalization of Semi-Markov Processes M. Iosifescu. 3. Quasi-stationary Phenomena for Semi-Markov Processes M. Gyllenberg, D.S. Silvestrov. 4. Semi-Markov Random Walks V.S. Korolyuk. 5. Diffusion Approximation for Processes with Semi-Markov Switches V.V. Anisimov. 6. Approximations for Semi-Markov Single Ion Channel Models S.M. Pitts. Part II: Statistical Estimation. 7. Log-likelihood in Stochastic Processes G.G. Rousas, D. Bhattacharya. 8. Some Asymptotic Results and Exponential Approximation in Semi-Markov Models G.G. Roussas, D. Bhattacharya. 9. Markov Renewal Processes and Exponential Families V.T. Stefanov. 10. On Homogeneity of Two Semi-Markov Samples L. Afanasyeva, P. Radchenko. 11. Product-Type Estimator of Convolutions I. Gertsbakh, I. Spungin. 12. Failure Rate Estimation of Semi-Markov Systems B. Ouhbi, N. Limnios. 13. Estimation for Semi-Markov Manpower Models in a Stochastic Environment S. McClean, E. Montgomery. 14. Semi-Markov Models for Lifetime Data Analysis R. Perez-Ocon, et al Part III: Non-Homogeneous Models. 15. Continuous Time Non Homogeneous Semi-Markov Systems A.A. Papadopoulou, P.C.G. Vassiliou. 16. The Perturbed Non-Homogeneous Semi-Markov System P.C.G. Vassiliou, H. Tsakiridou. Part IV: Queueing Systems Theory. 17. Semi-Markov Queues with Heavy Tails S. Asmussen.18. MR Modelling of Poisson Traffic at Intersections Having Separate Turn Lanes R. Gideon, R. Pyke. Part V: Financial Models. 19. Stochastic Stability and Optimal Control in Insurance Mathematics A. Swishchuk. 20. Option Pricing with Semi-Markov Volatility J. Janssen, et al Part VI: Controlled Processes & Maintenance. 21. Applications of Semi-Markov Processes in Reliability and Maintenance M. Abdel-Hameed. 22. Controlled Queueing Systems with Recovery Functions T. Dohi, et al Part VII: Chromatography & Fluid Mechanics. 23. Continuous Semi-Markov Models for Chromatography B.P. Harlamov. 24. The Stress Tensor of the Closed Semi-Markov System. Energy and Entropy G.M. Tsaklidis. Index.", "This paper is concerned with algorithms for prediction of discrete sequences over a finite alphabet, using variable order Markov models. The class of such algorithms is large and in principle includes any lossless compression algorithm. We focus on six prominent prediction algorithms, including Context Tree Weighting (CTW), Prediction by Partial Match (PPM) and Probabilistic Suffix Trees (PSTs). We discuss the properties of these algorithms and compare their performance using real life sequences from three domains: proteins, English text and music pieces. The comparison is made with respect to prediction quality as measured by the average log-loss. We also compare classification algorithms based on these predictors with respect to a number of large protein classification tasks. Our results indicate that a \"decomposed\" CTW (a variant of the CTW algorithm) and PPM outperform all other algorithms in sequence prediction tasks. Somewhat surprisingly, a different algorithm, which is a modification of the Lempel-Ziv compression algorithm, significantly outperforms all algorithms on the protein classification problems." ] }
1908.01623
2966715342
Temporal point processes are widely used for sequential data modeling. In this paper, we focus on the problem of modeling sequential event propagation in a graph, such as retweeting by social network users or news transmission between websites. Given a collection of event propagation sequences, conventional point process models consider only the event history, i.e., they embed the event history into a vector and ignore the latent graph structure. We propose a Graph Biased Temporal Point Process (GBTPP) that leverages structural information from graph representation learning, where the direct influence between nodes and the indirect influence from the event history are modeled separately. Moreover, the learned node embedding vector is also integrated into the embedded event history as side information. Experiments on a synthetic dataset and two real-world datasets show the efficacy of our model compared to conventional methods and the state-of-the-art.
Second, temporal point processes with conditional intensity functions form a more general framework for modeling sequential event data. A Temporal Point Process (TPP) is a powerful tool for modeling event sequences with timestamps in continuous time. Early work dates back to the Hawkes process @cite_40 , which is well suited to self-exciting and mutually exciting processes such as earthquakes and their aftershocks @cite_15 @cite_26 . As an effective model for event sequences, TPPs have been widely used in various applications, including data mining tasks, e.g. social infectivity learning @cite_37 , conflict analysis @cite_7 , crime modeling @cite_31 , email network analytics @cite_39 and extremal behavior of stock prices @cite_23 , and event prediction tasks, e.g. failure prediction @cite_21 , sales outcome forecasting @cite_2 and literature citation prediction @cite_11 .
{ "cite_N": [ "@cite_37", "@cite_26", "@cite_31", "@cite_7", "@cite_21", "@cite_39", "@cite_40", "@cite_23", "@cite_2", "@cite_15", "@cite_11" ], "mid": [ "2002413290", "", "2168715086", "2120497903", "2090320383", "2336875440", "2069849731", "", "2295897189", "2064758233", "" ], "abstract": [ "In many applications in social network analysis, it is important to model the interactions and infer the influence between pairs of actors, leading to the problem of dyadic event modeling which has attracted increasing interests recently. In this paper we focus on the problem of dyadic event attribution, an important missing data problem in dyadic event modeling where one needs to infer the missing actor-pairs of a subset of dyadic events based on their observed timestamps. Existing works either use fixed model parameters and heuristic rules for event attribution, or assume the dyadic events across actor-pairs are independent. To address those shortcomings we propose a probabilistic model based on mixtures of Hawkes processes that simultaneously tackles event attribution and network parameter inference, taking into consideration the dependency among dyadic events that share at least one actor. We also investigate using additive models to incorporate regularization to avoid overfitting. Our experiments on both synthetic and real-world data sets on international armed conflicts suggest that the proposed new method is capable of significantly improve accuracy when compared with the state-of-the-art for dyadic event attribution.", "", "We discuss a mathematical framework based on a self-exciting point process aimed at analyzing temporal patterns in the series of interaction events between agents in a social network. We then develop a reconstruction model that allows one to predict the unknown participants in a portion of those events. Finally, we apply our results to the Los Angeles gang network.", "Modern conflicts are characterized by an ever increasing use of information and sensing technology, resulting in vast amounts of high resolution data. Modelling and prediction of conflict, however, remain challenging tasks due to the heterogeneous and dynamic nature of the data typically available. Here we propose the use of dynamic spatiotemporal modelling tools for the identification of complex underlying processes in conflict, such as diffusion, relocation, heterogeneous escalation, and volatility. Using ideas from statistics, signal processing, and ecology, we provide a predictive framework able to assimilate data and give confidence estimates on the predictions. We demonstrate our methods on the WikiLeaks Afghan War Diary. Our results show that the approach allows deeper insights into conflict dynamics and allows a strikingly statistically accurate forward prediction of armed opposition group activity in 2010, based solely on data from previous years.", "Massachusetts Institute of Technology and the University of Washington Reactive point processes (RPPs) are a new statistical model designed for predicting discrete events in time, based on past history. RPPs were developed to handle an important problem within the domain of electrical grid reliability: short term prediction of electrical grid failures (“manhole events”), including outages, fires, explosions, and smoking manholes, which can cause threats to public safety and reliability of electrical service in cities. RPPs incorporate self-exciting, self-regulating, and saturating components. 
The self-excitement occurs as a result of a past event, which causes a temporary rise in vulnerability to future events. The self-regulation occurs as a result of an external inspection which temporarily lowers vulnerability to future events. RPPs can saturate when too many events or inspections occur close together, which ensures that the probability of an event stays within a realistic range. Two of the operational challenges for power companies are i) making continuous-time failure predictions, and ii) cost benefit analysis for decision making and proactive maintenance. RPPs are naturally suited for handling both of these challenges. We use the model to predict power-grid failures in Manhattan over a short term horizon, and use to provide a cost benefit analysis of different proactive maintenance programs.", "ABSTRACTWe propose various self-exciting point process models for the times when e-mails are sent between individuals in a social network. Using an expectation–maximization (EM)-type approach, we fit these models to an e-mail network dataset from West Point Military Academy and the Enron e-mail dataset. We argue that the self-exciting models adequately capture major temporal clustering features in the data and perform better than traditional stationary Poisson models. We also investigate how accounting for diurnal and weekly trends in e-mail activity improves the overall fit to the observed network data. A motivation and application for fitting these self-exciting models is to use parameter estimates to characterize important e-mail communication behaviors such as the baseline sending rates, average reply rates, and average response times. A primary goal is to use these features, estimated from the self-exciting models, to infer the underlying leadership status of users in the West Point and Enron network...", "SUMMARY In recent years methods of data analysis for point processes have received some attention, for example, by Cox & Lewis (1966) and Lewis (1964). In particular Bartlett (1963a,b) has introduced methods of analysis based on the point spectrum. Theoretical models are relatively sparse. In this paper the theoretical properties of a class of processes with particular reference to the point spectrum or corresponding covariance density functions are discussed. A particular result is a self-exciting process with the same second-order properties as a certain doubly stochastic process. These are not distinguishable by methods of data analysis based on these properties.", "", "Sales pipeline win-propensity prediction is fundamental to effective sales management. In contrast to using subjective human rating, we propose a modern machine learning paradigm to estimate the win-propensity of sales leads over time. A profile-specific two-dimensional Hawkes processes model is developed to capture the influence from seller's activities on their leads to the win outcome, coupled with lead's personalized profiles. It is motivated by two observations: i) sellers tend to frequently focus their selling activities and efforts on a few leads during a relatively short time. This is evidenced and reflected by their concentrated interactions with the pipeline, including login, browsing and updating the sales leads which are logged by the system; ii) the pending opportunity is prone to reach its win outcome shortly after such temporally concentrated interactions. Our model is deployed and in continual use to a large, global, B2B multinational technology enter-prize (Fortune 500) with a case study. 
Due to the generality and flexibility of the model, it also enjoys the potential applicability to other real-world problems.", "Abstract This article discusses several classes of stochastic models for the origin times and magnitudes of earthquakes. The models are compared for a Japanese data set for the years 1885–1980 using likelihood methods. For the best model, a change of time scale is made to investigate the deviation of the data from the model. Conventional graphical methods associated with stationary Poisson processes can be used with the transformed time scale. For point processes, effective use of such residual analysis makes it possible to find features of the data set that are not captured in the model. Based on such analyses, the utility of seismic quiescence for the prediction of a major earthquake is investigated.", "" ] }
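The record above surveys temporal point processes and the self-exciting Hawkes process. As a concrete reference point, the sketch below evaluates the standard univariate Hawkes conditional intensity with an exponential kernel and simulates a sample path by Ogata-style thinning; the kernel choice, parameter values, and simulation horizon are textbook defaults picked for illustration, not taken from any of the cited applications.

```python
# Univariate Hawkes process with an exponential kernel:
#   lambda(t) = MU + ALPHA * sum_{t_i < t} exp(-BETA * (t - t_i)).
# Simulation uses Ogata-style thinning.  Kernel, parameters and horizon are
# textbook defaults chosen for illustration, not values from the cited work.
import math
import random

MU, ALPHA, BETA = 0.2, 0.8, 1.0   # baseline, excitation, decay (ALPHA < BETA)


def intensity(t, history):
    """Conditional intensity given past event times strictly before t."""
    return MU + ALPHA * sum(math.exp(-BETA * (t - ti)) for ti in history if ti < t)


def simulate(horizon, seed=0):
    """Draw one sample path on [0, horizon] by thinning."""
    rng = random.Random(seed)
    t, history = 0.0, []
    while t < horizon:
        lam_bar = intensity(t, history) + ALPHA   # upper bound just after t
        t += rng.expovariate(lam_bar)             # candidate event time
        if t < horizon and rng.random() <= intensity(t, history) / lam_bar:
            history.append(t)                     # accept the candidate
    return history


if __name__ == "__main__":
    events = simulate(horizon=50.0)
    print(len(events), "events; intensity at the horizon:", intensity(50.0, events))
```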
1908.01623
2966715342
Temporal point processes are widely used for sequential data modeling. In this paper, we focus on the problem of modeling sequential event propagation in a graph, such as retweeting by social network users or news transmission between websites. Given a collection of event propagation sequences, conventional point process models consider only the event history, i.e., they embed the event history into a vector and ignore the latent graph structure. We propose a Graph Biased Temporal Point Process (GBTPP) that leverages structural information from graph representation learning, where the direct influence between nodes and the indirect influence from the event history are modeled separately. Moreover, the learned node embedding vector is also integrated into the embedded event history as side information. Experiments on a synthetic dataset and two real-world datasets show the efficacy of our model compared to conventional methods and the state-of-the-art.
Traditional TPP models rely on parametric forms that involve manually designing the conditional intensity function @math , which measures the instantaneous event occurrence rate at time @math . A few popular examples include: the Poisson process @cite_25 , whose basic form is history independent, @math , and dates back to the 1900s; reinforced Poisson processes @cite_19 , which capture the 'rich-get-richer' mechanism via @math , where @math mimics the aging effect while @math is the accumulation of history events; the self-exciting (Hawkes) process @cite_16 , which provides an additive model capturing the self-exciting effect of history events, @math ; and the reactive point process @cite_21 , a generalization of the Hawkes process that adds a self-inhibiting term to account for the inhibiting effects of history, @math .
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_21", "@cite_25" ], "mid": [ "2145037371", "138372711", "2090320383", "" ], "abstract": [ "The models surveyed include generalized Polya urns, reinforced random walks, interacting urn models, and continuous reinforced processes. Emphasis is on methods and results, with sketches provided of some proofs. Applications are discussed in statistics, biology, economics and a number of other areas.", "IN contagious processes (e.g. measles, hijacking, etc.) the occurrence of events increases the probability of further events occurring in the near future. Also several series of events may interact with each other, for example one might consider notifications of some disease in a number of adjacent regions which would interact through infectives or carriers moving between the regions. In this paper we postulate a model for such processes and derive a general expression for the point spectral matrices. These theoretical spectra are useful for comparison with spectra estimated from data and thus provide a means of evaluating the fit of such a model in the manner of Bartlett (1963). The model studied was put forward in an earlier paper (Hawkes, 1971) but the solution was obtained only in special cases. In this paper an elegant solution is obtained for the. general case. Consider a stationary k-variate point process N(t), where Ni(t) represents the cumulative number of events in the ith process up to time t, with intensity vector X = of dN(t) dt and covariance density matrix", "Massachusetts Institute of Technology and the University of Washington Reactive point processes (RPPs) are a new statistical model designed for predicting discrete events in time, based on past history. RPPs were developed to handle an important problem within the domain of electrical grid reliability: short term prediction of electrical grid failures (“manhole events”), including outages, fires, explosions, and smoking manholes, which can cause threats to public safety and reliability of electrical service in cities. RPPs incorporate self-exciting, self-regulating, and saturating components. The self-excitement occurs as a result of a past event, which causes a temporary rise in vulnerability to future events. The self-regulation occurs as a result of an external inspection which temporarily lowers vulnerability to future events. RPPs can saturate when too many events or inspections occur close together, which ensures that the probability of an event stays within a realistic range. Two of the operational challenges for power companies are i) making continuous-time failure predictions, and ii) cost benefit analysis for decision making and proactive maintenance. RPPs are naturally suited for handling both of these challenges. We use the model to predict power-grid failures in Manhattan over a short term horizon, and use to provide a cost benefit analysis of different proactive maintenance programs.", "" ] }
1908.01623
2966715342
Temporal point processes are widely used for sequential data modeling. In this paper, we focus on the problem of modeling sequential event propagation in a graph, such as retweeting by social network users or news transmission between websites. Given a collection of event propagation sequences, conventional point process models consider only the event history, i.e., they embed the event history into a vector and ignore the latent graph structure. We propose a Graph Biased Temporal Point Process (GBTPP) that leverages structural information from graph representation learning, where the direct influence between nodes and the indirect influence from the event history are modeled separately. Moreover, the learned node embedding vector is also integrated into the embedded event history as side information. Experiments on a synthetic dataset and two real-world datasets show the efficacy of our model compared to conventional methods and the state-of-the-art.
One obvious limitation of the above TPP models is that they assume all samples obey a single parametric form, which is too idealistic for real-world data. By contrast, recurrent neural network (RNN) based models @cite_4 @cite_17 @cite_22 have been devised for learning point processes. In these works, RNNs and their variants, e.g. long short-term memory (LSTM) networks, are used to model the conditional intensity function over time. More recently, attention mechanisms have been introduced to improve the interpretability of the neural model @cite_18 .
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_22", "@cite_17" ], "mid": [ "2740189214", "2509830164", "2605191235", "2569260160" ], "abstract": [ "", "Large volumes of event data are becoming increasingly available in a wide variety of applications, such as healthcare analytics, smart cities and social network analysis. The precise time interval or the exact distance between two events carries a great deal of information about the dynamics of the underlying systems. These characteristics make such data fundamentally different from independently and identically distributed data and time-series data where time and space are treated as indexes rather than random variables. Marked temporal point processes are the mathematical framework for modeling event data with covariates. However, typical point process models often make strong assumptions about the generative processes of the event data, which may or may not reflect the reality, and the specifically fixed parametric assumptions also have restricted the expressive power of the respective processes. Can we obtain a more expressive model of marked temporal point processes? How can we learn such a model from massive data? In this paper, we propose the Recurrent Marked Temporal Point Process (RMTPP) to simultaneously model the event timings and the markers. The key idea of our approach is to view the intensity function of a temporal point process as a nonlinear function of the history, and use a recurrent neural network to automatically learn a representation of influences from the event history. We develop an efficient stochastic gradient algorithm for learning the model parameters which can readily scale up to millions of events. Using both synthetic and real world datasets, we show that, in the case where the true models have parametric specifications, RMTPP can learn the dynamics of such models without the need to know the actual parametric forms; and in the case where the true models are unknown, RMTPP can also learn the dynamics and achieve better predictive performance than other parametric alternatives based on particular prior assumptions.", "Event sequence, asynchronously generated with random timestamp, is ubiquitous among applications. The precise and arbitrary timestamp can carry important clues about the underlying dynamics, and has lent the event data fundamentally different from the time-series whereby series is indexed with fixed and equal time interval. One expressive mathematical tool for modeling event is point process. The intensity functions of many point processes involve two components: the background and the effect by the history. Due to its inherent spontaneousness, the background can be treated as a time series while the other need to handle the history events. In this paper, we model the background by a Recurrent Neural Network (RNN) with its units aligned with time series indexes while the history effect is modeled by another RNN whose units are aligned with asynchronous events to capture the long-range dynamics. The whole model with event type and timestamp prediction output layers can be trained end-to-end. Our approach takes an RNN perspective to point process, and models its background and history effect. For utility, our method allows a black-box treatment for modeling the intensity which is often a pre-defined parametric form in point processes. Meanwhile end-to-end training opens the venue for reusing existing rich techniques in deep network for point process modeling. 
We apply our model to the predictive maintenance problem using a log dataset by more than 1000 ATMs from a global bank headquartered in North America.", "Many events occur in the world. Some event types are stochastically excited or inhibited---in the sense of having their probabilities elevated or decreased---by patterns in the sequence of previous events. Discovering such patterns can help us predict which type of event will happen next and when. We model streams of discrete events in continuous time, by constructing a neurally self-modulating multivariate point process in which the intensities of multiple event types evolve according to a novel continuous-time LSTM. This generative model allows past events to influence the future in complex and realistic ways, by conditioning future event intensities on the hidden state of a recurrent neural network that has consumed the stream of past events. Our model has desirable qualitative properties. It achieves competitive likelihood and predictive accuracy on real and synthetic datasets, including under missing-data conditions." ] }
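The record above points to RNN-based intensity models such as RMTPP ( @cite_4 ). The sketch below is a deliberately simplified, untrained version of that idea: a vanilla RNN cell consumes (event type, inter-event gap) pairs and its final hidden state parameterizes an exponential-form intensity. The layer sizes, random weights, and the vanilla-RNN cell are illustrative stand-ins, not the published architecture or learned parameters.

```python
# Simplified, untrained sketch of an RNN-parameterized intensity in the
# spirit of RMTPP ( @cite_4 ): a vanilla RNN embeds the (event type,
# inter-event gap) history into a hidden state h, and the intensity after the
# last event is lambda(t) = exp(v.h + w * (t - t_last) + b).  Layer sizes,
# random weights and the vanilla cell are illustrative stand-ins, not the
# published architecture or learned parameters.
import numpy as np

rng = np.random.default_rng(0)
N_TYPES, HIDDEN = 4, 8

EMB = rng.normal(scale=0.1, size=(N_TYPES, HIDDEN))     # event-type embeddings
W_H = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))      # recurrent weights
W_X = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN + 1))  # input weights (emb + gap)
V = rng.normal(scale=0.1, size=HIDDEN)
W_T, B = 0.1, 0.0


def encode(events):
    """events: list of (type_id, timestamp); return final hidden state, last time."""
    h, last_t = np.zeros(HIDDEN), 0.0
    for k, t in events:
        x = np.concatenate([EMB[k], [t - last_t]])       # type embedding + time gap
        h = np.tanh(W_H @ h + W_X @ x)                   # vanilla RNN update
        last_t = t
    return h, last_t


def intensity(t, events):
    """RMTPP-style conditional intensity for t after the last observed event."""
    h, last_t = encode(events)
    return float(np.exp(V @ h + W_T * (t - last_t) + B))


if __name__ == "__main__":
    history = [(0, 0.3), (2, 1.1), (1, 2.0)]             # (event type, timestamp)
    print(intensity(2.5, history))
```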
1908.01623
2966715342
Temporal point processes are widely used for sequential data modeling. In this paper, we focus on the problem of modeling sequential event propagation in a graph, such as retweeting by social network users or news transmission between websites. Given a collection of event propagation sequences, conventional point process models consider only the event history, i.e., they embed the event history into a vector and ignore the latent graph structure. We propose a Graph Biased Temporal Point Process (GBTPP) that leverages structural information from graph representation learning, where the direct influence between nodes and the indirect influence from the event history are modeled separately. Moreover, the learned node embedding vector is also integrated into the embedded event history as side information. Experiments on a synthetic dataset and two real-world datasets show the efficacy of our model compared to conventional methods and the state-of-the-art.
When dealing with event propagation sequences, a major limitation of these existing studies is that the structural information of the latent graph @math is not utilized. Conventional TPP models, including the state-of-the-art method in @cite_4 , treat event propagation as generic event sequence modeling and take input @math , while our GBTPP model leverages the structural information and node proximity of the graph @math and takes input @math , where @math is the node embedding vector obtained by a graph representation learning method for @math .
{ "cite_N": [ "@cite_4" ], "mid": [ "2509830164" ], "abstract": [ "Large volumes of event data are becoming increasingly available in a wide variety of applications, such as healthcare analytics, smart cities and social network analysis. The precise time interval or the exact distance between two events carries a great deal of information about the dynamics of the underlying systems. These characteristics make such data fundamentally different from independently and identically distributed data and time-series data where time and space are treated as indexes rather than random variables. Marked temporal point processes are the mathematical framework for modeling event data with covariates. However, typical point process models often make strong assumptions about the generative processes of the event data, which may or may not reflect the reality, and the specifically fixed parametric assumptions also have restricted the expressive power of the respective processes. Can we obtain a more expressive model of marked temporal point processes? How can we learn such a model from massive data? In this paper, we propose the Recurrent Marked Temporal Point Process (RMTPP) to simultaneously model the event timings and the markers. The key idea of our approach is to view the intensity function of a temporal point process as a nonlinear function of the history, and use a recurrent neural network to automatically learn a representation of influences from the event history. We develop an efficient stochastic gradient algorithm for learning the model parameters which can readily scale up to millions of events. Using both synthetic and real world datasets, we show that, in the case where the true models have parametric specifications, RMTPP can learn the dynamics of such models without the need to know the actual parametric forms; and in the case where the true models are unknown, RMTPP can also learn the dynamics and achieve better predictive performance than other parametric alternatives based on particular prior assumptions." ] }