Schema:
  aid           string (9-15 chars)
  mid           string (7-10 chars)
  abstract      string (78-2.56k chars)
  related_work  string (92-1.77k chars)
  ref_abstract  dict
1708.04728
2747590145
Deploying deep neural networks on mobile devices is a challenging task. Current model compression methods such as matrix decomposition effectively reduce the deployed model size, but still cannot satisfy real-time processing requirements. This paper first discovers that the major obstacle is the excessive execution time of non-tensor layers such as pooling and normalization, which have no tensor-like trainable parameters. This motivates us to design a novel acceleration framework, DeepRebirth, which "slims" existing consecutive and parallel non-tensor and tensor layers. The layer slimming is executed on different substructures: (a) streamline slimming, which merges consecutive non-tensor and tensor layers vertically; (b) branch slimming, which merges non-tensor and tensor branches horizontally. The proposed optimization operations significantly accelerate model execution and also greatly reduce the run-time memory cost, since the slimmed model architecture contains fewer hidden layers. To maximally avoid accuracy loss, the parameters of the newly generated layers are learned with layer-wise fine-tuning based on both theoretical analysis and empirical verification. As observed in the experiments, DeepRebirth achieves more than 3x speed-up and 2.5x run-time memory saving on GoogLeNet with only a 0.4% drop in top-5 accuracy on ImageNet. Furthermore, by combining with other model compression techniques, DeepRebirth offers an average of 65ms inference time on the CPU of a Samsung Galaxy S6 with 86.5% top-5 accuracy, 14% faster than SqueezeNet, which only has a top-5 accuracy of 80.5%.
Recently, SqueezeNet @cite_15 has become widely used for its much smaller memory cost and increased speed. However, its near-AlexNet accuracy is far below the state-of-the-art performance. Compared with these networks, our approach achieves much better accuracy with more significant acceleration. @cite_30 showed that the conv-relu-pool substructure may not be necessary in a neural network architecture: the authors find that max-pooling can simply be replaced by another convolution layer with increased stride without loss in accuracy. Different from that work, our approach replaces a complete substructure (e.g., conv-relu-pool, conv-relu-LRN-pool) with a single convolution layer, and aims to speed up model execution on mobile devices. In addition, our work slims a well-trained network by relearning the merged layers and does not require training from scratch. Essentially, our approach can be considered a special form of distillation @cite_12 that transfers the knowledge from the cumbersome substructure of multiple layers to the new accelerated substructure.
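The substructure-merging idea above (replacing a conv-pool block with a single strided convolution) can be sketched with output-shape arithmetic. This is an illustrative sketch only: the input size, kernel sizes, and strides below are assumed values, and in practice the merged layer's weights must then be relearned (e.g., by layer-wise fine-tuning).

```python
# Output spatial size of a convolution / pooling layer (integer arithmetic).
def conv_out(size, kernel, stride, pad):
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel, stride):
    return (size - kernel) // stride + 1

# conv-relu-pool substructure: 3x3 conv, stride 1, pad 1, then 2x2 max-pool, stride 2.
h = conv_out(224, 3, 1, 1)   # 224 -> 224
h = pool_out(h, 2, 2)        # 224 -> 112

# "Slimmed" single layer: one 3x3 conv with stride 2, pad 1.
h_merged = conv_out(224, 3, 2, 1)  # 224 -> 112

# The merged layer preserves the output geometry of the original substructure.
assert h == h_merged == 112
```

The shape equality is what makes the replacement a drop-in change; the accuracy of the slimmed model then depends entirely on retraining the merged layer's parameters.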
{ "cite_N": [ "@cite_30", "@cite_15", "@cite_12" ], "mid": [ "2123045220", "2279098554", "" ], "abstract": [ "Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). To analyze the network we introduce a new variant of the \"deconvolution approach\" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches.", "Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. 
Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).", "" ] }
1708.04863
2748458152
Agreement plays a central role in distributed systems working on a common task. The increasing size of modern distributed systems makes them more susceptible to single component failures. Fault-tolerant distributed agreement protocols rely for the most part on leader-based atomic broadcast algorithms, such as Paxos. Such protocols are mostly used for data replication, which requires only a small number of servers to reach agreement. Yet, their centralized nature makes them ill-suited for distributed agreement at large scales. The recently introduced atomic broadcast algorithm AllConcur enables high throughput for distributed agreement while being completely decentralized. In this paper, we extend the work on AllConcur in two ways. First, we provide a formal specification of AllConcur that enables a better understanding of the algorithm. Second, we formally prove AllConcur's safety property on the basis of this specification. Therefore, our work not only ensures operators safe usage of AllConcur, but also facilitates the further improvement of distributed agreement protocols based on AllConcur.
Atomic broadcast plays a central role in fault-tolerant distributed systems; for instance, it enables the implementation of both state machine replication @cite_19 @cite_21 and distributed agreement @cite_0 @cite_22 . As a result, the atomic broadcast problem sparked numerous proposals for algorithms @cite_13 . Many of the proposed algorithms rely on a distinguished server (i.e., a leader) to provide total order; yet, the leader may become a bottleneck, especially at large scale. As an alternative, total order can be achieved by destinations agreement @cite_13 @cite_11 @cite_0 . On the one hand, destinations agreement enables decentralized atomic broadcast algorithms; on the other hand, it entails agreement on the set of delivered messages and, thus, it requires consensus.
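The destinations-agreement approach can be sketched as follows: once all nodes agree on the *set* of messages for a round, a deterministic tie-break yields the same delivery order at every node, with no leader involved. This is a minimal illustrative sketch of the idea, not the AllConcur algorithm itself; the message-tuple format is an assumption.

```python
# Each message is a (sender_id, seq_no, payload) tuple. After consensus on the
# set of messages delivered in a round, every node applies the same
# deterministic order (here: sort by sender id, then sequence number).
def deliver_in_total_order(agreed_set):
    return sorted(agreed_set)

# Two nodes received the same set of messages, in different network orders.
node_a_view = {("n2", 1, "y"), ("n1", 1, "x"), ("n3", 1, "z")}
node_b_view = {("n3", 1, "z"), ("n1", 1, "x"), ("n2", 1, "y")}

# Both nodes deliver in the identical total order.
assert deliver_in_total_order(node_a_view) == deliver_in_total_order(node_b_view)
```

The hard part, which this sketch omits, is the consensus on the delivered set itself, which is exactly what requires a fault-tolerant agreement protocol.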
{ "cite_N": [ "@cite_22", "@cite_21", "@cite_0", "@cite_19", "@cite_13", "@cite_11" ], "mid": [ "2042842080", "212436359", "2680467112", "2152465173", "2130264930", "2133943294" ], "abstract": [ "An elastic and highly available data store is a key component of many cloud applications. Existing data stores with strong consistency guarantees are designed and optimized for small updates, key-value access, and (if supported) small range queries over a predefined key column. This raises performance and availability problems for applications which inherently require large updates, non-key access, and large range queries. This paper presents a solution to these problems: Crescando RB; a distributed, scan-based, main memory, relational data store (single table) with robust performance and high availability. The system addresses a real, large-scale industry use case: the Amadeus travel management system. This paper focuses on the distribution layer of Crescando RB, the problem and theory behind it, the rationale underlying key design decisions, and the novel multicast protocol and replication framework it is composed of. Highlighting the key features of the distribution layer, we present experimental results showing that even under permanent node failures and large-scale data repartitioning, Crescando RB remains fully available and capable of sustaining a heavy query and update load.", "A high-pressure furnace for cracking hydrocarbons to produce olefin. Long flame burners produce combustion gases to circulate through radiant and convection sections in a furnace under pressure to crack hydrocarbons. Flue gas from the furnace serves to produce high-pressure steam, provide coolant to quench cracked gas, preheat the hydrocarbon-steam feed and aid in driving a turbine-compressor assembly.", "Many distributed systems require coordination between the components involved. 
With the steady growth of such systems, the probability of failures increases, which necessitates scalable fault-tolerant agreement protocols. The most common practical agreement protocol, for such scenarios, is leader-based atomic broadcast. In this work, we propose AllConcur, a distributed system that provides agreement through a leaderless concurrent atomic broadcast algorithm, thus, not suffering from the bottleneck of a central coordinator. In AllConcur, all components exchange messages concurrently through a logical overlay network that employs early termination to minimize the agreement latency. Our implementation of AllConcur supports standard sockets-based TCP as well as high-performance InfiniBand Verbs communications. AllConcur can handle up to 135 million requests per second and achieves 17x higher throughput than today's standard leader-based protocols, such as Libpaxos. Thus, AllConcur is highly competitive with regard to existing solutions and, due to its decentralized approach, enables hitherto unattainable system designs in a variety of fields.", "The state machine approach is a general method for implementing fault-tolerant services in distributed systems. This paper reviews the approach and describes protocols for two different failure models—Byzantine and fail stop. Systems reconfiguration techniques for removing faulty components and integrating repaired components are also discussed.", "Total order broadcast and multicast (also called atomic broadcast multicast) present an important problem in distributed systems, especially with respect to fault-tolerance. In short, the primitive ensures that messages sent to a set of processes are, in turn, delivered by all those processes in the same total order.", "We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. 
We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [ 1992]." ] }
1708.04955
2774797382
A decentralized online quantum cash system, called qBitcoin, is given. We design the system so that it enjoys the following benefits of quantization. First, quantum teleportation technology is used for coin transactions, which prevents the owner of a coin from keeping the original coin data even after sending the coin to another party. This was a main problem in classical systems, and the blockchain was introduced to solve it. In qBitcoin, the double-spending problem never happens, and its security is guaranteed theoretically by virtue of quantum information theory. Making a block is time consuming, so the qBitcoin system is based on a quantum chain instead of blocks; therefore a payment can be completed much faster than in Bitcoin. Moreover, we employ quantum digital signatures so that the system naturally inherits the properties of a peer-to-peer (P2P) cash system as originally proposed in Bitcoin.
Attempts to build a money system based on quantum mechanics have a long history. Wiesner is believed to have made a prototype around 1970 (published in 1983) @cite_5 , in which quantum money that can be verified by a bank is given. In his scheme, quantum money was secure in the sense that it cannot be copied due to the no-cloning theorem; however, there were several problems. For example, a bank needs to maintain a giant database to store classical information about the quantum money. Aaronson proposed a quantum money scheme where a public key is used to verify a banknote @cite_17 , and his scheme was later developed in @cite_9 . There is a survey of attempts to quantize Bitcoin @cite_21 based on a classical blockchain system and the classical digital signature protocol proposed in @cite_9 . However, all of those works rely on classical digital signature protocols and classical coin transmission systems, hence computational hardness assumptions are vital to their security. In other words, if a computer with ultimate computational ability appears someday, the money systems above are in danger of collapsing, as today's bank systems would be.
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_21", "@cite_17" ], "mid": [ "", "2949369157", "2342122584", "2162106291" ], "abstract": [ "", "Forty years ago, Wiesner pointed out that quantum mechanics raises the striking possibility of money that cannot be counterfeited according to the laws of physics. We propose the first quantum money scheme that is (1) public-key, meaning that anyone can verify a banknote as genuine, not only the bank that printed it, and (2) cryptographically secure, under a \"classical\" hardness assumption that has nothing to do with quantum money. Our scheme is based on hidden subspaces, encoded as the zero-sets of random multivariate polynomials. A main technical advance is to show that the \"black-box\" version of our scheme, where the polynomials are replaced by classical oracles, is unconditionally secure. Previously, such a result had only been known relative to a quantum oracle (and even there, the proof was never published). Even in Wiesner's original setting -- quantum money that can only be verified by the bank -- we are able to use our techniques to patch a major security hole in Wiesner's scheme. We give the first private-key quantum money scheme that allows unlimited verifications and that remains unconditionally secure, even if the counterfeiter can interact adaptively with the bank. Our money scheme is simpler than previous public-key quantum money schemes, including a previously proposed knot-based scheme. The verifier needs to perform only two tests, one in the standard basis and one in the Hadamard basis -- matching the original intuition for quantum money, based on the existence of complementary observables. Our security proofs use a new variant of Ambainis's quantum adversary method, and several other tools that might be of independent interest.", "The digital currency Bitcoin has had remarkable growth since it was first proposed in 2008. 
Its distributed nature allows currency transactions without a central authority by using cryptographic methods and a data structure called the blockchain. In this paper we use the no-cloning theorem of quantum mechanics to introduce Quantum Bitcoin, a Bitcoin-like currency that runs on a quantum computer. We show that our construction of quantum shards and two blockchains allows untrusted peers to mint quantum money without risking the integrity of the currency. The Quantum Bitcoin protocol has several advantages over classical Bitcoin, including immediate local verification of transactions. This is a major improvement since we no longer need the computationally intensive and time-consuming method Bitcoin uses to record all transactions in the blockchain. Instead, Quantum Bitcoin only records newly minted currency which drastically reduces the footprint and increases efficiency. We present formal security proofs for counterfeiting resistance and show that a quantum bitcoin can be re-used a large number of times before wearing out - just like ordinary coins and banknotes. Quantum Bitcoin is the first distributed quantum money system and we show that the lack of a paper trail implies full anonymity for the users. In addition, there are no transaction fees and the system can scale to any transaction volume.", "Forty years ago, Wiesner proposed using quantum states to create money that is physically impossible to counterfeit, something that cannot be done in the classical world. However, Wiesner's scheme required a central bank to verify the money, and the question of whether there can be unclonable quantum money that anyone can verify has remained open since. One can also ask a related question, which seems to be new: can quantum states be used as copy-protected programs, which let the user evaluate some function f, but not create more programs for f? This paper tackles both questions using the arsenal of modern computational complexity. 
Our main result is that there exist quantum oracles relative to which publicly-verifiable quantum money is possible, and any family of functions that cannot be efficiently learned from its input-output behavior can be quantumly copy-protected. This provides the first formal evidence that these tasks are achievable. The technical core of our result is a \"Complexity-Theoretic No-Cloning Theorem,\" which generalizes both the standard No-Cloning Theorem and the optimality of Grover search, and might be of independent interest. Our security argument also requires explicit constructions of quantum t-designs. Moving beyond the oracle world, we also present an explicit candidate scheme for publicly-verifiable quantum money, based on random stabilizer states; as well as two explicit schemes for copy-protecting the family of point functions. We do not know how to base the security of these schemes on any existing cryptographic assumption. (Note that without an oracle, we can only hope for security under some computational assumption.)" ] }
1708.04871
2748038854
We present SMAUG (Secure Mobile Authentication Using Gestures), a novel biometric-assisted authentication algorithm for mobile devices that is solely based on data collected from multiple sensors that are usually installed on modern devices -- touch screen, gyroscope and accelerometer. As opposed to existing approaches, our system supports fully flexible user input such as free-form gestures, multi-touch, and an arbitrary number of strokes. Our experiments confirm that this approach provides a high level of robustness and security. More precisely, in 77% of all our test cases over all gestures considered, a user has been correctly identified during the first authentication attempt, and in 99% after the third attempt, while an attacker has been detected in 97% of all test cases. For example, gestures that have a good balance between complexity and usability, e.g., drawing two parallel lines using two fingers at the same time, gave a 100% success rate after three login attempts and a 97% impostor detection rate. We stress that we consider the strongest possible attacker model: an attacker is not only allowed to monitor the legitimate user during the authentication process, but also receives additional information on the biometric properties, for example pressure, speed, rotation, and acceleration. We see this method as a significant step beyond existing authentication methods that can be deployed directly to devices in use without the need for additional hardware.
With respect to gesture recognition for single-touch gestures, Rubine @cite_30 is the usual reference when comparing new single-touch algorithms. Another prominent example of single-touch, single-stroke gesture recognition is @cite_0 . The authors of @cite_36 present a very efficient follow-up work for single-touch, multi-stroke gestures. In short, their algorithm joins all strokes in all possible combinations and thereby reduces the gesture recognition problem to the case of single-touch gestures. However, this algorithm family needs a predefined set of gestures. The authors of @cite_1 developed a multi-dimensional DTW for gesture recognition. In @cite_35 , the authors present a gesture-based user authentication scheme for touch screens using solely the accelerometer. 3D hand gesture recognition in the air with mobile devices and an accelerometer is examined in @cite_15 . Similar research was done for gesture recognition with the Kinect in @cite_27 , and with the Wii in @cite_38 .
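Several of the schemes above rely on dynamic time warping (DTW) to compare gesture traces of different lengths. The following is a minimal one-dimensional DTW sketch; real systems, such as the multi-dimensional DTW mentioned above, operate on vectors of touch and sensor features rather than scalars.

```python
# Classic O(n*m) DTW distance between two 1-D sequences: the cost of the
# cheapest monotonic alignment, allowing one sample to match several.
def dtw(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# A slower (stretched) repetition of the same gesture stays close to the
# template; a mirrored gesture does not.
trace = [0, 1, 2, 3, 2, 1, 0]
slow  = [0, 0, 1, 1, 2, 2, 3, 3, 2, 2, 1, 1, 0, 0]
other = [3, 2, 1, 0, 1, 2, 3]
assert dtw(trace, slow) < dtw(trace, other)
```

This tolerance to time-axis stretching is precisely why DTW suits gesture input, where the same user performs the same shape at varying speeds.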
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_38", "@cite_36", "@cite_1", "@cite_0", "@cite_27", "@cite_15" ], "mid": [ "2153360297", "2055389916", "2147780311", "1526735218", "", "2097248932", "2402746099", "" ], "abstract": [ "Gesture-Based interfaces offer an alternative to traditional keyboard, menu, and direct manipulation interfaces. The ability to specify objects, an operation, and additional parameters with a single intuitive gesture appeals to both novice and experienced users. Unfortunately, gesture-based interfaces have not been extensively researched, partly because they are difficult to create. This paper describes GRANDMA, a toolkit for rapidly adding gestures to direct manipulation interfaces. The trainable single-stroke gesture recognizer used by GRANDMA is also described.", "With the rich functionalities and enhanced computing capabilities available on mobile computing devices with touch screens, users not only store sensitive information (such as credit card numbers) but also use privacy sensitive applications (such as online banking) on these devices, which make them hot targets for hackers and thieves. To protect private information, such devices typically lock themselves after a few minutes of inactivity and prompt a password PIN pattern screen when reactivated. Passwords PINs patterns based schemes are inherently vulnerable to shoulder surfing attacks and smudge attacks. Furthermore, passwords PINs patterns are inconvenient for users to enter frequently. In this paper, we propose GEAT, a gesture based user authentication scheme for the secure unlocking of touch screen devices. Unlike existing authentication schemes for touch screen devices, which use what user inputs as the authentication secret, GEAT authenticates users mainly based on how they input, using distinguishing features such as finger velocity, device acceleration, and stroke time. 
Even if attackers see what gesture a user performs, they cannot reproduce the behavior of the user doing gestures through shoulder surfing or smudge attacks. We implemented GEAT on Samsung Focus running Windows, collected 15009 gesture samples from 50 volunteers, and conducted real-world experiments to evaluate GEAT's performance. Experimental results show that our scheme achieves an average equal error rate of 0.5% with 3 gestures using only 25 training samples.", "The proliferation of accelerometers on consumer electronics has brought an opportunity for interaction based on gestures or physical manipulation of the devices. We present uWave, an efficient recognition algorithm for such interaction using a single three-axis accelerometer. Unlike statistical methods, uWave requires a single training sample for each gesture pattern and allows users to employ personalized gestures and physical manipulations. We evaluate uWave using a large gesture library with over 4000 samples collected from eight users over an elongated period of time for a gesture vocabulary with eight gesture patterns identified by a Nokia research. It shows that uWave achieves 98.6% accuracy, competitive with statistical methods that require significantly more training samples. Our evaluation data set is the largest and most extensive in published studies, to the best of our knowledge. We also present applications of uWave in gesture-based user authentication and interaction with three-dimensional mobile user interfaces using user created gestures.", "Prior work introduced the $1 recognizer, and Protractor has since been introduced, a unistroke pen and finger gesture recognition algorithm also based on template-matching like $1 before it. This paper presents work to streamline $N, which is negligibly less accurate (<0.2%). We also discuss the impact that the number of templates, the input speed, and input method (e.g., pen vs. finger) have on recognition accuracy, and examine the most confusable gestures.", "", "Although mobile, tablet, large display, and tabletop computers increasingly present opportunities for using pen, finger, and wand gestures in user interfaces, implementing gesture recognition largely has been the privilege of pattern matching experts, not user interface prototypers. Although some user interface libraries and toolkits offer gesture recognizers, such infrastructure is often unavailable in design-oriented environments like Flash, scripting environments like JavaScript, or brand new off-desktop prototyping environments. To enable novice programmers to incorporate gestures into their UI prototypes, we present a \"$1 recognizer\". In a study comparing the $1 recognizer, Dynamic Time Warping, and the Rubine classifier on user-supplied gestures, we found $1 to perform well, and we present a simple write-up of the $1 recognizer to aid development, inspection, extension, and testing.", "Password-based authentication is easy to use but its security is bounded by how much a user can remember. Biometrics-based authentication requires no memorization but ‘resetting’ a biometric password may not always be possible. In this paper, we propose a user-friendly authentication system (KinWrite) that allows users to choose arbitrary, short and easy-to-memorize passwords while providing resilience to password cracking and password theft. KinWrite lets users write their passwords in 3D space and captures the handwriting motion using a low cost motion input sensing device—Kinect. The low resolution and noisy data captured by Kinect, combined with low consistency of in-space handwriting, have made it challenging to verify users. To overcome these challenges, we exploit the Dynamic Time Warping (DTW) algorithm to quantify similarities between handwritten passwords. 
Our experimental results involving 35 signatures from 18 subjects and a brute-force attacker have shown that KinWrite can achieve a 100% precision and a 70% recall (the worst case) for verifying honest users, encouraging us to carry out a much larger scale study towards designing a foolproof system.", "" ] }
1708.04871
2748038854
We present SMAUG (Secure Mobile Authentication Using Gestures), a novel biometric-assisted authentication algorithm for mobile devices that is solely based on data collected from multiple sensors that are usually installed on modern devices -- touch screen, gyroscope and accelerometer. As opposed to existing approaches, our system supports fully flexible user input such as free-form gestures, multi-touch, and an arbitrary number of strokes. Our experiments confirm that this approach provides a high level of robustness and security. More precisely, in 77% of all our test cases over all gestures considered, a user has been correctly identified during the first authentication attempt, and in 99% after the third attempt, while an attacker has been detected in 97% of all test cases. For example, gestures that have a good balance between complexity and usability, e.g., drawing two parallel lines using two fingers at the same time, gave a 100% success rate after three login attempts and a 97% impostor detection rate. We stress that we consider the strongest possible attacker model: an attacker is not only allowed to monitor the legitimate user during the authentication process, but also receives additional information on the biometric properties, for example pressure, speed, rotation, and acceleration. We see this method as a significant step beyond existing authentication methods that can be deployed directly to devices in use without the need for additional hardware.
Continuous authentication means that the device constantly tracks and evaluates the user's inputs and movements on the device to authenticate the user. Such schemes generally suffer from some form of privacy loss. Algorithms can be found in @cite_20 @cite_25 @cite_10 . In @cite_31 , the authors present an attack on the graphical password system of Windows 8. @cite_3 gives an overview of the graphical password schemes developed so far. An enhancement of the Android pattern authentication, which utilizes the accelerometer, was presented in @cite_29 . The authors of @cite_12 give an authentication algorithm where up to five fingers can be used for multi-touch, single-stroke (per finger) input in combination with the touch screen and accelerometer. Furthermore, they defined adversary models for mobile gesture recognition based on @cite_27 , all of which are weaker than our adversary model. In @cite_18 , the authors allow multi-touch and free-form gestures and measure the amount of information in a gesture that can be used for authentication. @cite_17 presents a multi-touch authentication algorithm for five fingers using touch screen data and a predefined gesture set. Finally, in @cite_28 the authors test free-form gesture authentication outside laboratory environments.
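The "authenticate by how the user inputs, not what they input" idea underlying several of these schemes can be sketched as per-feature template matching: enroll a few samples of behavioral features, then accept an attempt only if every feature lies within a few standard deviations of the enrolled mean. The feature names, values, and the 3-sigma threshold below are illustrative assumptions, not any cited system's actual design.

```python
from statistics import mean, stdev

def enroll(samples):
    """Build a per-feature (mean, stdev) profile from enrollment samples."""
    keys = samples[0].keys()
    return {k: (mean(s[k] for s in samples), stdev(s[k] for s in samples)) for k in keys}

def authenticate(profile, attempt, k=3.0):
    """Accept only if every feature is within k standard deviations of the mean."""
    return all(abs(attempt[f] - mu) <= k * sigma for f, (mu, sigma) in profile.items())

# Hypothetical behavioral features of a gesture: mean stroke speed and duration.
profile = enroll([
    {"speed": 1.0, "duration": 0.50},
    {"speed": 1.1, "duration": 0.55},
    {"speed": 0.9, "duration": 0.45},
])

assert authenticate(profile, {"speed": 1.05, "duration": 0.52})       # legitimate user
assert not authenticate(profile, {"speed": 3.0, "duration": 1.5})     # impostor
```

Deployed systems replace the naive threshold with trained classifiers over many more features (pressure, rotation, acceleration), but the enrollment/verification structure is the same.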
{ "cite_N": [ "@cite_18", "@cite_28", "@cite_29", "@cite_3", "@cite_27", "@cite_12", "@cite_31", "@cite_10", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2952848219", "2399430481", "1976081290", "1964132625", "2402746099", "2113925088", "193079345", "1979467970", "2151854612", "2404603298", "2163582782" ], "abstract": [ "This paper studies the security and memorability of free-form multitouch gestures for mobile authentication. Towards this end, we collected a dataset with a generate-test-retest paradigm where participants (N=63) generated free-form gestures, repeated them, and were later retested for memory. Half of the participants decided to generate one-finger gestures, and the other half generated multi-finger gestures. Although there has been recent work on template-based gestures, there are yet no metrics to analyze security of either template or free-form gestures. For example, entropy-based metrics used for text-based passwords are not suitable for capturing the security and memorability of free-form gestures. Hence, we modify a recently proposed metric for analyzing information capacity of continuous full-body movements for this purpose. Our metric computed estimated mutual information in repeated sets of gestures. Surprisingly, one-finger gestures had higher average mutual information. Gestures with many hard angles and turns had the highest mutual information. The best-remembered gestures included signatures and simple angular shapes. We also implemented a multitouch recognizer to evaluate the practicality of free-form gestures in a real authentication system and how they perform against shoulder surfing attacks. We conclude the paper with strategies for generating secure and memorable free-form gestures, which present a robust method for mobile authentication.", "Free-form gesture passwords have been introduced as an alternative mobile authentication method. 
Text passwords are not very suitable for mobile interaction, and methods such as PINs and grid patterns sacrifice security over usability. However, little is known about how free-form gestures perform in the wild. We present the first field study (N=91) of mobile authentication using free-form gestures, with text passwords as a baseline. Our study leveraged Experience Sampling Methodology to increase ecological validity while maintaining control of the experiment. We found that, with gesture passwords, participants generated new passwords and authenticated faster with comparable memorability while being more willing to retry. Our analysis of the gesture password dataset indicated biases in user-chosen distribution tending towards common shapes. Our findings provide useful insights towards understanding mobile device authentication and gesture-based authentication.", "Password patterns, as used on current Android phones, and other shape-based authentication schemes are highly usable and memorable. In terms of security, they are rather weak since the shapes are easy to steal and reproduce. In this work, we introduce an implicit authentication approach that enhances password patterns with an additional security layer, transparent to the user. In short, users are not only authenticated by the shape they input but also by the way they perform the input. We conducted two consecutive studies, a lab and a long-term study, using Android applications to collect and log data from user input on a touch screen of standard commercial smartphones. Analyses using dynamic time warping (DTW) provided first proof that it is actually possible to distinguish different users and use this information to increase security of the input while keeping the convenience for the user high.", "Beginning around 1996, numerous graphical password schemes have been proposed, motivated by improving password usability and security, two key factors in password scheme evaluation. 
In this paper, we focus on the security aspects of existing graphical password schemes, which not only gives a simple introduction of attack methods but also intends to provide an in-depth analysis with specific schemes. The paper first categorizes existing graphical password schemes into four kinds according to the authentication style and provides a comprehensive introduction and analysis for each scheme, highlighting security aspects. Then we review the known attack methods, categorize them into two kinds, and summarize the security reported in some user studies of those schemes. Finally, some suggestions are given for future research.", "Password-based authentication is easy to use but its security is bounded by how much a user can remember. Biometrics-based authentication requires no memorization but ‘resetting’ a biometric password may not always be possible. In this paper, we propose a user-friendly authentication system (KinWrite) that allows users to choose arbitrary, short and easy-to-memorize passwords while providing resilience to password cracking and password theft. KinWrite lets users write their passwords in 3D space and captures the handwriting motion using a low cost motion input sensing device—Kinect. The low resolution and noisy data captured by Kinect, combined with low consistency of in-space handwriting, have made it challenging to verify users. To overcome these challenges, we exploit the Dynamic Time Warping (DTW) algorithm to quantify similarities between handwritten passwords. Our experimental results involving 35 signatures from 18 subjects and a brute-force attacker have shown that KinWrite can achieve a 100 precision and a 70 recall (the worst case) for verifying honest users, encouraging us to carry out a much larger scale study towards designing a foolproof system.", "Mobile authentication is indispensable for preventing unauthorized access to multi-touch mobile devices. 
Existing mobile authentication techniques are often cumbersome to use and also vulnerable to shoulder-surfing and smudge attacks. This paper focuses on designing, implementing, and evaluating TouchIn, a two-factor authentication system on multi-touch mobile devices. TouchIn works by letting a user draw on the touchscreen with one or multiple fingers to unlock his mobile device, and the user is authenticated based on the geometric properties of his drawn curves as well as his behavioral and physiological characteristics. TouchIn allows the user to draw on arbitrary regions on the touchscreen without looking at it. This nice sightless feature makes TouchIn very easy to use and also robust to shoulder-surfing and smudge attacks. Comprehensive experiments on Android devices confirm the high security and usability of TouchIn.", "Various graphical passwords have been proposed as an alternative to traditional alphanumeric passwords and Microsoft has applied a graphical scheme in the operating system Windows 8. As a new type of password scheme, potential security problems such as hot-spots may exist. In this paper, we study user choice in the Windows 8 graphical password scheme by both lab and field studies and analyze the hot-spots caused by user choice. Our analysis shows that there are many significant hot-spots in the background image when users set their passwords using Microsoft’s guidance. Then, based on the data of the field study, we conducted a simulated human-seeded attack to prove our conclusion. The success rates of 66.69% and 54.46% also provide strong proof of the hot-spots in the Windows 8 graphical password scheme. Finally, we designed a simulated automated attack and obtained a success rate of 42.86%.", "", "We investigate whether a classifier can continuously authenticate users based on the way they interact with the touchscreen of a smart phone.
We propose a set of 30 behavioral touch features that can be extracted from raw touchscreen logs and demonstrate that different users populate distinct subspaces of this feature space. In a systematic experiment designed to test how this behavioral pattern exhibits consistency over time, we collected touch data from users interacting with a smart phone using basic navigation maneuvers, i.e., up-down and left-right scrolling. We propose a classification framework that learns the touch behavior of a user during an enrollment phase and is able to accept or reject the current user by monitoring interaction with the touch screen. The classifier achieves a median equal error rate of 0% for intrasession authentication, 2%-3% for intersession authentication, and below 4% when the authentication test was carried out one week after the enrollment phase. While our experimental findings disqualify this method as a standalone authentication mechanism for long-term authentication, it could be implemented as a means to extend screen-lock time or as a part of a multimodal biometric authentication system.
The system uses a classifier to learn the owner’s finger movement patterns and checks the current user’s finger movement patterns against the owner’s. The system continuously re-authenticates the current user without interrupting user-smartphone interactions. Experiments show that our system is efficient on smartphones and achieves high accuracy.", "In this paper, we present a novel multi-touch gesture-based authentication technique. We take advantage of the multi-touch surface to combine biometric techniques with gestural input. We defined a comprehensive set of five-finger touch gestures, based upon classifying movement characteristics of the center of the palm and fingertips, and tested them in a user study combining biometric data collection with usability questions. Using pattern recognition techniques, we built a classifier to recognize unique biometric gesture characteristics of an individual. We achieved a 90% accuracy rate with single gestures, and saw significant improvement when multiple gestures were performed in sequence. We found user ratings of a gesture's desirable characteristics (ease, pleasure, excitement) correlated with a gesture's actual biometric recognition rate - that is to say, user ratings aligned well with gestural security, in contrast to typical text-based passwords. Based on these results, we conclude that multi-touch gestures show great promise as an authentication mechanism." ] }
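Several of the touch-biometric abstracts above (e.g., KinWrite and the implicit password-pattern study) compare input traces with Dynamic Time Warping (DTW). The sketch below shows the classic DTW distance between two 1-D traces; the function name and toy traces are illustrative, not taken from any cited system.

```python
# Minimal Dynamic Time Warping (DTW) distance, the similarity measure used by
# systems such as KinWrite to compare handwriting/gesture traces.
# Illustrative sketch only, not the cited systems' actual code.

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference local cost."""
    INF = float("inf")
    n, m = len(a), len(b)
    # dp[i][j] = minimal cost of aligning a[:i] with b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # insertion
                                  dp[i][j - 1],      # deletion
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]

# A genuine retry of a gesture scores much closer than a dissimilar forgery.
genuine = [0.0, 0.5, 1.0, 0.5, 0.0]
retry   = [0.0, 0.4, 1.1, 0.5, 0.1]
forgery = [1.0, 1.0, 1.0, 1.0, 1.0]
assert dtw_distance(genuine, retry) < dtw_distance(genuine, forgery)
```

In a verification setting, a trace would be accepted when its DTW distance to the enrolled template falls below a tuned threshold.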
1708.05071
2746521834
In this paper, we propose to use deep 3-dimensional convolutional networks (3D CNNs) in order to address the challenge of modelling spectro-temporal dynamics for speech emotion recognition (SER). Compared to a hybrid of Convolutional Neural Network and Long-Short-Term-Memory (CNN-LSTM), our proposed 3D CNNs simultaneously extract short-term and long-term spectral features with a moderate number of parameters. We evaluated our proposed and other state-of-the-art methods in a speaker-independent manner using aggregated corpora that give a large and diverse set of speakers. We found that 1) shallow temporal and moderately deep spectral kernels of a homogeneous architecture are optimal for the task; and 2) our 3D CNNs are more effective for spectro-temporal feature learning compared to other methods. Finally, we visualised the feature space obtained with our proposed method using t-distributed stochastic neighbour embedding (T-SNE) and could observe distinct clusters of emotions.
The performance of SER using deep architectures can still be much improved, and an optimal feature set for SER has not yet been found. For example, in @cite_17 @cite_0 @cite_12 , high-level features obtained from off-the-shelf features outperformed conventional methods. However, representation learning using log-spectrogram features did not outperform representation learning using off-the-shelf features - learning such a complex sequential structure of emotional speech appeared to be hard for representation learning @cite_8 .
{ "cite_N": [ "@cite_0", "@cite_8", "@cite_12", "@cite_17" ], "mid": [ "2408520939", "2512449761", "2749894918", "2295001676" ], "abstract": [ "This paper presents a speech emotion recognition system using a recurrent neural network (RNN) model trained by an efficient learning algorithm. The proposed system takes into account the long-range context effect and the uncertainty of emotional label expressions. To extract high-level representation of emotional states with regard to its temporal dynamics, a powerful learning method with a bidirectional long short-term memory (BLSTM) model is adopted. To overcome the uncertainty of emotional labels, such that all frames in the same utterance are mapped into the same emotional label, it is assumed that the label of each frame is regarded as a sequence of random variables. Then, the sequences are trained by the proposed learning algorithm. The weighted accuracy of the proposed emotion recognition system is improved up to 12% compared to the DNN-ELM based emotion recognition system used as a baseline.", "", "One of the challenges in Speech Emotion Recognition (SER) \"in the wild\" is the large mismatch between training and test data (e.g. speakers and tasks). In order to improve the generalisation capabilities of the emotion models, we propose to use Multi-Task Learning (MTL) and use gender and naturalness as auxiliary tasks in deep neural networks. This method was evaluated in within-corpus and various cross-corpus classification experiments that simulate conditions \"in the wild\". In comparison to Single-Task Learning (STL) based state-of-the-art methods, we found that our proposed MTL method improved performance significantly. Particularly, models using both gender and naturalness achieved more gains than those using either gender or naturalness separately.
This benefit was also found in the high-level representations of the feature space, obtained from our proposed method, where discriminative emotional clusters could be observed.", "Abstract Speech emotion recognition is a challenging problem partly because it is unclear what features are effective for the task. In this paper we propose to utilize deep neural networks (DNNs) to extract high level features from raw data and show that they are effective for speech emotion recognition. We first produce an emotion state probability distribution for each speech segment using DNNs. We then construct utterance-level features from segment-level probability distributions. These utterance-level features are then fed into an extreme learning machine (ELM), a special simple and efficient single-hidden-layer neural network, to identify utterance-level emotions. The experimental results demonstrate that the proposed approach effectively learns emotional information from low-level features and leads to 20% relative accuracy improvement compared to the state-of-the-art approaches. Index Terms: Emotion recognition, Deep neural networks, Extreme learning machine. 1. Introduction. Despite the great progress made in artificial intelligence, we are still far from being able to naturally interact with machines, partly because machines do not understand our emotion states. Recently, speech emotion recognition, which aims to recognize emotion states from speech signals, has been drawing increasing attention. Speech emotion recognition is a very challenging task of which extracting effective emotional features is an open question [1, 2]. A deep neural network (DNN) is a feed-forward neural network that has more than one hidden layer between its inputs and outputs. It is capable of learning high-level representation from the raw features and effectively classifying data [3, 4]. With sufficient training data and appropriate training strategies, DNNs perform very well in many machine learning tasks (e.g., speech recognition [5]). Feature analysis in emotion recognition is much less studied than that in speech recognition. Most previous studies empirically chose features for emotion classification. In this study, a DNN takes as input the conventional acoustic features within a speech segment and produces segment-level emotion state probability distributions, from which utterance-level features are constructed and used to determine the utterance-level emotion state. Since the segment-level outputs already provide considerable emotional information and the utterance-level classifica-" ] }
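The DNN-ELM abstract above describes constructing a single utterance-level feature vector from segment-level emotion-state probability distributions. A hedged sketch of one such aggregation follows; the per-class mean/max/min statistics used here are illustrative and may differ from the paper's exact recipe.

```python
# Sketch of utterance-level feature construction from segment-level emotion
# probability distributions, in the spirit of the DNN-ELM pipeline above.
# The choice of statistics (mean, max, min per class) is an assumption.

def utterance_features(segment_probs):
    """segment_probs: list of per-segment probability vectors (one entry per emotion class)."""
    n_classes = len(segment_probs[0])
    feats = []
    for k in range(n_classes):
        col = [p[k] for p in segment_probs]          # class-k probability over segments
        feats += [sum(col) / len(col), max(col), min(col)]  # mean, max, min
    return feats

# Three segments, two emotion classes (e.g. angry vs neutral).
segs = [[0.9, 0.1], [0.7, 0.3], [0.8, 0.2]]
f = utterance_features(segs)
assert len(f) == 6             # 3 statistics per class
assert abs(f[0] - 0.8) < 1e-9  # mean probability of class 0
```

The resulting fixed-length vector can then be fed to any utterance-level classifier (an ELM in the cited work).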
1708.05071
2746521834
In this paper, we propose to use deep 3-dimensional convolutional networks (3D CNNs) in order to address the challenge of modelling spectro-temporal dynamics for speech emotion recognition (SER). Compared to a hybrid of Convolutional Neural Network and Long-Short-Term-Memory (CNN-LSTM), our proposed 3D CNNs simultaneously extract short-term and long-term spectral features with a moderate number of parameters. We evaluated our proposed and other state-of-the-art methods in a speaker-independent manner using aggregated corpora that give a large and diverse set of speakers. We found that 1) shallow temporal and moderately deep spectral kernels of a homogeneous architecture are optimal for the task; and 2) our 3D CNNs are more effective for spectro-temporal feature learning compared to other methods. Finally, we visualised the feature space obtained with our proposed method using t-distributed stochastic neighbour embedding (T-SNE) and could observe distinct clusters of emotions.
CNN-based methods using low-level features were proposed and outperformed off-the-shelf feature-based methods @cite_6 @cite_16 @cite_11 @cite_1 @cite_26 . In @cite_6 @cite_16 @cite_11 , 2D feature maps were composed of spectrogram features with a fine resolution. However, these 2D CNNs cannot model temporal dependency directly; instead, an LSTM must follow them to model temporal dependencies @cite_11 @cite_1 . Moreover, temporal convolutions can extract spectral features from raw wave signals and capture long-term dependencies @cite_1 . Lastly, CNN-LSTM-DNN was proposed to address frequency variations in the spectral domain, long-term dependencies, and separation in the utterance-level feature space for the task of speech recognition @cite_13 . While these methods combine CNNs and LSTMs to handle spectral variations and temporal dynamics, they require a large number of parameters, and it is hard to learn complex dynamics with limited depths. Without these complex memory mechanisms, 3D CNNs can still learn temporal features @cite_21 @cite_7 . In @cite_21 @cite_7 , series of human motions were modelled by 3D CNNs, and it empirically turned out that 3D CNNs are not only effective but also efficient at capturing spatio-temporal features.
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_21", "@cite_1", "@cite_6", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "2765998482", "2952633803", "", "2399733683", "2181741066", "2087618018", "1600744878", "" ], "abstract": [ "Deep architectures using identity skip-connections have demonstrated groundbreaking performance in the field of image classification. Recently, empirical studies suggested that identity skip-connections enable ensemble-like behaviour of shallow networks, and that depth is not a solo ingredient for their success. Therefore, we examine the potential of identity skip-connections for the task of Speech Emotion Recognition (SER) where moderately deep temporal architectures are often employed. To this end, we propose a novel architecture which regulates unimpeded feature flows and captures long-term dependencies via gate-based skip-connections and a memory mechanism. Our proposed architecture is compared to other state-of-the-art methods of SER and is evaluated on large aggregated corpora recorded in different contexts. Our proposed architecture outperforms the state-of-the-art methods by 9 - 15 and achieves an Unweighted Accuracy of 80.5 in an imbalanced class distribution. In addition, we examine a variant adopting simplified skip-connections of Residual Networks (ResNet) and show that gate-based skip-connections are more effective than simplified skip-connections.", "We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. 
Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8% accuracy on the UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.
In this novel work on the so-called end-to-end speech emotion recognition, we show that the use of the proposed topology significantly outperforms the traditional approaches based on signal processing techniques for the prediction of spontaneous and natural emotions on the RECOLA database.", "Speech emotion recognition (SER) is a challenging task since it is unclear what kind of features are able to reflect the characteristics of human emotion from speech. However, traditional feature extractions perform inconsistently for different emotion recognition tasks. Obviously, different spectrogram provides information reflecting difference emotion. This paper proposes a systematical approach to implement an effectively emotion recognition system based on deep convolution neural networks (DCNNs) using labeled training audio data. Specifically, the log-spectrogram is computed and the principle component analysis (PCA) technique is used to reduce the dimensionality and suppress the interferences. Then the PCA whitened spectrogram is split into non-overlapping segments. The DCNN is constructed to learn the representation of the emotion from the segments with labeled training speech data. Our preliminary experiments show the proposed emotion recognition system based on DCNNs (containing 2 convolution and 2 pooling layers) achieves about 40 classification accuracy. Moreover, it also outperforms the SVM based classification using the hand-crafted acoustic features.", "As an essential way of human emotional behavior understanding, speech emotion recognition (SER) has attracted a great deal of attention in human-centered signal processing. Accuracy in SER heavily depends on finding good affect- related , discriminative features. In this paper, we propose to learn affect-salient features for SER using convolutional neural networks (CNN). The training of CNN involves two stages. 
In the first stage, unlabeled samples are used to learn local invariant features (LIF) using a variant of sparse auto-encoder (SAE) with reconstruction penalization. In the second stage, LIF is used as the input to a feature extractor, salient discriminative feature analysis (SDFA), to learn affect-salient, discriminative features using a novel objective function that encourages feature saliency, orthogonality, and discrimination for SER. Our experimental results on benchmark datasets show that our approach leads to stable and robust recognition performance in complex scenes (e.g., with speaker and language variation, and environment distortion) and outperforms several well-established SER features.", "Both Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) have shown improvements over Deep Neural Networks (DNNs) across a wide variety of speech recognition tasks. CNNs, LSTMs and DNNs are complementary in their modeling capabilities, as CNNs are good at reducing frequency variations, LSTMs are good at temporal modeling, and DNNs are appropriate for mapping features to a more separable space. In this paper, we take advantage of the complementarity of CNNs, LSTMs and DNNs by combining them into one unified architecture. We explore the proposed architecture, which we call CLDNN, on a variety of large vocabulary tasks, varying from 200 to 2,000 hours. We find that the CLDNN provides a 4-6% relative improvement in WER over an LSTM, the strongest of the three individual models.
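The C3D abstract above argues that small 3x3x3 kernels capture spatial and temporal context jointly in a single operation. The didactic sketch below implements a "valid" 3-D cross-correlation over a toy (time, depth, frequency) volume to illustrate the idea; real 3D CNN layers are of course implemented with optimized libraries, and the kernel here is an assumption chosen only to make the output easy to check.

```python
import numpy as np

# Minimal "valid" 3-D cross-correlation over a (time, depth, freq) volume.
# A single 3x3x3 kernel mixes temporal and spectral context in one step,
# which is the shape the C3D work above found effective. Sketch only.

def conv3d_valid(x, k):
    T, D, F = x.shape
    t, d, f = k.shape
    out = np.empty((T - t + 1, D - d + 1, F - f + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for l in range(out.shape[2]):
                out[i, j, l] = np.sum(x[i:i + t, j:j + d, l:l + f] * k)
    return out

x = np.ones((5, 4, 6))          # toy spectro-temporal input volume
k = np.ones((3, 3, 3)) / 27.0   # averaging 3x3x3 kernel (hypothetical weights)
y = conv3d_valid(x, k)
assert y.shape == (3, 2, 4)     # valid convolution shrinks each axis by 2
assert np.allclose(y, 1.0)      # averaging a volume of ones gives ones
```

Stacking several such layers (with learned kernels and nonlinearities) yields the spatio-temporal feature hierarchies described in the abstracts.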
1708.05905
2749529790
Internet of Things (IoT) systems have aroused enthusiasm and concerns. Enthusiasm comes from their utility in people's daily lives, and concerns may be associated with privacy issues. By using two IoT systems as case studies, we examine users' privacy beliefs, concerns and attitudes. We focus on four major dimensions: the collection of personal data, the inference of new information, the exchange of information to third parties, and the risk-utility trade-off posed by the features of the system. Altogether, 113 Brazilian individuals answered a survey about such dimensions. Although their perceptions seem to be dependent on the context, there are recurrent patterns. Our results suggest that IoT users can be classified into unconcerned, fundamentalists and pragmatists. Most of them exhibit a pragmatist profile and believe in privacy as a right guaranteed by law. One of the most concerning aspects regarding privacy is the exchange of personal information with third parties. Individuals' perceived risk is negatively correlated with their perceived utility in the features of the system. We discuss practical implications of these results and suggest heuristics to cope with privacy concerns when designing IoT systems.
From a legislative viewpoint, considering the laws in the United States of America, privacy can be defined as ``the right of an individual to be let alone'' @cite_6 . People, in turn, usually associate the word privacy with a diversity of meanings. Some people believe that privacy is the right to control what information about them may be made public @cite_21 @cite_2 @cite_19 . Other people believe that if someone cares about privacy, it is because he or she is involved in wrongdoing @cite_9 . Privacy is also associated with the states of solitude, intimacy, anonymity, and reserve @cite_1 @cite_19 . Solitude means the physical separation from other individuals. Intimacy is some kind of close relationship between individuals within which information is exchanged. Anonymity is the state of freedom from identification and surveillance. Finally, reserve means the creation of psychological protection against intrusion by other unwanted individuals.
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_1", "@cite_6", "@cite_19", "@cite_2" ], "mid": [ "2097254487", "1968874563", "2117283701", "1528664920", "1996458336", "2075398096" ], "abstract": [ "Ubicomp researchers have long argued that privacy is a design issue, and it goes without saying that successful design requires that we understand the desires, concerns, and awareness of the technology's users. Yet, because ubicomp systems are relatively unusual, too little empirical research exists to inform designers about potential users. Complicating design further is the fact that ubicomp systems are typically embedded or invisible, making it difficult for users to know when invisible devices are present and functioning. As early as 1993, ubicomp researchers recognized that embedded technology's unobtrusiveness both belies and contributes to its potential for supporting potentially invasive applications. Not surprisingly, users' inability to see a technology makes it difficult for them to understand how it might affect their privacy. Unobtrusiveness, nevertheless, is a reasonable goal because such systems must minimize the demands on users. To investigate these issues further, I conducted an ethnographic study of what I believe is the first US eldercare facility to use a sensor-rich environment. Our subjects were normal civilians (rather than ubicomp researchers) who lived or worked in a ubiquitous computing environment. We interviewed residents, their family members, and the facility's caregivers and managers. Our questions focused on how people understood both the ubiquitous technology and its effect on their privacy. Although the embedded technology played a central role in how people viewed the environment, they had a limited understanding of the technology, thus raising several privacy, design, and safety issues.", "We examined factors that influence an individual's attitude and decisions about the information handling practices of corporations. 
Results from a survey of 425 consumers suggested that the hypothesized model was an accurate reflection of factors that affect privacy preferences of consumers. The results provide important implications for research and practice. Our study should contribute by initiating an integrative stream of research on the impact of IT and other factors on information privacy perception. For practitioners, our findings suggested that consumers hold corporations, not the IS, responsible for any inappropriate use of personal information. Organizations, therefore, must be proactive in formulating and enforcing information privacy policy in order to address consumers' concerns.", "Digitization of society raises concerns about privacy. This article first describes privacy threats of life-logging. It gives the technically novice reader a quick overview of what information and communication technology (ICT) is currently preparing for society, based on state-of-the-art research in the industry laboratories: ubiquitous computing, aware environments, the Internet of Things, and so on. We explain how geolocation systems work and how they can provide detailed accounts of personal activity that will deeply affect privacy. At present, system designers rarely implement privacy-enhancing technologies — we explain why, based on empirical research. On the other hand, users, while expressing concern, do not protect themselves in practice — we list reasons for this. The problem is complex because the very nature of identity and social relations works against protecting personal data; this is the privacy dilemma. At least two key mechanisms in the production of good interaction and in the construct...", "That the individual shall have full protection in person and in property is a principle as old as the common law; but it has been found necessary from time to time to define anew the exact nature and extent of such protection.
Political, social, and economic changes entail the recognition of new rights, and the common law, in its eternal youth, grows to meet the new demands of society. Thus, in very early times, the law gave a remedy only for physical interference with life and property, for trespasses vi et armis. Then the \"right to life\" served only to protect the subject from battery in its various forms; liberty meant freedom from actual restraint; and the right to property secured to the individual his lands and his cattle. Later, there came a recognition of man's spiritual nature, of his feelings and his intellect. Gradually the scope of these legal rights broadened; and now the right to life has come to mean the right to enjoy life, -the right to be let alone; the right to liberty secures the exercise of extensive civil privileges; and the term \"property\" has grown to comprise every form of possession -intangible, as well as tangible.", "With the rapid diffusion of the Internet, researchers, policy makers, and users have raised concerns about online privacy, although few studies have integrated aspects of usage with psychological and attitudinal aspects of privacy. This study develops a model involving gender, generalized self-efficacy, psychological need for privacy, Internet use experience, Internet use fluency, and beliefs in privacy rights as potential influences on online privacy concerns. Survey responses from 413 college students were analyzed by bivariate correlations, hierarchical regression, and structural equation modeling. Regression results showed that beliefs in privacy rights and a psychological need for privacy were the main influences on online privacy concerns. 
The proposed structural model was not well supported by the data, but a revised model, linking self-efficacy with psychological need for privacy and indicating indirect influences of Internet experience and fluency on online privacy concerns through beliefs in privacy rights, was supported by the data. © 2007 Wiley Periodicals, Inc.", "Abstract This study summarizes the development and validation of a multidimensional privacy orientation scale designed for measuring privacy attitudes of Social Network Site (SNS) users. Findings confirm the existence of four dimensions: (1) belief in the value of “privacy as a right”; (2) “other-contingent privacy”; (3) “concern about own informational privacy” and (4) “concern about privacy of others.” Moreover, a segmentation of SNS users based on these attitude scores reveals three types of users: (1) privacy advocates, who are concerned about both their own and other people’s privacy; (2) privacy individualists, who are concerned mostly about their own privacy, and (3) privacy indifferents, whose scores on all dimensions are lower than those of the other segments. The results indicate that the four privacy orientation dimensions and three user segments predict key differences in terms of privacy protective behavior, information disclosure, and viewing personal information of others." ] }
1708.05905
2749529790
Internet of Things (IoT) systems have aroused enthusiasm and concerns. Enthusiasm comes from their utility in people's daily lives, and concerns may be associated with privacy issues. By using two IoT systems as case studies, we examine users' privacy beliefs, concerns and attitudes. We focus on four major dimensions: the collection of personal data, the inference of new information, the exchange of information to third parties, and the risk-utility trade-off posed by the features of the system. Altogether, 113 Brazilian individuals answered a survey about such dimensions. Although their perceptions seem to be dependent on the context, there are recurrent patterns. Our results suggest that IoT users can be classified into unconcerned, fundamentalists and pragmatists. Most of them exhibit a pragmatist profile and believe in privacy as a right guaranteed by law. One of the most concerning aspects regarding privacy is the exchange of personal information with third parties. Individuals' perceived risk is negatively correlated with their perceived utility in the features of the system. We discuss practical implications of these results and suggest heuristics to cope with privacy concerns when designing IoT systems.
In information and communications technology (ICT), the concept of privacy is usually associated with the degree of control over the flow of personal information @cite_15 . In this context, people relate privacy to their level of control over the collection of personal information, the usage of the collected information, and the third parties that can access the information, such as relatives, friends, hierarchical superiors, and government agencies @cite_32 @cite_15 @cite_33 .
{ "cite_N": [ "@cite_15", "@cite_32", "@cite_33" ], "mid": [ "1550170638", "2529920474", "2110281148" ], "abstract": [ "This study investigates the composition of consumers' security and privacy perceptions of mobile commerce (m-commerce) and the factors shaping these security and privacy perceptions. Based on literature review, we examined the effect of eight determinants: information type, information collection, secondary use of information, error, unauthorized access, location awareness, information transfer, and personalization; on security and privacy concerns in the m-commerce context. Analysis of data from 141 respondents revealed three dimensions for the security and privacy perception construct. Hence, three models were tested to address the impacts of these factors on the three dimensions: consumers' confidence of information control, concerns on third party, and the awareness of information protection in the m-commerce context. The study has implications for professionals to meet the consumers' requirements and expectations on security and privacy for m-commerce.", "", "Privacy concerns are identified as one of the main factors that have a negative impact on Internet users' online behaviour. Often, Internet users do not have confidence that a web site will ensure their privacy, either in the collection or in the future usage of their personal information. In this article we propose a categorization of factors that can influence users' privacy perception during their online activity. Furthermore, we report on a research model for Internet users' privacy perception, and a pilot study performed among online shopping and Internet banking users." ] }
1708.05905
2749529790
Internet of Things (IoT) systems have aroused both enthusiasm and concerns. Enthusiasm comes from their utility in people's daily lives, and concerns may be associated with privacy issues. Using two IoT systems as case studies, we examine users' privacy beliefs, concerns and attitudes. We focus on four major dimensions: the collection of personal data, the inference of new information, the exchange of information with third parties, and the risk-utility trade-off posed by the features of the system. Altogether, 113 Brazilian individuals answered a survey about these dimensions. Although their perceptions seem to be dependent on the context, there are recurrent patterns. Our results suggest that IoT users can be classified into unconcerned, fundamentalists and pragmatists. Most of them exhibit a pragmatist profile and believe in privacy as a right guaranteed by law. One of the most privacy-concerning aspects is the exchange of personal information with third parties. Individuals' perceived risk is negatively correlated with their perceived utility in the features of the system. We discuss practical implications of these results and suggest heuristics to cope with privacy concerns when designing IoT systems.
Concerns about privacy usually arise from unauthorized collection of personal data, unauthorized secondary use of the data, errors in personal data, and improper access to personal data @cite_22 . People's concerns are indeed associated with the possible consequences that these occurrences may have on their lives. Two relevant theoretical constructs that explore this view are face keeping and information boundary.
{ "cite_N": [ "@cite_22" ], "mid": [ "1545190392" ], "abstract": [ "Information privacy has been called one of the most important ethical issues of the information age. Public opinion polls show rising levels of concern about privacy among Americans. Against this backdrop, research into issues associated with information privacy is increasing. Based on a number of preliminary studies, it has become apparent that organizational practices, individuals' perceptions of these practices, and societal responses are inextricably linked in many ways. Theories regarding these relationships are slowly emerging. Unfortunately, researchers attempting to examine such relationships through confirmatory empirical approaches may be impeded by the lack of validated instruments for measuring individuals' concerns about organizational information privacy practices. To enable future studies in the information privacy research stream, we developed and validated an instrument that identifies and measures the primary dimensions of individuals' concerns about organizational information privacy practices. The development process included examinations of privacy literature; experience surveys and focus groups; and the use of expert judges. The result was a parsimonious 15-item instrument with four subscales tapping into dimensions of individuals' concerns about organizational information privacy practices. The instrument was rigorously tested and validated across several heterogeneous populations, providing a high degree of confidence in the scales' validity, reliability, and generalizability." ] }
1708.05870
2748257817
We address a fundamental question in wireless networks that, surprisingly, has not been studied before: what is the maximum density of concurrently active links that satisfy a certain outage constraint? We call this quantity the spatial outage capacity (SOC), give a rigorous definition, and analyze it for Poisson bipolar networks with ALOHA. Specifically, we provide exact analytical and approximate expressions for the density of links satisfying an outage constraint and give simple upper and lower bounds on the SOC. In the high-reliability regime where the target outage probability is close to zero, we obtain an exact closed-form expression of the SOC, which reveals the interesting and perhaps counter-intuitive result that all transmitters need to be always active to achieve the SOC, i.e., the transmit probability needs to be set to 1 to achieve the SOC.
For Poisson bipolar networks, the mean success probability @math is calculated in @cite_1 and @cite_12 . For ad hoc networks modeled by the Poisson point process (PPP), the link success probability @math is studied in @cite_9 , where the focus is on the mean local delay, i.e., the @math st moment of @math in our notation. The notion of the transmission capacity (TC) is introduced in @cite_11 , which is defined as the maximum density of successful transmissions provided the outage probability of the typical user stays below a predefined threshold @math . While the results obtained in @cite_11 are certainly important, the TC does not represent the maximum density of successful transmissions for the target outage probability, as claimed in @cite_11 , since the metric implicitly assumes that each link in a realization of the network is typical.
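For concreteness, the TC of @cite_11 can be written as the product of the maximum admissible transmitter density and the fraction of successful links; the sketch below uses the standard bipolar-network notation (SIR threshold θ, outage constraint ε) rather than the @math placeholders of this record:

```latex
% Transmission capacity at target outage probability \epsilon:
% \lambda(\epsilon) is the largest density whose typical-link outage
% stays below \epsilon, and the factor (1-\epsilon) discounts failed links.
\mathrm{TC}(\epsilon) \;=\; \lambda(\epsilon)\,(1-\epsilon),
\qquad
\lambda(\epsilon) \;=\; \max\bigl\{\lambda : \mathbb{P}(\mathrm{SIR} < \theta) \le \epsilon\bigr\}.
```

Since the outage probability here is that of the typical link only, the TC constrains the spatial average of the link outages, which is exactly the limitation the passage above points out.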
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_12", "@cite_11" ], "mid": [ "2168164497", "2106334285", "2132987440", "2095796369" ], "abstract": [ "We study a slotted version of the Aloha Medium Access (MAC) protocol in a Mobile Ad-hoc Network (MANET). Our model features transmitters randomly located in the Euclidean plane, according to a Poisson point process and a set of receivers representing the next-hop from every transmitter. We concentrate on the so-called outage scenario, where a successful transmission requires a Signal-to-Interference-and-Noise Ratio (SINR) larger than some threshold. We analyze the local delays in such a network, namely the number of time slots required for nodes to transmit a packet to their prescribed next-hop receivers. The analysis depends very much on the receiver scenario and on the variability of the fading. In most cases, each node has finite-mean geometric random delay and thus a positive next hop throughput. However, the spatial (or large population) averaging of these individual finite mean-delays leads to infinite values in several practical cases, including the Rayleigh fading and positive thermal noise case. In some cases it exhibits an interesting phase transition phenomenon where the spatial average is finite when certain model parameters (receiver distance, thermal noise, Aloha medium access probability) are below a threshold and infinite above. To the best of our knowledge, this phenomenon, which we propose to call the wireless contention phase transition, has not been discussed in the literature. We comment on the relationships between the above facts and the heavy tails found in the so-called \"RESTART\" algorithm. We argue that the spatial average of the mean local delays is infinite primarily because of the outage logic, where one transmits full packets at time slots when the receiver is covered at the required SINR and where one wastes all the other time slots. 
This results in the \"RESTART\" mechanism, which in turn explains why we have infinite spatial average. Adaptive coding offers another nice way of breaking the outage RESTART logic. We show examples where the average delays are finite in the adaptive coding case, whereas they are infinite in the outage case.", "The evaluation of optimum transmission ranges in a packet radio network in a fading and shadowing environment is considered. It is shown that the optimal probability of transmission of each user is independent of the system model and is p_o ≃ 0.271. The optimum range should be chosen so that on the average there are χ(G/b)^{2/η} terminals closer to the transmitter than the receiver, where G is the spread spectrum processing gain, b is the outage signal-to-noise ratio threshold, η is the power loss factor and χ depends on the system parameters and the propagation model. The performance index is given in terms of the optimal normalized expected progress per slot, given by ϑ(G/b)^{1/η} where ϑ is proportional to the square root of χ. A comparison with the results obtained by using deterministic propagation models shows, for typical values of fading and shadowing parameters, a reduction up to 40% of the performance index.", "An Aloha-type access control mechanism for large mobile, multihop, wireless networks is defined and analyzed. This access scheme is designed for the multihop context, where it is important to find a compromise between the spatial density of communications and the range of each transmission. More precisely, the analysis aims at optimizing the product of the number of simultaneously successful transmissions per unit of space (spatial reuse) by the average range of each transmission. The optimization is obtained via an averaging over all Poisson configurations for the location of interfering mobiles, where an exact evaluation of signal over noise ratio is possible. 
The main mathematical tools stem from stochastic geometry and are spatial versions of the so-called additive and max shot noise processes. The resulting medium access control (MAC) protocol exhibits some interesting properties. First, it can be implemented in a decentralized way provided some local geographic information is available to the mobiles. In addition, its transport capacity is proportional to the square root of the density of mobiles which is the upper bound of Gupta and Kumar. Finally, this protocol is self-adapting to the node density and it does not require prior knowledge of this density.", "In this paper, upper and lower bounds on the transmission capacity of spread-spectrum (SS) wireless ad hoc networks are derived. We define transmission capacity as the product of the maximum density of successful transmissions multiplied by their data rate, given an outage constraint. Assuming that the nodes are randomly distributed in space according to a Poisson point process, we derive upper and lower bounds for frequency hopping (FH-CDMA) and direct sequence (DS-CDMA) SS networks, which incorporate traditional modulation types (no spreading) as a special case. These bounds cleanly summarize how ad hoc network capacity is affected by the outage probability, spreading factor, transmission power, target signal-to-noise ratio (SNR), and other system parameters. Using these bounds, it can be shown that FH-CDMA obtains a higher transmission capacity than DS-CDMA on the order of M^{1-2/α}, where M is the spreading factor and α > 2 is the path loss exponent. A tangential contribution is an (apparently) novel technique for obtaining tight bounds on tail probabilities of additive functionals of homogeneous Poisson point processes." ] }
1708.05870
2748257817
We address a fundamental question in wireless networks that, surprisingly, has not been studied before: what is the maximum density of concurrently active links that satisfy a certain outage constraint? We call this quantity the spatial outage capacity (SOC), give a rigorous definition, and analyze it for Poisson bipolar networks with ALOHA. Specifically, we provide exact analytical and approximate expressions for the density of links satisfying an outage constraint and give simple upper and lower bounds on the SOC. In the high-reliability regime where the target outage probability is close to zero, we obtain an exact closed-form expression of the SOC, which reveals the interesting and perhaps counter-intuitive result that all transmitters need to be always active to achieve the SOC, i.e., the transmit probability needs to be set to 1 to achieve the SOC.
A version of the TC based on the link success probability distribution is introduced in @cite_8 , but it does not consider a MAC scheme, i.e., all nodes always transmit ( @math ). The choice of @math is important as it greatly affects the link success probability distribution as shown in Fig. . In this paper, we consider the general case with the transmit probability @math .
{ "cite_N": [ "@cite_8" ], "mid": [ "2545435035" ], "abstract": [ "In this paper we consider a network where the nodes locations are modeled by a realization of a Poisson point process and remains fixed or changes very slowly over time. Most of the literature focuses on the spatial average of the link outage probabilities. But each link in the network has an associated link-outage probability that depends on the fading, path loss, and the relative locations of the interfering nodes. Since the node locations are random, the outage probability of each link is a random variable and in this paper we obtain its distribution, instead of just the spatial average. This work supplements the existing results which focus mainly on the average outage probability averaged over space. We propose a new notion of transmission capacity (TC) based on the outage distribution, and provide asymptotically tight bounds for the TC." ] }
1708.05870
2748257817
We address a fundamental question in wireless networks that, surprisingly, has not been studied before: what is the maximum density of concurrently active links that satisfy a certain outage constraint? We call this quantity the spatial outage capacity (SOC), give a rigorous definition, and analyze it for Poisson bipolar networks with ALOHA. Specifically, we provide exact analytical and approximate expressions for the density of links satisfying an outage constraint and give simple upper and lower bounds on the SOC. In the high-reliability regime where the target outage probability is close to zero, we obtain an exact closed-form expression of the SOC, which reveals the interesting and perhaps counter-intuitive result that all transmitters need to be always active to achieve the SOC, i.e., the transmit probability needs to be set to 1 to achieve the SOC.
The meta distribution @math for Poisson bipolar networks with ALOHA and cellular networks is calculated in @cite_10 , where a closed-form expression for the moments of @math is obtained, and an exact integral expression and simple bounds on @math are provided. A key result in @cite_10 is that, for constant transmitter density @math , as the Poisson bipolar network becomes very dense ( @math ) with a very small transmit probability ( @math ), the disparity among link success probabilities vanishes and all links have the same success probability, which is the mean success probability @math . For the Poisson cellular network, the meta distribution of the SIR is calculated for the downlink and uplink scenarios with fractional power control in @cite_14 , with base station cooperation in @cite_5 , and for D2D networks underlaying the cellular network (downlink) in @cite_3 . Furthermore, the meta distribution of the SIR is calculated for millimeter-wave D2D networks in @cite_15 and for D2D networks with interference cancellation in @cite_6 .
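As background for this record, the meta distribution studied in @cite_10 is the distribution of the conditional link success probability given the point process; a sketch of the standard definitions (symbols below are the conventional ones, SIR threshold θ and point process Φ, not fixed by the record itself):

```latex
% Conditional success probability of the typical link, given the point process \Phi:
P_s(\theta) \;=\; \mathbb{P}\bigl(\mathrm{SIR} > \theta \mid \Phi\bigr)
% Meta distribution: the fraction of links that achieve reliability at least x.
\bar{F}_{P_s}(x) \;=\; \mathbb{P}\bigl(P_s(\theta) > x\bigr), \qquad x \in [0,1]
% Moments of P_s(\theta); M_1 is the mean success probability, and the
% meta distribution can be recovered from the imaginary moments M_{jt}
% via Gil-Pelaez inversion.
M_b \;=\; \mathbb{E}\bigl[P_s(\theta)^b\bigr]
```

The closed-form moments mentioned above are the @math quantities whose bounds and integral expression @cite_10 provides.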
{ "cite_N": [ "@cite_14", "@cite_3", "@cite_6", "@cite_5", "@cite_15", "@cite_10" ], "mid": [ "2963262497", "2605261135", "2755230429", "2769047447", "", "1640283668" ], "abstract": [ "The meta distribution of the signal-to-interference ratio (SIR) provides fine-grained information about the performance of individual links in a wireless network. This paper focuses on the analysis of the meta distribution of the SIR for both the cellular network uplink and downlink with fractional power control. For the uplink scenario, an approximation of the interfering user point process with a non-homogeneous Poisson point process is used. The moments of the meta distribution for both scenarios are calculated. Some bounds, the analytical expression, the mean local delay, and the beta approximation of the meta distribution are provided. The results give interesting insights into the effect of the power control in both the uplink and downlink. Detailed simulations show that the approximations made in the analysis are well justified.", "We study the performance of device-to-device (D2D) communication underlaying cellular wireless network in terms of the meta distribution of the signal-to-interference ratio (SIR), which is the distribution of the conditional SIR distribution given the locations of the wireless nodes. Modeling D2D transmitters and base stations as Poisson point processes (PPPs), moments of the conditional SIR distribution are derived in order to calculate analytical expressions for the meta distribution and the mean local delay of the typical D2D receiver and cellular downlink user. It turns out that for D2D users, the total interference from the D2D interferers and base stations is equal in distribution to that of a single PPP, while for downlink users, the effect of the interference from the D2D network is more complicated. 
We also derive the region of transmit probabilities for the D2D users and base stations that result in a finite mean local delay and give a simple inner bound on that region. Finally, the impact of increasing the base station density on the mean local delay, the meta distribution, and the density of users reliably served is investigated with numerical results.", "This letter presents a theoretical framework for the analysis of the meta distribution of the SIR for Poisson networks with interference cancellation (IC) enabled at the receivers, which gives deep insight into the network performance on a link-wise basis. A simple but insightful IC model named C-IC is studied for which the exact @math th moment of the meta distribution and its beta distribution approximation and some useful bounds are validated. The conditions for the mean local delay to be finite are also stated. The results show that IC improves the performance not only in terms of the mean but also in terms of the variance of the meta distribution.", "The meta distribution provides fine-grained information on the signal-to-interference ratio (SIR) compared with the SIR distribution at the typical user. This paper first derives the meta distribution of the SIR in heterogeneous cellular networks with downlink coordinated multipoint transmission reception, including joint transmission (JT), dynamic point blanking (DPB), and dynamic point selection dynamic point blanking (DPS DPB), for the general typical user and the worst-case user (the typical user located at the Voronoi vertex in a single-tier network). A more general scheme called JT-DPB, which is the combination of JT and DPB, is studied. The moments of the conditional success probability are derived for the calculation of the meta distribution and the mean local delay. An exact analytical expression, the beta approximation, and simulation results of the meta distribution are provided. 
From the theoretical results, we gain insights on the benefits of different cooperation schemes and the impact of the number of cooperating base stations and other network parameters.", "", "The calculation of the SIR distribution at the typical receiver (or, equivalently, the success probability of transmissions over the typical link) in Poisson bipolar and cellular networks with Rayleigh fading is relatively straightforward, but it only provides limited information on the success probabilities of the individual links. This paper focuses on the meta distribution of the SIR, which is the distribution of the conditional success probability @math given the point process, and provides bounds, an exact analytical expression, and a simple approximation for it. The meta distribution provides fine-grained information on the SIR and answers questions such as “What fraction of users in a Poisson cellular network achieve 90 link reliability if the required SIR is 5 dB?” Interestingly, in the bipolar model, if the transmit probability @math is reduced while increasing the network density @math such that the density of concurrent transmitters @math stays constant as @math , @math degenerates to a constant, i.e., all links have exactly the same success probability in the limit, which is the one of the typical link. In contrast, in the cellular case, if the interfering base stations are active independently with probability @math , the variance of @math approaches a non-zero constant when @math is reduced to 0 while keeping the mean success probability constant." ] }
1708.05768
2750396241
We consider the analysis of high dimensional data given in the form of a matrix with columns consisting of observations and rows consisting of features. Often the data is such that the observations do not reside on a regular grid, and the given order of the features is arbitrary and does not convey a notion of locality. Therefore, traditional transforms and metrics cannot be used for data organization and analysis. In this paper, our goal is to organize the data by defining an appropriate representation and metric such that they respect the smoothness and structure underlying the data. We also aim to generalize the joint clustering of observations and features in the case the data does not fall into clear disjoint groups. For this purpose, we propose multiscale data-driven transforms and metrics based on trees. Their construction is implemented in an iterative refinement procedure that exploits the co-dependencies between features and observations. Beyond the organization of a single dataset, our approach enables us to transfer the organization learned from one dataset to another and to integrate several datasets together. We present an application to breast cancer gene expression analysis: learning metrics on the genes to cluster the tumor samples into cancer sub-types and validating the joint organization of both the genes and the samples. We demonstrate that using our approach to combine information from multiple gene expression cohorts, acquired by different profiling technologies, improves the clustering of tumor samples.
This work is also related to the matrix factorization proposed by @cite_10 , where the graph Laplacians of both the features and the observations regularize the decomposition of a dataset into a low-rank matrix and a sparse matrix representing noise. Then the observations are clustered using k-means on the low-dimensional principal components of the smooth low-rank matrix. Our work differs in that we perform an iterative embedding of the observations and features, not jointly, but alternating between the two while updating the graph Laplacian of each in turn. In addition, we provide a clustering of the data.
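The alternating feature/observation embedding described in this record can be illustrated with a minimal, hypothetical sketch: plain Gaussian-kernel diffusion embeddings stand in for the authors' tree-based transforms, and all function names and parameter choices here are ours, not taken from the paper.

```python
import numpy as np

def affinity(Y):
    """Gaussian-kernel affinity matrix with a median-distance bandwidth."""
    D = np.square(Y[:, None, :] - Y[None, :, :]).sum(-1)
    eps = np.median(D[D > 0]) + 1e-12
    return np.exp(-D / eps)

def diffusion_embedding(A, dim=2):
    """Embed via the leading non-trivial eigenvectors of the row-normalized kernel."""
    P = A / A.sum(axis=1, keepdims=True)      # row-stochastic diffusion operator
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    idx = order[1:dim + 1]                    # skip the trivial constant eigenvector
    return vecs.real[:, idx] * vals.real[idx]

def alternating_organization(X, n_iter=3, dim=2):
    """Alternately embed features (rows) and observations (columns),
    re-representing each side in the other's current embedding."""
    R, C = X, X.T
    for _ in range(n_iter):
        phi_rows = diffusion_embedding(affinity(R), dim)  # feature embedding
        C = X.T @ phi_rows     # observations described over embedded features
        phi_cols = diffusion_embedding(affinity(C), dim)  # observation embedding
        R = X @ phi_cols       # features described over embedded observations
    return phi_rows, phi_cols
```

Each pass re-measures distances on one side using the other side's current embedding, which is the co-dependency exploited by the iterative refinement described above.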
{ "cite_N": [ "@cite_10" ], "mid": [ "2161041254" ], "abstract": [ "Mining useful clusters from high dimensional data has received significant attention of the computer vision and pattern recognition community in the recent years. Linear and nonlinear dimensionality reduction has played an important role to overcome the curse of dimensionality. However, often such methods are accompanied by three different problems: high computational complexity (usually associated with the nuclear norm minimization), nonconvexity (for matrix factorization methods), and susceptibility to gross corruptions in the data. In this paper, we propose a principal component analysis (PCA) based solution that overcomes these three issues and approximates a low-rank recovery method for high dimensional datasets. We target the low-rank recovery by enforcing two types of graph smoothness assumptions, one on the data samples and the other on the features by designing a convex optimization problem. The resulting algorithm is fast, efficient, and scalable for huge datasets with O(n log(n)) computational complexity in the number of data samples. It is also robust to gross corruptions in the dataset as well as to the model parameters. Clustering experiments on 7 benchmark datasets with different types of corruptions and background separation experiments on 3 video datasets show that our proposed model outperforms 10 state-of-the-art dimensionality reduction models. Our theoretical analysis proves that the proposed model is able to recover approximate low-rank representations with a bounded error for clusterable data." ] }
1708.05732
2746990488
Inter-connected objects, either via public or private networks, are the near future of modern societies. Such inter-connected objects are referred to as Internet-of-Things (IoT) and/or Cyber-Physical Systems (CPS). One example of such a system is based on Unmanned Aerial Vehicles (UAVs). Fleets of such vehicles are prophesied to take on multiple roles, ranging from the mundane to the highly sensitive: from prompt pizza or shopping deliveries to your home to battlefield deployment for reconnaissance and combat missions. Drones, as we refer to UAVs in this paper, can operate either individually (solo missions) or as part of a fleet (group missions), with and without constant connection with the base station. The base station acts as the command centre to manage the activities of the drones. However, an independent, localised and effective fleet control is required, potentially based on swarm intelligence, for the following reasons: 1) the increase in the number of drone fleets, 2) the number of drones in a fleet might be multiples of tens, 3) time-criticality in making decisions by such fleets in the wild, 4) potential communication congestion and lag, and 5) in some cases, operation in challenging terrains that hinder or mandate limited communication with the control centre (i.e., operations spanning long periods of time or military usage of such fleets in enemy territory). Such a self-aware, mission-focused and independent fleet of drones would potentially utilise swarm intelligence for a) air-traffic and/or flight control management, b) obstacle avoidance, c) self-preservation while maintaining the mission criteria, d) collaboration with other fleets in the wild (autonomously), and e) assuring the security, privacy and safety of physical (the drones themselves) and virtual (data, software) assets. In this paper, we investigate the challenges faced by fleets of drones and propose a potential course of action on how to overcome them.
The swarm intelligence paradigm has been used to optimise and control single UAVs. In @cite_0 , autonomous path planning for a single vehicle is learned from a small number of example demonstrations.
{ "cite_N": [ "@cite_0" ], "mid": [ "1980969546" ], "abstract": [ "Autonomous navigation for large Unmanned Aerial Vehicles (UAVs) is fairly straight-forward, as expensive sensors and monitoring devices can be employed. In contrast, obstacle avoidance remains a challenging task for Micro Aerial Vehicles (MAVs) which operate at low altitude in cluttered environments. Unlike large vehicles, MAVs can only carry very light sensors, such as cameras, making autonomous navigation through obstacles much more challenging. In this paper, we describe a system that navigates a small quadrotor helicopter autonomously at low altitude through natural forest environments. Using only a single cheap camera to perceive the environment, we are able to maintain a constant velocity of up to 1.5 m/s. Given a small set of human pilot demonstrations, we use recent state-of-the-art imitation learning techniques to train a controller that can avoid trees by adapting the MAV's heading. We demonstrate the performance of our system in a more controlled environment indoors, and in real natural forest environments outdoors." ] }
1708.05732
2746990488
Inter-connected objects, either via public or private networks, are the near future of modern societies. Such inter-connected objects are referred to as Internet-of-Things (IoT) and/or Cyber-Physical Systems (CPS). One example of such a system is based on Unmanned Aerial Vehicles (UAVs). Fleets of such vehicles are prophesied to take on multiple roles, ranging from the mundane to the highly sensitive: from prompt pizza or shopping deliveries to your home to battlefield deployment for reconnaissance and combat missions. Drones, as we refer to UAVs in this paper, can operate either individually (solo missions) or as part of a fleet (group missions), with and without constant connection with the base station. The base station acts as the command centre to manage the activities of the drones. However, an independent, localised and effective fleet control is required, potentially based on swarm intelligence, for the following reasons: 1) the increase in the number of drone fleets, 2) the number of drones in a fleet might be multiples of tens, 3) time-criticality in making decisions by such fleets in the wild, 4) potential communication congestion and lag, and 5) in some cases, operation in challenging terrains that hinder or mandate limited communication with the control centre (i.e., operations spanning long periods of time or military usage of such fleets in enemy territory). Such a self-aware, mission-focused and independent fleet of drones would potentially utilise swarm intelligence for a) air-traffic and/or flight control management, b) obstacle avoidance, c) self-preservation while maintaining the mission criteria, d) collaboration with other fleets in the wild (autonomously), and e) assuring the security, privacy and safety of physical (the drones themselves) and virtual (data, software) assets. In this paper, we investigate the challenges faced by fleets of drones and propose a potential course of action on how to overcome them.
In @cite_8 , three-dimensional path planning for a single drone is performed using a bat-inspired algorithm to determine suitable points in space, with B-spline curves applied to improve the smoothness of the resulting path.
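The B-spline smoothing step mentioned in this record can be sketched with SciPy; the waypoints below are invented for illustration, and `s` is `splprep`'s smoothing factor trading fidelity against smoothness (this is a generic smoothing sketch, not the IBA algorithm of the cited paper).

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical piecewise-linear 3-D waypoints from a path planner (illustrative only)
waypoints = np.array([[0.0, 0.0, 0.0],
                      [1.0, 2.0, 1.0],
                      [3.0, 3.0, 2.0],
                      [5.0, 2.0, 2.0],
                      [6.0, 0.0, 3.0]])

# Fit a cubic (k=3) smoothing B-spline through the waypoints
tck, u = splprep(waypoints.T, s=0.5, k=3)

# Resample the smooth curve densely, e.g. for flight-controller tracking
u_fine = np.linspace(0.0, 1.0, 100)
smooth_path = np.array(splev(u_fine, tck)).T    # shape (100, 3)
```

Increasing `s` yields a smoother but less faithful curve; `s=0` would interpolate the waypoints exactly.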
{ "cite_N": [ "@cite_8" ], "mid": [ "2196839768" ], "abstract": [ "Abstract As a challenging high dimension optimization problem, three-dimensional path planning for Uninhabited Combat Air Vehicles (UCAV) mainly centralizes on optimizing the flight route with different types of constraints under complicated combating environments. An improved version of Bat Algorithm (BA) in combination with a Differential Evolution (DE), namely IBA, is proposed to optimize the UCAV three-dimensional path planning problem for the first time. In IBA, DE is required to select the most suitable individual in the bat population. By connecting the selected nodes using the proposed IBA, a safe path is successfully obtained. In addition, B-Spline curves are employed to smoothen the path obtained further and make it practically more feasible for UCAV. The performance of IBA is compared to that of the basic BA on a 3-D UCAV path planning problem. The experimental results demonstrate that IBA is a better technique for UCAV three-dimensional path planning problems compared to the basic BA model." ] }
1708.05732
2746990488
Inter-connected objects, either via public or private networks are the near future of modern societies. Such inter-connected objects are referred to as Internet-of-Things (IoT) and or Cyber-Physical Systems (CPS). One example of such a system is based on Unmanned Aerial Vehicles (UAVs). The fleet of such vehicles are prophesied to take on multiple roles involving mundane to high-sensitive, such as, prompt pizza or shopping deliveries to your homes to battlefield deployment for reconnaissance and combat missions. Drones, as we refer to UAVs in this paper, either can operate individually (solo missions) or part of a fleet (group missions), with and without constant connection with the base station. The base station acts as the command centre to manage the activities of the drones. However, an independent, localised and effective fleet control is required, potentially based on swarm intelligence, for the reasons: 1) increase in the number of drone fleets, 2) number of drones in a fleet might be multiple of tens, 3) time-criticality in making decisions by such fleets in the wild, 4) potential communication congestions lag, and 5) in some cases working in challenging terrains that hinders or mandates-limited communication with control centre (i.e., operations spanning long period of times or military usage of such fleets in enemy territory). This self-ware, mission-focused and independent fleet of drones that potential utilises swarm intelligence for a) air-traffic and or flight control management, b) obstacle avoidance, c) self-preservation while maintaining the mission criteria, d) collaboration with other fleets in the wild (autonomously) and e) assuring the security, privacy and safety of physical (drones itself) and virtual (data, software) assets. In this paper, we investigate the challenges faced by fleet of drones and propose a potential course of action on how to overcome them.
In @cite_35 , the authors introduced and validated a decentralised architecture for search and rescue missions in ground-based robot groups of different sizes. The architecture accounts for limited communication with a command centre and employs distributed communication.
{ "cite_N": [ "@cite_35" ], "mid": [ "1994253176" ], "abstract": [ "Multi-robot systems (MRS) may be very useful on assisting humans in many distributed activities, especially for search and rescue (SaR) missions in hazardous scenarios. However, there is a lack of full distributed solutions, addressing the advantages and limitations along different aspects of team operation, like communication requirements or scalability. In this paper, the effects of distributed group configurations are studied and results are drawn from collective exploration and collective inspection tasks in SaR scenarios, with the aim of understanding the main tradeoffs, and distilling design guidelines of collective architectures. With this purpose, extensive simulation experiments of MRS in a SaR scenario were carried out." ] }
1708.05732
2746990488
Inter-connected objects, either via public or private networks are the near future of modern societies. Such inter-connected objects are referred to as Internet-of-Things (IoT) and or Cyber-Physical Systems (CPS). One example of such a system is based on Unmanned Aerial Vehicles (UAVs). The fleet of such vehicles are prophesied to take on multiple roles involving mundane to high-sensitive, such as, prompt pizza or shopping deliveries to your homes to battlefield deployment for reconnaissance and combat missions. Drones, as we refer to UAVs in this paper, either can operate individually (solo missions) or part of a fleet (group missions), with and without constant connection with the base station. The base station acts as the command centre to manage the activities of the drones. However, an independent, localised and effective fleet control is required, potentially based on swarm intelligence, for the reasons: 1) increase in the number of drone fleets, 2) number of drones in a fleet might be multiple of tens, 3) time-criticality in making decisions by such fleets in the wild, 4) potential communication congestions lag, and 5) in some cases working in challenging terrains that hinders or mandates-limited communication with control centre (i.e., operations spanning long period of times or military usage of such fleets in enemy territory). This self-ware, mission-focused and independent fleet of drones that potential utilises swarm intelligence for a) air-traffic and or flight control management, b) obstacle avoidance, c) self-preservation while maintaining the mission criteria, d) collaboration with other fleets in the wild (autonomously) and e) assuring the security, privacy and safety of physical (drones itself) and virtual (data, software) assets. In this paper, we investigate the challenges faced by fleet of drones and propose a potential course of action on how to overcome them.
In @cite_1 , the authors achieved area coverage for surveillance in a fleet of drones (FoD) using visual relative localisation to keep formation autonomously.
{ "cite_N": [ "@cite_1" ], "mid": [ "1993688584" ], "abstract": [ "An algorithm for autonomous deployment of groups of Micro Aerial Vehicles (MAVs) in the cooperative surveillance task is presented in this paper. The algorithm enables to find a proper distributions of all MAVs in surveillance locations together with feasible and collision free trajectories from their initial position. The solution of the MAV-group deployment satisfies motion constraints of MAVs, environment constraints (non-fly zones) and constraints imposed by a visual onboard relative localization. The onboard relative localization, which is used for stabilization of the group flying in a compact formation, acts as an enabling technique for utilization of MAVs in situations where an external local system is not available or lacks the sufficient precision." ] }
1708.05894
2745765770
Sepsis is a poorly understood and potentially life-threatening complication that can occur as a result of infection. Early detection and treatment improves patient outcomes, and as such it poses an important challenge in medicine. In this work, we develop a flexible classifier that leverages streaming lab results, vitals, and medications to predict sepsis before it occurs. We model patient clinical time series with multi-output Gaussian processes, maintaining uncertainty about the physiological state of a patient while also imputing missing values. The mean function takes into account the effects of medications administered on the trajectories of the physiological variables. Latent function values from the Gaussian process are then fed into a deep recurrent neural network to classify patient encounters as septic or not, and the overall model is trained end-to-end using back-propagation. We train and validate our model on a large dataset of 18 months of heterogeneous inpatient stays from the Duke University Health System, and develop a new "real-time" validation scheme for simulating the performance of our model as it will actually be used. Our proposed method substantially outperforms clinical baselines, and improves on a previous related model for detecting sepsis. Our model's predictions will be displayed in a real-time analytics dashboard to be used by a sepsis rapid response team to help detect and improve treatment of sepsis.
There are many previously published early warning scores for predicting clinical deterioration or other related outcomes. For instance, the NEWS score ( @cite_13 ) and the MEWS score ( @cite_25 ) are two of the more common scores used to assess overall deterioration. The SIRS score for systemic inflammatory response syndrome was commonly used to screen for sepsis in the past ( @cite_10 ), although in recent years it has been phased out in favour of scores designed specifically for sepsis, such as SOFA ( @cite_26 ) and qSOFA ( @cite_6 ).
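As an illustration of how such rule-based screening scores are computed, a minimal sketch of the classic SIRS criteria follows. The thresholds are taken from the widely cited consensus definition; the patient values are illustrative, and this is a sketch, not clinical software.

```python
def sirs_criteria_met(temp_c, heart_rate, resp_rate, paco2_mmhg,
                      wbc_per_mm3, band_fraction):
    """Count the four classic SIRS criteria; >= 2 was the traditional
    screening threshold for suspected sepsis."""
    met = 0
    if temp_c > 38.0 or temp_c < 36.0:          # abnormal temperature
        met += 1
    if heart_rate > 90:                          # tachycardia
        met += 1
    if resp_rate > 20 or paco2_mmhg < 32:        # tachypnoea / hypocapnia
        met += 1
    if wbc_per_mm3 > 12000 or wbc_per_mm3 < 4000 or band_fraction > 0.10:
        met += 1                                 # abnormal white cell count
    return met

# Hypothetical patient: febrile and tachycardic, otherwise normal.
n = sirs_criteria_met(temp_c=38.6, heart_rate=104, resp_rate=18,
                      paco2_mmhg=40, wbc_per_mm3=9000, band_fraction=0.02)
```

Threshold-based scores like this are easy to deploy but, as the section notes, have limited discriminative power compared to learned models over streaming data.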
{ "cite_N": [ "@cite_13", "@cite_26", "@cite_6", "@cite_10", "@cite_25" ], "mid": [ "2150979970", "1898928487", "1993397663", "2768146862", "2014224402" ], "abstract": [ "Abstract Introduction Early warning scores (EWS) are recommended as part of the early recognition and response to patient deterioration. The Royal College of Physicians recommends the use of a National Early Warning Score (NEWS) for the routine clinical assessment of all adult patients. Methods We tested the ability of NEWS to discriminate patients at risk of cardiac arrest, unanticipated intensive care unit (ICU) admission or death within 24h of a NEWS value and compared its performance to that of 33 other EWSs currently in use, using the area under the receiver-operating characteristic (AUROC) curve and a large vital signs database ( n =198,755 observation sets) collected from 35,585 consecutive, completed acute medical admissions. Results The AUROCs (95 CI) for NEWS for cardiac arrest, unanticipated ICU admission, death, and any of the outcomes, all within 24h, were 0.722 (0.685–0.759), 0.857 (0.847–0.868), 0.894 (0.887–0.902), and 0.873 (0.866–0.879), respectively. Similarly, the ranges of AUROCs (95 CI) for the other 33 EWSs were 0.611 (0.568–0.654) to 0.710 (0.675–0.745) (cardiac arrest); 0.570 (0.553–0.568) to 0.827 (0.814–0.840) (unanticipated ICU admission); 0.813 (0.802–0.824) to 0.858 (0.849–0.867) (death); and 0.736 (0.727–0.745) to 0.834 (0.826–0.842) (any outcome). 
Conclusions NEWS has a greater ability to discriminate patients at risk of the combined outcome of cardiac arrest, unanticipated ICU admission or death within 24h of a NEWS value than 33 other EWSs.", "", "Objective:To determine the prevalence and impact on mortality of delays in initiation of effective antimicrobial therapy from initial onset of recurrent persistent hypotension of septic shock.Design:A retrospective cohort study performed between July 1989 and June 2004.Setting:Fourteen intensive car", "An American College of Chest Physicians Society of Critical Care Medicine Consensus Conference was held in Northbrook in August 1991 with the goal of agreeing on a set of definitions that could be applied to patients with sepsis and its sequelae. New definitions were offered for some terms, while others were discarded. Broad definitions of sepsis and the systemic inflammatory response syndrome were proposed, along with detailed physiologic parameters by which a patient may be categorized. Definitions for severe sepsis, septic shock, hypotension, and multiple organ dysfunction syndrome were also offered. The use of severity scoring methods when dealing with septic patients was recommended as an adjunctive tool to assess mortality. Appropriate methods and applications for the use and testing of new therapies were recommended. The use of these terms and techniques should assist clinicians and researchers who deal with sepsis and its sequelae.", "INTRODUCTIONThe Modified Early Warning Score (MEWS) is a simple, physiological score that may allow improvement in the quality and safety of management provided to surgical ward patients. The primary purpose is to prevent delay in intervention or transfer of critically ill patients. PATIENTS AND METHODSA total of 334 consecutive ward patients were prospectively studied. MEWS were recorded on all patients and the primary end-point was transfer to ITU or HDU. 
RESULTSFifty-seven (17 ) ward patients triggered the call-out algorithm by scoring four or more on MEWS. Emergency patients were more likely to trigger the system than elective patients. Sixteen (5 of the total) patients were admitted to the ITU or HDU. MEWS with a threshold of four or more was 75 sensitive and 83 specific for patients who required transfer to ITU or HDU. CONCLUSIONSThe MEWS in association with a call-out algorithm is a useful and appropriate risk-management tool that should be implemented for all surgical in-patients." ] }
1708.05688
2746260445
One of the most crucial issues in data mining is to model human behaviour in order to provide personalisation, adaptation and recommendation. This usually involves implicit or explicit knowledge, either by observing user interactions, or by asking users directly. But these sources of information are always subject to the volatility of human decisions, making utilised data uncertain to a particular extent. In this contribution, we elaborate on the impact of this human uncertainty when it comes to comparative assessments of different data mining approaches. In particular, we reveal two problems: (1) biasing effects on various metrics of model-based prediction and (2) the propagation of uncertainty and its thus induced error probabilities for algorithm rankings. For this purpose, we introduce a probabilistic view and prove the existence of those problems mathematically, as well as provide possible solution strategies. We exemplify our theory mainly in the context of recommender systems along with the metric RMSE as a prominent example of precision quality measures.
The central role of information systems has led to a large body of research and produced a variety of techniques and approaches @cite_29 . Here, we focus especially on recommender systems, which are comprehensively described in @cite_18 @cite_33 . For comparative assessment, different metrics are used to determine prediction accuracy, such as the root mean squared error (RMSE), the mean absolute error (MAE), and the mean average precision (MAP), along with many others @cite_30 @cite_23 @cite_32 . These accuracy metrics are often criticised @cite_6 , and various researchers suggest that human-computer interaction should be taken more into account @cite_5 @cite_8 . With our contribution, we extend existing criticism by an additional aspect that has been little discussed so far. Although we exemplify our methodology with the RMSE, the main results of this contribution can easily be adopted for alternative assessment metrics without substantial loss of generality, insofar as they require (uncertain) human input.
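For concreteness, the two point-accuracy metrics named above can be sketched in a few lines; the predicted and observed ratings are hypothetical.

```python
import math

def rmse(predicted, observed):
    """Root mean squared error between predictions and user responses."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(observed))

def mae(predicted, observed):
    """Mean absolute error between predictions and user responses."""
    return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(observed)

# Hypothetical ratings: model predictions vs. observed user responses.
pred = [3.5, 4.0, 2.0, 5.0]
obs = [3.0, 4.5, 2.5, 4.0]
```

Note that RMSE is never smaller than MAE on the same data, since squaring weights large errors more heavily; both reduce a predictor's quality to a single deterministic number, which is exactly what the contribution questions.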
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_33", "@cite_8", "@cite_29", "@cite_32", "@cite_6", "@cite_23", "@cite_5" ], "mid": [ "1690919088", "1597703625", "", "", "1985476803", "", "2049670925", "", "2114538223" ], "abstract": [ "The explosive growth of e-commerce and online environments has made the issue of information search and selection increasingly serious; users are overloaded by options to consider and they may not have the time or knowledge to personally evaluate these options. Recommender systems have proven to be a valuable way for online users to cope with the information overload and have become one of the most powerful and popular tools in electronic commerce. Correspondingly, various techniques for recommendation generation have been proposed. During the last decade, many of them have also been successfully deployed in commercial environments. Recommender Systems Handbook, an edited volume, is a multi-disciplinary effort that involves world-wide experts from diverse fields, such as artificial intelligence, human computer interaction, information technology, data mining, statistics, adaptive user interfaces, decision support systems, marketing, and consumer behavior. Theoreticians and practitioners from these fields continually seek techniques for more efficient, cost-effective and accurate recommender systems. This handbook aims to impose a degree of order on this diversity, by presenting a coherent and unified repository of recommender systems major concepts, theories, methodologies, trends, challenges and applications. Extensive artificial applications, a variety of real-world applications, and detailed case studies are included. Recommender Systems Handbook illustrates how this technology can support the user in decision-making, planning and purchasing processes. It works for well known corporations such as Amazon, Google, Microsoft and AT&T. 
This handbook is suitable for researchers and advanced-level students in computer science as a reference.", "In this age of information overload, people use a variety of strategies to make choices about what to buy, how to spend their leisure time, and even whom to date. Recommender systems automate some of these strategies with the goal of providing affordable, personal, and high-quality recommendations. This book offers an overview of approaches to developing state-of-the-art recommender systems. The authors present current algorithmic approaches for generating personalized buying proposals, such as collaborative and content-based filtering, as well as more interactive and knowledge-based approaches. They also discuss how to measure the effectiveness of recommender systems and illustrate the methods with practical case studies. The final chapters cover emerging topics such as recommender systems in the social web and consumer buying behavior theory. Suitable for computer science researchers and students interested in getting an overview of the field, this book will also be useful for professionals looking for the right technology to build real-world recommender systems.", "", "", "Offline evaluations are the most common evaluation method for research paper recommender systems. However, no thorough discussion on the appropriateness of offline evaluations has taken place, despite some voiced criticism. We conducted a study in which we evaluated various recommendation approaches with both offline and online evaluations. We found that results of offline and online evaluations often contradict each other. We discuss this finding in detail and conclude that offline evaluations may be inappropriate for evaluating research paper recommender systems, in many settings.", "", "Recommender systems have shown great potential to help users find interesting and relevant items from within a large information space. 
Most research up to this point has focused on improving the accuracy of recommender systems. We believe that not only has this narrow focus been misguided, but has even been detrimental to the field. The recommendations that are most accurate according to the standard metrics are sometimes not the recommendations that are most useful to users. In this paper, we propose informal arguments that the recommender community should move beyond the conventional accuracy metrics and their associated experimental methodologies. We propose new user-centric directions for evaluating recommender systems.", "", "Recommender systems do not always generate good recommendations for users. In order to improve recommender quality, we argue that recommenders need a deeper understanding of users and their information seeking tasks. Human-Recommender Interaction (HRI) provides a framework and a methodology for understanding users, their tasks, and recommender algorithms using a common language. Further, by using an analytic process model, HRI becomes not only descriptive, but also constructive. It can help with the design and structure of a recommender system, and it can act as a bridge between user information seeking tasks and recommender algorithms." ] }
1708.05688
2746260445
One of the most crucial issues in data mining is to model human behaviour in order to provide personalisation, adaptation and recommendation. This usually involves implicit or explicit knowledge, either by observing user interactions, or by asking users directly. But these sources of information are always subject to the volatility of human decisions, making utilised data uncertain to a particular extent. In this contribution, we elaborate on the impact of this human uncertainty when it comes to comparative assessments of different data mining approaches. In particular, we reveal two problems: (1) biasing effects on various metrics of model-based prediction and (2) the propagation of uncertainty and its thus induced error probabilities for algorithm rankings. For this purpose, we introduce a probabilistic view and prove the existence of those problems mathematically, as well as provide possible solution strategies. We exemplify our theory mainly in the context of recommender systems along with the metric RMSE as a prominent example of precision quality measures.
Probabilistic modelling of human cognition processes is quite common in the field of computational neuroscience. In particular, aspects of human decision-making can be stated as problems of probabilistic inference @cite_7 (often referred to as the "Bayesian Brain" paradigm). Besides external influential factors, belief precision is influenced by biological factors such as the current activity of dopamine cells @cite_21 . In other words, human decisions can be seen as uncertain quantities by the nature of the underlying cognition mechanisms. Recently, this idea has been adopted for various probabilistic approaches to neural coding @cite_16 . In parallel, many methods of predictive data mining employ probabilistic (e.g. Bayesian) models for approximating mechanisms of human decisions based on prior observations as training data. At the same time, common evaluation approaches still use non-random quality metrics and thus do not account for possible decision ranking errors in a natural way. As a consequence, we systematically treat both observed user responses and the resulting quality of the evaluated predictor as random quantities. This allows us to elaborate on the impact of human uncertainty and provide solutions for a more differentiated and objective assessment of predictive models.
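The view of predictor quality as a random quantity can be illustrated with a small Monte Carlo sketch: each reported rating is treated as the mean of a Gaussian with an assumed response-noise sigma, and resampling propagates that uncertainty into a distribution of RMSE values rather than a single point estimate. The noise model, sigma, and ratings here are illustrative assumptions, not the paper's exact formulation.

```python
import math
import random

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def rmse_distribution(pred, reported, sigma=0.5, draws=2000, seed=0):
    """Treat each reported rating as the mean of a Gaussian with
    response noise sigma and propagate that uncertainty into a
    Monte Carlo distribution over the RMSE."""
    rng = random.Random(seed)
    samples = []
    for _ in range(draws):
        noisy = [r + rng.gauss(0.0, sigma) for r in reported]
        samples.append(rmse(pred, noisy))
    return samples

pred = [3.5, 4.0, 2.0, 5.0]       # hypothetical model predictions
reported = [3.0, 4.5, 2.5, 4.0]   # hypothetical observed responses
dist = rmse_distribution(pred, reported)
point_estimate = rmse(pred, reported)
```

Comparing two algorithms then becomes a comparison of two overlapping RMSE distributions, which makes the induced error probability of a ranking explicit instead of hiding it in a single number.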
{ "cite_N": [ "@cite_21", "@cite_7", "@cite_16" ], "mid": [ "2093466600", "", "1490667783" ], "abstract": [ "This paper considers agency in the setting of embodied or active inference. In brief, we associate a sense of agency with prior beliefs about action and ask what sorts of beliefs underlie optimal behaviour. In particular, we consider prior beliefs that action minimises the Kullback-Leibler divergence between desired states and attainable states in the future. This allows one to formulate bounded rationality as approximate Bayesian inference that optimises a free energy bound on model evidence. We show that constructs like expected utility, exploration bonuses, softmax choice rules and optimism bias emerge as natural consequences of this formulation. Previous accounts of active inference have focused on predictive coding and Bayesian filtering schemes for minimising free energy. Here, we consider variational Bayes as an alternative scheme that provides formal constraints on the computational anatomy of inference and action – constraints that are remarkably consistent with neuroanatomy. Furthermore, this scheme contextualises optimal decision theory and economic (utilitarian) formulations as pure inference problems. For example, expected utility theory emerges as a special case of free energy minimisation, where the sensitivity or inverse temperature (of softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution – that minimises free energy. This sensitivity corresponds to the precision of beliefs about behaviour, such that attainable goals are afforded a higher precision or confidence. 
In turn, this means that optimal behaviour entails a representation of confidence about outcomes that are under an agent's control.", "", "A Bayesian approach can contribute to an understanding of the brain on multiple levels, by giving normative predictions about how an ideal sensory system should combine prior knowledge and observation, by providing mechanistic interpretation of the dynamic functioning of the brain circuit, and by suggesting optimal ways of deciphering experimental data. Bayesian Brain brings together contributions from both experimental and theoretical neuroscientists that examine the brain mechanisms of perception, decision making, and motor control according to the concepts of Bayesian estimation.After an overview of the mathematical concepts, including Bayes' theorem, that are basic to understanding the approaches discussed, contributors discuss how Bayesian concepts can be used for interpretation of such neurobiological data as neural spikes and functional brain imaging. Next, contributors examine the modeling of sensory processing, including the neural coding of information about the outside world. Finally, contributors explore dynamic processes for proper behaviors, including the mathematics of the speed and accuracy of perceptual decisions and neural models of belief propagation." ] }
1708.05543
2747753476
In the era of autonomous driving, urban mapping represents a core step to let vehicles interact with the urban context. Successful mapping algorithms have been proposed in the last decade building the map leveraging on data from a single sensor. The focus of the system presented in this paper is twofold: the joint estimation of a 3D map from lidar data and images, based on a 3D mesh, and its texturing. Indeed, even if most surveying vehicles for mapping are endowed by cameras and lidar, existing mapping algorithms usually rely on either images or lidar data; moreover both image-based and lidar-based systems often represent the map as a point cloud, while a continuous textured mesh representation would be useful for visualization and navigation purposes. In the proposed framework, we join the accuracy of the 3D lidar data, and the dense information and appearance carried by the images, in estimating a visibility consistent map upon the lidar measurements, and refining it photometrically through the acquired images. We evaluate the proposed framework against the KITTI dataset and we show the performance improvement with respect to two state of the art urban mapping algorithms, and two widely used surface reconstruction algorithms in Computer Graphics.
Mapping from laser sensors is a well-studied research area in Robotics; in early studies the map was estimated in two dimensions @cite_14 , while in recent years the prevalent approach has been to estimate it in 3D, thanks to advances in algorithms, processing and sensors. Mapping can be pursued together with robot self-localization, leading to Simultaneous Localization and Mapping (SLAM) systems; these algorithms do not focus on the mapping part: they reconstruct a sparse point-based map of the environment, while in our case we aim at reconstructing a dense representation of it.
{ "cite_N": [ "@cite_14" ], "mid": [ "2125420246" ], "abstract": [ "Recently Rao-Blackwellized particle filters have been introduced as effective means to solve the simultaneous localization and mapping (SLAM) problem. This approach uses a particle filter in which each particle carries an individual map of the environment. Accordingly, a key question is how to reduce the number of particles. In this paper we present adaptive techniques to reduce the number of particles in a Rao-Blackwellized particle filter for learning grid maps. We propose an approach to compute an accurate proposal distribution taking into account not only the movement of the robot but also the most recent observation. This drastically decrease the uncertainty about the robot's pose in the prediction step of the filter. Furthermore, we present an approach to selectively carry out re-sampling operations which seriously reduces the problem of particle depletion. Experimental results carried out with mobile robots in large-scale indoor as well as in outdoor environments illustrate the advantages of our methods over previous approaches." ] }
1708.05543
2747753476
In the era of autonomous driving, urban mapping represents a core step to let vehicles interact with the urban context. Successful mapping algorithms have been proposed in the last decade building the map leveraging on data from a single sensor. The focus of the system presented in this paper is twofold: the joint estimation of a 3D map from lidar data and images, based on a 3D mesh, and its texturing. Indeed, even if most surveying vehicles for mapping are endowed by cameras and lidar, existing mapping algorithms usually rely on either images or lidar data; moreover both image-based and lidar-based systems often represent the map as a point cloud, while a continuous textured mesh representation would be useful for visualization and navigation purposes. In the proposed framework, we join the accuracy of the 3D lidar data, and the dense information and appearance carried by the images, in estimating a visibility consistent map upon the lidar measurements, and refining it photometrically through the acquired images. We evaluate the proposed framework against the KITTI dataset and we show the performance improvement with respect to two state of the art urban mapping algorithms, and two widely used surface reconstruction algorithms in Computer Graphics.
Some approaches estimate a 2.5D map of the environment by populating a grid on the ground plane with the corresponding cell heights @cite_24 . These maps are useful for robot navigation, but neglect most of the environment details. A more coherent representation of the scene is volumetric, i.e., the space is partitioned into small parts classified as occupied or free, and, in some cases, unknown, and the boundary between occupied and free space represents the 3D map. In laser-based mapping the most common volumetric representation is voxel-based, due to its good trade-off between expressiveness and ease of implementation @cite_6 ; the drawback of this representation is its large memory consumption and, therefore, its non-scalability. Many efforts have been directed at improving the scalability and accuracy of voxel-based mapping. Ryde and Hu @cite_29 store only occupied voxels, while Dryanovski @cite_15 stores both occupied and free voxels in order to also represent the uncertainty of unknown space. The state-of-the-art system OctoMap @cite_32 , and its extension @cite_11 , are able to efficiently store large maps by including an octree indexing to add flexibility to the framework.
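The probabilistic occupancy update underlying such voxel maps can be sketched with a log-odds accumulator; the sensor-model probabilities below are illustrative, and a plain dictionary stands in for the octree indexing used by OctoMap.

```python
import math

# Illustrative inverse sensor model: a beam endpoint raises occupancy,
# a beam passing through a voxel lowers it. Stored per voxel as log-odds.
L_HIT = math.log(0.7 / 0.3)    # log-odds increment for a hit
L_MISS = math.log(0.4 / 0.6)   # log-odds decrement for a pass-through

def update(grid, voxel, hit):
    """Accumulate the sensor-model log-odds for one observation."""
    grid[voxel] = grid.get(voxel, 0.0) + (L_HIT if hit else L_MISS)

def occupancy(grid, voxel):
    """Convert stored log-odds back to an occupancy probability;
    unobserved voxels default to 0.5 (unknown)."""
    l = grid.get(voxel, 0.0)
    return 1.0 / (1.0 + math.exp(-l))

grid = {}
for _ in range(3):                    # a voxel observed as occupied 3 times
    update(grid, (4, 2, 1), hit=True)
update(grid, (4, 2, 0), hit=False)    # a ray passed through a neighbour
```

Thresholding the probability (e.g. at 0.5) recovers the occupied/free partition, and the unknown class falls out naturally from voxels that were never observed.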
{ "cite_N": [ "@cite_29", "@cite_32", "@cite_6", "@cite_24", "@cite_15", "@cite_11" ], "mid": [ "2011654214", "2133844819", "66449235", "2131991591", "2119404509", "1538887534" ], "abstract": [ "Most current navigation algorithms in mobile robotics produce 2D maps from data provided by 2D sensors. In large part this is due to the availability of suitable 3D sensors and difficulties of managing the large amount of data supplied by 3D sensors. This paper presents a novel, multi-resolution algorithm that aligns 3D range data stored in occupied voxel lists so as to facilitate the construction of 3D maps. Multi-resolution occupied voxel lists (MROL) are voxel based data structures that efficiently represent 3D scan and map information. The process described in this research can align a sequence of scans to produce maps and localise a range sensor within a prior global map. An office environment (200 square metres) is mapped in 3D at 0.02 m resolution, resulting in a 200,000 voxel occupied voxel list. Global localisation within this map, with no prior pose estimate, is completed in 5 seconds on a 2 GHz processor. The MROL based sequential scan matching is compared to a standard iterative closest point (ICP) implementation with an error in the initial pose estimate of plus or minus 1 m and 90 degrees. MROL correctly scan matches 94 of scans to within 0.1 m as opposed to ICP with 30 within 0.1 m.", "Three-dimensional models provide a volumetric representation of space which is important for a variety of robotic applications including flying robots and robots that are equipped with manipulators. In this paper, we present an open-source framework to generate volumetric 3D environment models. Our mapping approach is based on octrees and uses probabilistic occupancy estimation. It explicitly represents not only occupied space, but also free and unknown areas. Furthermore, we propose an octree map compression method that keeps the 3D models compact. 
Our framework is available as an open-source C++ library and has already been successfully applied in several robotics projects. We present a series of experimental results carried out with real robots and on publicly available real-world datasets. The results demonstrate that our approach is able to update the representation efficiently and models the data consistently while keeping the memory requirement at a minimum.", "", "The authors are prototyping a legged vehicle, the Ambler, for an exploratory mission on another planet, conceivably Mars, where it is to traverse uncharted areas and collect material samples. They describe how the rover can construct from range imagery a geometric terrain representation, i.e., elevation map that includes uncertainty, unknown areas, and local features. First, they present an algorithm for constructing an elevation map from a single range image. By virtue of working in spherical-polar space, the algorithm is independent of the desired map resolution and the orientation of the sensor, unlike algorithms that work in Cartesian space. Secondly, the authors present a two-stage matching technique (feature matching followed by iconic matching) that identifies the transformation T corresponding to the vehicle displacement between two viewing positions. Thirdly, to support legged locomotion over rough terrain, they describe methods for evaluating regions of the constructed elevation maps as footholds. >", "Advancing research into autonomous micro aerial vehicle navigation requires data structures capable of representing indoor and outdoor 3D environments. The vehicle must be able to update the map structure in real time using readings from range-finding sensors when mapping unknown areas; it must also be able to look up occupancy information from the map for the purposes of localization and path-planning. Mapping models that have been used for these tasks include voxel grids, multi-level surface maps, and octrees. 
In this paper, we suggest a new approach to 3D mapping using a multi-volume occupancy grid, or MVOG. MVOGs explicitly store information about both obstacles and free space. This allows us to correct previous potentially erroneous sensor readings by incrementally fusing in new positive or negative sensor information. In turn, this enables extracting more reliable probabilistic information about the occupancy of 3D space. MVOGs outperform existing probabilistic 3D mapping methods in terms of memory usage, due to the fact that observations are grouped together into continuous vertical volumes to save space. We describe the techniques required for mapping using MVOGs, and analyze their performance using indoor and outdoor experimental data.", "This paper presents an extension of the standard occupancy grid for 3D environment mapping. The presented approach adds a fusion process after the occupancy update which modifies the resolution of the grid cells in an incremental manner. Consequently, the proposed approach requires fewer grid cells for 3D representation in comparison to a standard occupancy grid. The resolution adaptation process is based on the occupancy probabilities of the grid cells and leads to the relaxation of the cubic grid cell assumption common to most 3D occupancy grids. The aim of this paper is to show the advantage of the proposed incremental fusion process which leads to the approximation of the 3D environment using rectangular cuboids. Evaluation on a large scale dataset and comparison to the state of the art shows that the proposed approach has faster access time for all occupied grid cells and requires a smaller number of cells for 3D environment representation." ] }
1708.05543
2747753476
In the era of autonomous driving, urban mapping represents a core step to let vehicles interact with the urban context. Successful mapping algorithms have been proposed in the last decade building the map leveraging on data from a single sensor. The focus of the system presented in this paper is twofold: the joint estimation of a 3D map from lidar data and images, based on a 3D mesh, and its texturing. Indeed, even if most surveying vehicles for mapping are endowed by cameras and lidar, existing mapping algorithms usually rely on either images or lidar data; moreover both image-based and lidar-based systems often represent the map as a point cloud, while a continuous textured mesh representation would be useful for visualization and navigation purposes. In the proposed framework, we join the accuracy of the 3D lidar data, and the dense information and appearance carried by the images, in estimating a visibility consistent map upon the lidar measurements, and refining it photometrically through the acquired images. We evaluate the proposed framework against the KITTI dataset and we show the performance improvement with respect to two state of the art urban mapping algorithms, and two widely used surface reconstruction algorithms in Computer Graphics.
Voxel-based approaches usually produce unappealing reconstructions, due to the voxelization of the space, and they need a very high resolution to capture fine details of the scene, trading off their efficiency. In the Computer Vision community, different volumetric representations have been explored; in particular, many algorithms adopt the 3D Delaunay triangulation @cite_18 @cite_22 @cite_2 @cite_4 . The Delaunay triangulation is self-adaptive to the density of the data, i.e., the points, without any indexing policy; moreover, its structure is made up of tetrahedra, from which it is easy to extract a triangular mesh, the representation widely used in the Computer Graphics community to accurately model objects. These algorithms are consistent with the visibility, i.e., they mark each tetrahedron as free space or occupied according to the camera-to-point rays, assuming that a tetrahedron is empty if at least one ray intersects it.
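The visibility rule described above (a tetrahedron is carved as free space when a camera-to-point ray crosses it) can be sketched in a toy form. This is an illustrative reimplementation, not the code of any of the cited systems: `carve_free_space`, the dense sampling of the ray, and the barycentric inside test are all assumptions of the sketch.

```python
import numpy as np

def point_in_tetra(p, verts):
    """Barycentric inside test for point p against a tetrahedron verts (4x3)."""
    T = (verts[1:] - verts[0]).T          # columns are the three edge vectors
    lam = np.linalg.solve(T, p - verts[0])
    lam0 = 1.0 - lam.sum()
    return bool(lam0 >= -1e-9 and np.all(lam >= -1e-9))

def carve_free_space(tetra_list, camera, point, samples=20):
    """Mark each tetrahedron as free space if the camera-to-point ray
    (approximated by dense samples on the open segment) passes through it --
    the visibility rule of the Delaunay-based methods, in toy form."""
    t = np.linspace(0.02, 0.98, samples)[:, None]
    ray = camera + t * (point - camera)   # sample points along the segment
    return [any(point_in_tetra(p, v) for p in ray) for v in tetra_list]
```

A real system would iterate this over every (camera, observed point) pair produced by Structure from Motion and accumulate free/occupied votes per tetrahedron before extracting the interface mesh.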
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_22", "@cite_2" ], "mid": [ "", "2165306775", "2089697667", "2211977492" ], "abstract": [ "", "Since the initial comparison of [48], the accuracy of dense multiview stereovision methods has been increasing steadily. A number of limitations, however, make most of these methods not suitable to outdoor scenes taken under uncontrolled imaging conditions. The present work consists of a complete dense multiview stereo pipeline which circumvents these limitations, being able to handle large-scale scenes without sacrificing accuracy. Highly detailed reconstructions are produced within very reasonable time thanks to two key stages in our pipeline: a minimum s-t cut optimization over an adaptive domain that robustly and efficiently filters a quasidense point cloud from outliers and reconstructs an initial surface by integrating visibility constraints, followed by a mesh-based variational refinement that captures small details, smartly handling photo-consistency, regularization, and adaptive resolution. The pipeline has been tested over a wide range of scenes: from classic compact objects taken in a laboratory setting, to outdoor architectural scenes, landscapes, and cultural heritage sites. The accuracy of its reconstructions has also been measured on the dense multiview benchmark proposed by [59], showing the results to compare more than favorably with the current state-of-the-art methods.", "In the recent years, a family of 2-manifold surface reconstruction methods from a sparse Structure-from-Motion points cloud based on 3D Delaunay triangulation was developed. This family consists of batch and incremental variations which include a step that remove visual artifacts. Although been necessary in the term of surface quality, this step is slow compared to the other parts of the algorithm and is not well suited to be used in an incremental manner. In this paper, we present two other methods for removing visual artifacts. 
They are evaluated and compared to the previous one in the incremental context where the need of new methods is the highest. Taken separately, they provide medium results, but used together they are as good as the old method in the terms of surface quality, and at the same time, processing time is almost three times smaller.", "Urban reconstruction from a video captured by a surveying vehicle constitutes a core module of automated mapping. When computational power represents a limited resource and, a detailed map is not the primary goal, the reconstruction can be performed incrementally, from a monocular video, carving a 3D Delaunay triangulation of sparse points; this allows online incremental mapping for tasks such as traversability analysis or obstacle avoidance. To exploit the sharp edges of urban landscape, we propose to use a Delaunay triangulation of Edge-Points, which are the 3D points corresponding to image edges. These points constrain the edges of the 3D Delaunay triangulation to real-world edges. Besides the use of the Edge-Points, a second contribution of this paper is the Inverse Cone Heuristic that preemptively avoids the creation of artifacts in the reconstructed manifold surface. We force the reconstruction of a manifold surface since it makes it possible to apply computer graphics or photometric refinement algorithms to the output mesh. We evaluated our approach on four real sequences of the public available KITTI dataset by comparing the incremental reconstruction against Velodyne measurements." ] }
1708.05543
2747753476
In the era of autonomous driving, urban mapping represents a core step to let vehicles interact with the urban context. Successful mapping algorithms have been proposed in the last decade building the map leveraging on data from a single sensor. The focus of the system presented in this paper is twofold: the joint estimation of a 3D map from lidar data and images, based on a 3D mesh, and its texturing. Indeed, even if most surveying vehicles for mapping are endowed by cameras and lidar, existing mapping algorithms usually rely on either images or lidar data; moreover both image-based and lidar-based systems often represent the map as a point cloud, while a continuous textured mesh representation would be useful for visualization and navigation purposes. In the proposed framework, we join the accuracy of the 3D lidar data, and the dense information and appearance carried by the images, in estimating a visibility consistent map upon the lidar measurements, and refining it photometrically through the acquired images. We evaluate the proposed framework against the KITTI dataset and we show the performance improvement with respect to two state of the art urban mapping algorithms, and two widely used surface reconstruction algorithms in Computer Graphics.
Among image-based dense photoconsistent algorithms, the mesh-based methods @cite_4 @cite_1 have proven able to estimate very accurate models and to scale to large environments. They bootstrap from an initial mesh estimated with a volumetric method such as @cite_22 or @cite_0 , and they refine it by minimizing a photometric energy function defined over the images. Their most relevant drawback arises when moving objects appear in the images: their pixels affect the refinement process, leading to inaccurate results.
{ "cite_N": [ "@cite_1", "@cite_0", "@cite_4", "@cite_22" ], "mid": [ "832925222", "2949497434", "2165306775", "2089697667" ], "abstract": [ "Accurate recovery of 3D geometrical surfaces from calibrated 2D multi-view images is a fundamental yet active research area in computer vision. Despite the steady progress in multi-view stereo (MVS) reconstruction, many existing methods are still limited in recovering fine-scale details and sharp features while suppressing noises, and may fail in reconstructing regions with less textures. To address these limitations, this paper presents a detail-preserving and content-aware variational (DCV) MVS method, which reconstructs the 3D surface by alternating between reprojection error minimization and mesh denoising. In reprojection error minimization, we propose a novel inter-image similarity measure, which is effective to preserve fine-scale details of the reconstructed surface and builds a connection between guided image filtering and image registration. In mesh denoising, we propose a content-aware @math -minimization algorithm by adaptively estimating the @math value and regularization parameters. Compared with conventional isotropic mesh smoothing approaches, the proposed method is much more promising in suppressing noise while preserving sharp features. Experimental results on benchmark data sets demonstrate that our DCV method is capable of recovering more surface details, and obtains cleaner and more accurate reconstructions than the state-of-the-art methods. In particular, our method achieves the best results among all published methods on the Middlebury dino ring and dino sparse data sets in terms of both completeness and accuracy.", "In this paper we propose a new approach to incrementally initialize a manifold surface for automatic 3D reconstruction from images. 
More precisely we focus on the automatic initialization of a 3D mesh as close as possible to the final solution; indeed many approaches require a good initial solution for further refinement via multi-view stereo techniques. Our novel algorithm automatically estimates an initial manifold mesh for surface evolving multi-view stereo algorithms, where the manifold property needs to be enforced. It bootstraps from 3D points extracted via Structure from Motion, then iterates between a state-of-the-art manifold reconstruction step and a novel mesh sweeping algorithm that looks for new 3D points in the neighborhood of the reconstructed manifold to be added in the manifold reconstruction. The experimental results show quantitatively that the mesh sweeping improves the resolution and the accuracy of the manifold reconstruction, allowing a better convergence of state-of-the-art surface evolution multi-view stereo algorithms.", "Since the initial comparison of [48], the accuracy of dense multiview stereovision methods has been increasing steadily. A number of limitations, however, make most of these methods not suitable to outdoor scenes taken under uncontrolled imaging conditions. The present work consists of a complete dense multiview stereo pipeline which circumvents these limitations, being able to handle large-scale scenes without sacrificing accuracy. Highly detailed reconstructions are produced within very reasonable time thanks to two key stages in our pipeline: a minimum s-t cut optimization over an adaptive domain that robustly and efficiently filters a quasidense point cloud from outliers and reconstructs an initial surface by integrating visibility constraints, followed by a mesh-based variational refinement that captures small details, smartly handling photo-consistency, regularization, and adaptive resolution. 
The pipeline has been tested over a wide range of scenes: from classic compact objects taken in a laboratory setting, to outdoor architectural scenes, landscapes, and cultural heritage sites. The accuracy of its reconstructions has also been measured on the dense multiview benchmark proposed by [59], showing the results to compare more than favorably with the current state-of-the-art methods.", "In the recent years, a family of 2-manifold surface reconstruction methods from a sparse Structure-from-Motion points cloud based on 3D Delaunay triangulation was developed. This family consists of batch and incremental variations which include a step that remove visual artifacts. Although been necessary in the term of surface quality, this step is slow compared to the other parts of the algorithm and is not well suited to be used in an incremental manner. In this paper, we present two other methods for removing visual artifacts. They are evaluated and compared to the previous one in the incremental context where the need of new methods is the highest. Taken separately, they provide medium results, but used together they are as good as the old method in the terms of surface quality, and at the same time, processing time is almost three times smaller." ] }
1708.05543
2747753476
In the era of autonomous driving, urban mapping represents a core step to let vehicles interact with the urban context. Successful mapping algorithms have been proposed in the last decade building the map leveraging on data from a single sensor. The focus of the system presented in this paper is twofold: the joint estimation of a 3D map from lidar data and images, based on a 3D mesh, and its texturing. Indeed, even if most surveying vehicles for mapping are endowed by cameras and lidar, existing mapping algorithms usually rely on either images or lidar data; moreover both image-based and lidar-based systems often represent the map as a point cloud, while a continuous textured mesh representation would be useful for visualization and navigation purposes. In the proposed framework, we join the accuracy of the 3D lidar data, and the dense information and appearance carried by the images, in estimating a visibility consistent map upon the lidar measurements, and refining it photometrically through the acquired images. We evaluate the proposed framework against the KITTI dataset and we show the performance improvement with respect to two state of the art urban mapping algorithms, and two widely used surface reconstruction algorithms in Computer Graphics.
In our paper, in order to filter out moving objects from the lidar data and the images, we need to explicitly detect them. A laser-based moving-object detection algorithm was proposed by Petrovskaya and Thrun @cite_35 to detect moving vehicles with a model-based vehicle fitting algorithm; the method performs well, but it requires a model for each object. Xiao @cite_33 and Vallet @cite_17 model the physical scanning mechanism of the lidar with Dempster-Shafer Theory (DST), evaluating the occupancy of a scan and comparing the consistency among scans. Postica @cite_9 further improved these algorithms by adding an image-based validation step that filters out many false positives. Pure image-based moving-object detection has been investigated for static-camera videos (see @cite_13 ), including the jittering case @cite_31 ; however, it is still a very open problem when dealing with moving cameras.
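The DST occupancy reasoning used in @cite_33 @cite_17 rests on combining evidence (mass functions) over the frame {empty, occupied, unknown}. A minimal sketch of Dempster's rule of combination for this three-element frame follows; the function name and the flat tuple representation are our own illustrative choices, not the papers' code, and the total-conflict corner case is left unhandled for brevity.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (empty, occupied, unknown) over the
    occupancy frame with Dempster's rule; 'unknown' is the full frame.
    Assumes the two sources are not in total conflict (k > 0)."""
    E1, O1, U1 = m1
    E2, O2, U2 = m2
    conflict = E1 * O2 + O1 * E2          # mass assigned to the empty set
    k = 1.0 - conflict                    # normalization factor
    E = (E1 * E2 + E1 * U2 + U1 * E2) / k
    O = (O1 * O2 + O1 * U2 + U1 * O2) / k
    U = (U1 * U2) / k
    return E, O, U
```

Fusing two scans that both lean towards "empty" sharpens the empty mass while shrinking the unknown mass, which is exactly the behavior the scan-consistency comparison exploits to flag moving points.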
{ "cite_N": [ "@cite_35", "@cite_33", "@cite_9", "@cite_31", "@cite_13", "@cite_17" ], "mid": [ "2044164936", "2113859792", "2527478282", "2087429945", "2071860582", "2011935419" ], "abstract": [ "Situational awareness is crucial for autonomous driving in urban environments. This paper describes the moving vehicle detection and tracking module that we developed for our autonomous driving robot Junior. The robot won second place in the Urban Grand Challenge, an autonomous driving race organized by the U.S. Government in 2007. The module provides reliable detection and tracking of moving vehicles from a high-speed moving platform using laser range finders. Our approach models both dynamic and geometric properties of the tracked vehicles and estimates them using a single Bayes filter per vehicle. We present the notion of motion evidence, which allows us to overcome the low signal-to-noise ratio that arises during rapid detection of moving vehicles in noisy urban environments. Furthermore, we show how to build consistent and efficient 2D representations out of 3D range data and how to detect poorly visible black vehicles. Experimental validation includes the most challenging conditions presented at the Urban Grand Challenge as well as other urban settings.", "Thanks to the development of Mobile mapping systems (MMS), street object recognition, classification, modelling and related studies have become hot topics recently. There has been increasing interest in detecting changes between mobile laser scanning (MLS) point clouds in complex urban areas. A method based on the consistency between the occupancies of space computed from different datasets is proposed. First occupancy of scan rays (empty, occupied, unknown) are defined while considering the accuracy of measurement and registration. Then the occupancy of scan rays are fused using the Weighted Dempster‐Shafer theory (WDST). 
Finally, the consistency between different datasets is obtained by comparing the occupancy at points from one dataset with the fused occupancy of neighbouring rays from the other dataset. Change detection results are compared with a conventional point to triangle (PTT) distance method. Changes at point level are detected fully automatically. The proposed approach allows to detect changes at large scales in urban scenes with fine detail and more importantly, distinguish real changes from occlusions.", "Detecting moving objects in dynamic scenes from sequences of lidar scans is an important task in object tracking, mapping, localization, and navigation. Many works focus on changes detection in previously observed scenes, while a very limited amount of literature addresses moving objects detection. The state-of-the-art method exploits Dempster-Shafer Theory to evaluate the occupancy of a lidar scan and to discriminate points belonging to the static scene from moving ones. In this paper we improve both speed and accuracy of this method by discretizing the occupancy representation, and by removing false positives through visual cues. Many false positives lying on the ground plane are also removed thanks to a novel ground plane removal algorithm. Efficiency is improved through an octree indexing strategy. Experimental evaluation against the KITTI public dataset shows the effectiveness of our approach, both qualitatively and quantitatively with respect to the state- of-the-art.", "Background subtraction is the classical approach to differentiate moving objects in a scene from the static background when the camera is fixed. If the fixed camera assumption does not hold, a frame registration step is followed by the background subtraction. However, this registration step cannot perfectly compensate camera motion, thus errors like translations of pixels from their true registered position occur. 
In this paper, we overcome these errors with a simple, but effective background subtraction algorithm that combines Temporal and Spatio-Temporal approaches. The former models the temporal intensity distribution of each individual pixel. The latter classifies foreground and background pixels, taking into account the intensity distribution of each pixels' neighborhood. The experimental results show that our algorithm outperforms the state-of-the-art systems in the presence of jitter, in spite of its simplicity.", "Abstract Background subtraction (BS) is a crucial step in many computer vision systems, as it is first applied to detect moving objects within a video stream. Many algorithms have been designed to segment the foreground objects from the background of a sequence. In this article, we propose to use the BMC (Background Models Challenge) dataset, and to compare the 29 methods implemented in the BGSLibrary. From this large set of various BG methods, we have conducted a relevant experimental analysis to evaluate both their robustness and their practical performance in terms of processor memory requirements.", "Abstract. This paper presents a full pipeline to extract mobile objects in images based on a simultaneous laser acquisition with a Velodyne scanner. The point cloud is first analysed to extract mobile objects in 3D. This is done using Dempster-Shafer theory and it results in weights telling for each points if it corresponds to a mobile object, a fixed object or if no decision can be made based on the data (unknown). These weights are projected in an image acquired simultaneously and used to segment the image between the mobile and the static part of the scene." ] }
1708.05582
2749333275
This paper presents models for detecting agreement disagreement in online discussions. In this work we show that by using a Siamese inspired architecture to encode the discussions, we no longer need to rely on hand-crafted features to exploit the meta thread structure. We evaluate our model on existing online discussion corpora - ABCD, IAC and AWTP. Experimental results on ABCD dataset show that by fusing lexical and word embedding features, our model achieves the state of the art performance of 0.804 average F1 score. We also show that the model trained on ABCD dataset performs competitively on relatively smaller annotated datasets (IAC and AWTP).
Previous work in this field focused largely on spoken dialogues. @cite_13 @cite_27 @cite_16 used spurt-level agreement annotations from the ICSI corpus @cite_8 . @cite_30 presents the detection of agreements in multi-party conversations using the AMI meeting corpus @cite_28 . @cite_31 presents a conditional random field based approach for detecting agreement disagreement between speakers in English broadcast conversations.
{ "cite_N": [ "@cite_30", "@cite_8", "@cite_28", "@cite_27", "@cite_31", "@cite_16", "@cite_13" ], "mid": [ "2060893670", "1591607137", "1569447338", "", "2161354826", "", "2056818712" ], "abstract": [ "This paper presents a system for the automatic detection of agreements in multi-party conversations. We investigate various types of features that are useful for identifying agreements, including lexical, prosodic, and structural features. This system is implemented using supervised machine learning techniques and yields competitive results: Accuracy of 98.1 and a kappa value of 0.4. We also begin to explore the novel task of detecting the addressee of agreements (which speaker is being agreed with). Our system for this task achieves an accuracy of 80.3 , a 56 improvement over the baseline.", "We have collected a corpus of data from natural meetings that occurred at the International Computer Science Institute (ICSI) in Berkeley, California over the last three years. The corpus contains audio recorded simultaneously from head-worn and table-top microphones, word-level transcripts of meetings, and various metadata on participants, meetings, and hardware. Such a corpus supports work in automatic speech recognition, noise robustness, dialog modeling, prosody, rich transcription, information retrieval, and more. We present details on the contents of the corpus, as well as rationales for the decisions that led to its configuration. The corpus were delivered to the Linguistic Data Consortium (LDC).", "To support multi-disciplinary research in the AMI (Augmented Multi-party Interaction) project, a 100 hour corpus of meetings is being collected. This corpus is being recorded in several instrumented rooms equipped with a variety of microphones, video cameras, electronic pens, presentation slide capture and white-board capture devices. 
As well as real meetings, the corpus contains a significant proportion of scenario-driven meetings, which have been designed to elicit a rich range of realistic behaviors. To facilitate research, the raw data are being annotated at a number of levels including speech transcriptions, dialogue acts and summaries. The corpus is being distributed using a web server designed to allow convenient browsing and download of multimedia content and associated annotations. This article first overviews AMI research themes, then discusses corpus design, as well as data collection, annotation and distribution.", "", "We present Conditional Random Fields based approaches for detecting agreement disagreement between speakers in English broadcast conversation shows. We develop annotation approaches for a variety of linguistic phenomena. Various lexical, structural, durational, and prosodic features are explored. We compare the performance when using features extracted from automatically generated annotations against that when using human annotations. We investigate the efficacy of adding prosodic features on top of lexical, structural, and durational features. Since the training data is highly imbalanced, we explore two sampling approaches, random downsampling and ensemble downsampling. Overall, our approach achieves 79.2 (precision), 50.5 (recall), 61.7 (F1) for agreement detection and 69.2 (precision), 46.9 (recall), and 55.9 (F1) for disagreement detection, on the English broadcast conversation data.", "", "To support summarization of automatically transcribed meetings, we introduce a classifier to recognize agreement or disagreement utterances, utilizing both word-based and prosodic cues. We show that hand-labeling efforts can be minimized by using unsupervised training on a large unlabeled data set combined with supervised training on a small amount of data. 
For ASR transcripts with over 45 WER, the system recovers nearly 80 of agree disagree utterances with a confusion rate of only 3 ." ] }
1708.05582
2749333275
This paper presents models for detecting agreement disagreement in online discussions. In this work we show that by using a Siamese inspired architecture to encode the discussions, we no longer need to rely on hand-crafted features to exploit the meta thread structure. We evaluate our model on existing online discussion corpora - ABCD, IAC and AWTP. Experimental results on ABCD dataset show that by fusing lexical and word embedding features, our model achieves the state of the art performance of 0.804 average F1 score. We also show that the model trained on ABCD dataset performs competitively on relatively smaller annotated datasets (IAC and AWTP).
Recently, researchers have turned their attention to (dis)agreement detection in online discussions. Prior work was geared towards 2-way classification of agreement disagreement. @cite_49 used various sentiment, emotional and durational features to detect local and global (dis)agreement in discussion forums. @cite_38 performed (dis)agreement detection on annotated posts from the Internet Argument Corpus (IAC) @cite_37 . They investigated various manually labelled features, which are however difficult to reproduce as they are not annotated in other datasets. To benchmark the results, we have also incorporated the IAC corpus in our experiments. Quite recently, @cite_6 proposed a 3-way classification by exploiting meta-thread structures and accommodation between participants. They also proposed a naturally occurring dataset, ABCD (Agreement by Create Debaters), about 25 times larger than any prior corpus; we train our classifier on this larger dataset. @cite_9 proposed (dis)agreement detection with an isotonic Conditional Random Fields (isotonic CRF) based sequential model. @cite_46 proposed features motivated by theoretical predictions to perform (dis)agreement detection; however, they used hand-crafted patterns as features, and these patterns miss some real-world scenarios, reducing the performance of the classifier.
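As a rough illustration of the pair-encoding idea behind a Siamese architecture, each side of a (quote, response) pair is encoded by the same branch and the two encodings are fused into one feature vector for the classifier. The sketch below uses averaged word embeddings as a crude stand-in for the learned encoder; `pair_features` and its fusion scheme (concatenation plus cosine similarity) are illustrative assumptions, not the exact architecture of the model described above.

```python
import numpy as np

def avg_embedding(tokens, emb, dim=50):
    """Average the word vectors of the known tokens (zero vector if none).
    emb is a hypothetical word -> vector lookup, e.g. pretrained embeddings."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def pair_features(quote, response, emb, dim=50):
    """Encode both sides with the same (shared-weight) encoder and fuse:
    [q ; r ; |q - r| ; cos(q, r)] -- a common Siamese fusion pattern."""
    q = avg_embedding(quote, emb, dim)
    r = avg_embedding(response, emb, dim)
    denom = (np.linalg.norm(q) * np.linalg.norm(r)) or 1.0
    cos = float(q @ r / denom)
    return np.concatenate([q, r, np.abs(q - r), [cos]])
```

In a trained model the averaging would be replaced by a learned network applied identically to both inputs, and the fused vector would feed a softmax over the agree/disagree/none classes.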
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_9", "@cite_6", "@cite_49", "@cite_46" ], "mid": [ "1518903672", "1733704070", "2250454469", "2251911493", "", "" ], "abstract": [ "The recent proliferation of political and social forums has given rise to a wealth of freely accessible naturalistic arguments. People can \"talk\" to anyone they want, at any time, in any location, about any topic. Here we use a Mechanical Turk annotated corpus of forum discussions as a gold standard for the recognition of disagreement in online ideological forums. We analyze the utility of meta-post features, contextual features, dependency features and word-based features for signaling the disagreement relation. We show that using contextual and dialogic features we can achieve accuracies up to 68 as compared to a unigram baseline of 63 .", "Deliberative, argumentative discourse is an important component of opinion formation, belief revision, and knowledge discovery; it is a cornerstone of modern civil society. Argumentation is productively studied in branches ranging from theoretical artificial intelligence to political rhetoric, but empirical analysis has suffered from a lack of freely available, unscripted argumentative dialogs. This paper presents the Internet Argument Corpus (IAC), a set of 390, 704 posts in 11, 800 discussions extracted from the online debate site 4forums.com. A 2866 thread 130, 206 post extract of the corpus has been manually sided for topic of discussion, and subsets of this topic-labeled extract have been annotated for several dialogic and argumentative markers: degrees of agreement with a previous post, cordiality, audiencedirection, combativeness, assertiveness, emotionality of argumentation, and sarcasm. 
As an application of this resource, the paper closes with a discussion of the relationship between discourse marker pragmatics, agreement, emotionality, and sarcasm in the IAC corpus.", "We study the problem of agreement and disagreement detection in online discussions. An isotonic Conditional Random Fields (isotonic CRF) based sequential model is proposed to make predictions on sentence- or segment-level. We automatically construct a socially-tuned lexicon that is bootstrapped from existing general-purpose sentiment lexicons to further improve the performance. We evaluate our agreement and disagreement tagging model on two disparate online discussion corpora -- Wikipedia Talk pages and online debates. Our model is shown to outperform the state-of-the-art approaches in both datasets. For example, the isotonic CRF model achieves F1 scores of 0.74 and 0.67 for agreement and disagreement detection, when a linear chain CRF obtains 0.58 and 0.56 for the discussions on Wikipedia Talk pages.", "Determining when conversational participants agree or disagree is instrumental for broader conversational analysis; it is necessary, for example, in deciding when a group has reached consensus. In this paper, we describe three main contributions. We show how different aspects of conversational structure can be used to detect agreement and disagreement in discussion forums. In particular, we exploit information about meta-thread structure and accommodation between participants. Second, we demonstrate the impact of the features using 3-way classification, including sentences expressing disagreement, agreement or neither. Finally, we show how to use a naturally occurring data set with labels derived from the sides that participants choose in debates on createdebate.com. The resulting new agreement corpus, Agreement by Create Debaters (ABCD) is 25 times larger than any prior corpus. 
We demonstrate that using this data enables us to outperform the same system trained on prior existing in-domain smaller annotated datasets.", "", "" ] }
1708.05582
2749333275
This paper presents models for detecting agreement/disagreement in online discussions. In this work we show that by using a Siamese-inspired architecture to encode the discussions, we no longer need to rely on hand-crafted features to exploit the meta-thread structure. We evaluate our model on existing online discussion corpora - ABCD, IAC and AWTP. Experimental results on the ABCD dataset show that by fusing lexical and word embedding features, our model achieves state-of-the-art performance of 0.804 average F1 score. We also show that the model trained on the ABCD dataset performs competitively on relatively smaller annotated datasets (IAC and AWTP).
(Dis)agreement detection is related to other similar NLP tasks like stance detection and argument mining, but is not exactly the same. Stance detection is the task of identifying whether the author of a text is in favor of, against, or neutral towards a target, while argument mining focuses on tasks like the automatic extraction of arguments from free text, argument proposition classification, and argumentative parsing @cite_24 @cite_3 . Recently, there have been studies on how people back up their stances when arguing, in which comments are classified as either attacking or supporting a set of pre-defined arguments @cite_39 . These tasks (stance detection, argument mining) are not independent but share common features, and therefore benefit from common building blocks like sentiment detection, textual entailment and sentence similarity @cite_39 @cite_32 .
{ "cite_N": [ "@cite_24", "@cite_32", "@cite_3", "@cite_39" ], "mid": [ "1977155386", "2347127863", "", "2183367326" ], "abstract": [ "This paper provides the results of experiments on the detection of arguments in texts among which are legal texts. The detection is seen as a classification problem. A classifier is trained on a set of annotated arguments. Different feature sets are evaluated involving lexical, syntactic, semantic and discourse properties of the texts. The experiments are a first step in the context of automatically classifying arguments in legal texts according to their rhetorical type and their visualization for convenient access and search.", "We can often detect from a person’s utterances whether he or she is in favor of or against a given target entity—one’s stance toward the target. However, a person may express the same stance toward a target by using negative or positive language. Here for the first time we present a dataset of tweet–target pairs annotated for both stance and sentiment. The targets may or may not be referred to in the tweets, and they may or may not be the target of opinion in the tweets. Partitions of this dataset were used as training and test sets in a SemEval-2016 shared task competition. We propose a simple stance detection system that outperforms submissions from all 19 teams that participated in the shared task. Additionally, access to both stance and sentiment annotations allows us to explore several research questions. We show that although knowing the sentiment expressed by a tweet is beneficial for stance classification, it alone is not sufficient. Finally, we use additional unlabeled data through distant supervision techniques and word embeddings to further improve stance classification.", "", "In online discussions, users often back up their stance with arguments. Their arguments are often vague, implicit, and poorly worded, yet they provide valuable insights into reasons underpinning users’ opinions. 
In this paper, we make a first step towards argument-based opinion mining from online discussions and introduce a new task of argument recognition. We match usercreated comments to a set of predefined topic-based arguments, which can be either attacked or supported in the comment. We present a manually-annotated corpus for argument recognition in online discussions. We describe a supervised model based on comment-argument similarity and entailment features. Depending on problem formulation, model performance ranges from 70.5 to 81.8 F1-score, and decreases only marginally when applied to an unseen topic." ] }
1708.05587
2749420821
We study models of weighted exponential random graphs in the large network limit. These models have recently been proposed to model weighted network data arising from a host of applications including socio-econometric data such as migration flows and neuroscience desmarais2012statistical . Analogous to fundamental results derived for standard (unweighted) exponential random graph models in the work of Chatterjee and Diaconis, we derive limiting results for the structure of these models as @math , complementing the results in the work of yin2016phase,demuse2017phase in the context of finitely supported base measures. We also derive sufficient conditions for continuity of functionals in the specification of the model including conditions on nodal covariates. Finally we include a number of open problems to spur further understanding of this model especially in the context of applications.
Weighted exponential random graph models were theoretically analyzed in @cite_14 when the base measure is supported on a bounded interval, and in @cite_18 the authors analyzed the phase transition phenomenon for a class of base measures supported on @math . In @cite_14 the "no-phase transition" result for the standard normal base measure was proved for the directed edge-two-star model. Motivated by applications @cite_6 , we extend this work to the case where the base measure is supported on the whole real line. We showed that for a general base measure the model does not suffer degeneracy in the "high-temperature" regime. Also, via an explicit calculation we showed that for the standard normal distribution the undirected edge-two-star model does not admit a phase transition. Finally, under certain assumptions we established continuity of homomorphism densities of node-weighted graphs in the cut-metric. We have only begun an analysis of this model and, for the sake of concreteness, after the general setting of the main result, explore the ramifications for a few base measures. Other examples of base measures of relevance from applications, including count data, can be found in @cite_6 . It would be interesting to explore these specific models and rigorously understand degeneracy (or lack thereof) for various specifications motivated by domain applications.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_6" ], "mid": [ "2622403352", "", "2024818197" ], "abstract": [ "Conventionally used exponential random graphs cannot directly model weighted networks as the underlying probability space consists of simple graphs only. Since many substantively important networks are weighted, this limitation is especially problematic. We extend the existing exponential framework by proposing a generic common distribution for the edge weights. Minimal assumptions are placed on the distribution, that is, it is non-degenerate and supported on the unit interval. By doing so, we recognize the essential properties associated with near-degeneracy and universality in edge-weighted exponential random graphs.", "", "Exponential-family random graph models (ERGMs) provide a principled and flexible way to model and simulate features common in social networks, such as propensities for homophily, mutuality, and friend-of-a-friend triad closure, through choice of model terms (sufficient statistics). However, those ERGMs modeling the more complex features have, to date, been limited to binary data: presence or absence of ties. Thus, analysis of valued networks, such as those where counts, measurements, or ranks are observed, has necessitated dichotomizing them, losing information and introducing biases. In this work, we generalize ERGMs to valued networks. Focusing on modeling counts, we formulate an ERGM for networks whose ties are counts and discuss issues that arise when moving beyond the binary case. We introduce model terms that generalize and model common social network features for such data and apply these methods to a network dataset whose values are counts of interactions." ] }
1708.05775
2750321818
FJRW theory is a formulation of physical Landau-Ginzburg models with a rich algebraic structure, rooted in enumerative geometry. As a consequence of a major physical conjecture, called the Landau-Ginzburg Calabi-Yau correspondence, several birational morphisms of Calabi-Yau orbifolds should correspond to isomorphisms in FJRW theory. In this paper it is shown that not only does this claim prove to be the case, but is a special case of a wider FJRW isomorphism theorem, which in turn allows for a proof of mirror symmetry for a new class of cases in the Landau-Ginzburg setting. We also obtain several interesting geometric applications regarding the Chen-Ruan cohomology of certain Calabi-Yau orbifolds.
Their result relies on the assumption that @math . In order to understand this restriction better, consider that there are two possible weight systems for @math yielding an elliptic curve and 44 possible weight systems for @math yielding a K3 surface with involution. Recall that in this construction we require our polynomials to be of the form . Only 48 of the 88 possible combinations of weight systems satisfy the gcd condition imposed in @cite_41 . In this article we generalize this result in two ways: we remove the restriction on gcd's, and we extend the construction to all dimensions.
{ "cite_N": [ "@cite_41" ], "mid": [ "1636544587" ], "abstract": [ "We prove that the Borcea-Voisin mirror pairs of Calabi-Yau three- folds admit projective birational models that satisfy the Berglund-Hubsch- Chiodo-Ruan transposition rule. This shows that the two mirror constructions provide the same mirror pairs, as soon as both can be defined." ] }
1708.05775
2750321818
FJRW theory is a formulation of physical Landau-Ginzburg models with a rich algebraic structure, rooted in enumerative geometry. As a consequence of a major physical conjecture, called the Landau-Ginzburg Calabi-Yau correspondence, several birational morphisms of Calabi-Yau orbifolds should correspond to isomorphisms in FJRW theory. In this paper it is shown that not only does this claim prove to be the case, but is a special case of a wider FJRW isomorphism theorem, which in turn allows for a proof of mirror symmetry for a new class of cases in the Landau-Ginzburg setting. We also obtain several interesting geometric applications regarding the Chen-Ruan cohomology of certain Calabi-Yau orbifolds.
In @cite_0 , the last author considered exactly the form of mirror symmetry we propose here, with the restriction that the defining polynomials must be of Fermat type. In fact, he was able to show that for the mirror pairs we consider here, there is a mirror map relating the FJRW invariants of the A--model to the Picard--Fuchs equations of the B--model. In @cite_33 he also gave an LG/CY correspondence relating the FJRW invariants of the pair @math to the corresponding Gromov--Witten invariants of the corresponding Borcea--Voisin orbifold. Although these results are broader in scope, the restriction to Fermat type polynomials is significant, reducing the number of weight systems from which one can select a K3 surface to 10 (from the 48 mentioned above). Furthermore, no general method of proof for a state space isomorphism is provided there. However, we expect the results regarding the FJRW invariants, Picard--Fuchs equations and GW invariants to hold in general, and the state space isomorphism we establish here is the first step toward such results. This will be the topic of future work.
{ "cite_N": [ "@cite_0", "@cite_33" ], "mid": [ "2265209761", "2265209761" ], "abstract": [ "In the early 1990s, Borcea-Voisin orbifolds were some of the ear- liest examples of Calabi-Yau threefolds shown to exhibit mirror symmetry. However, their quantum theory has been poorly investigated. We study this in the context of the gauged linear sigma model, which in their case encom- passes Gromov-Witten theory and its three companions (FJRW theory and two mixed theories). For certain Borcea-Voisin orbifolds of Fermat type, we calculate all four genus zero theories explicitly. Furthermore, we relate the I-functions of these theories by analytic continuation and symplectic transfor- mation. In particular, the relation between the Gromov-Witten and FJRW theories can be viewed as an example of the Landau-Ginzburg Calabi-Yau correspondence for complete intersections of toric varieties.", "In the early 1990s, Borcea-Voisin orbifolds were some of the ear- liest examples of Calabi-Yau threefolds shown to exhibit mirror symmetry. However, their quantum theory has been poorly investigated. We study this in the context of the gauged linear sigma model, which in their case encom- passes Gromov-Witten theory and its three companions (FJRW theory and two mixed theories). For certain Borcea-Voisin orbifolds of Fermat type, we calculate all four genus zero theories explicitly. Furthermore, we relate the I-functions of these theories by analytic continuation and symplectic transfor- mation. In particular, the relation between the Gromov-Witten and FJRW theories can be viewed as an example of the Landau-Ginzburg Calabi-Yau correspondence for complete intersections of toric varieties." ] }
1708.05932
2750338112
In phase retrieval we want to recover an unknown signal @math from @math quadratic measurements of the form @math where @math are known sensing vectors and @math is measurement noise. We ask the following weak recovery question: what is the minimum number of measurements @math needed to produce an estimator @math that is positively correlated with the signal @math ? We consider the case of Gaussian vectors @math . We prove that - in the high-dimensional limit - a sharp phase transition takes place, and we locate the threshold in the regime of vanishingly small noise. For @math no estimator can do significantly better than random and achieve a strictly positive correlation. For @math a simple spectral estimator achieves a positive correlation. Surprisingly, numerical simulations with the same spectral estimator demonstrate promising performance with realistic sensing matrices. Spectral methods are used to initialize non-convex optimization algorithms in phase retrieval, and our approach can boost the performance in this setting as well. Our impossibility result is based on classical information-theory arguments. The spectral algorithm computes the leading eigenvector of a weighted empirical covariance matrix. We obtain a sharp characterization of the spectral properties of this random matrix using tools from free probability and generalizing a recent result by Lu and Li. Both the upper and lower bound generalize beyond phase retrieval to measurements @math produced according to a generalized linear model. As a byproduct of our analysis, we compare the threshold of the proposed spectral method with that of a message passing algorithm.
The performance of the spectral methods for phase retrieval was first considered in @cite_8 . In the present notation, @cite_8 uses @math and proves that there exists a constant @math such that weak recovery can be achieved for @math . The same paper also gives an iterative procedure to improve over the spectral method, but the bottleneck is in the spectral step. The sample complexity of weak recovery using spectral methods was improved to @math in @cite_53 and then to @math in @cite_64 , for some constants @math and @math . Both of these papers also prove guarantees for exact recovery by suitable descent algorithms. The guarantees on the spectral initialization are proved by matrix concentration inequalities, a technique that typically does not return exact threshold values.
{ "cite_N": [ "@cite_53", "@cite_64", "@cite_8" ], "mid": [ "", "221278985", "2156719288" ], "abstract": [ "", "We consider the fundamental problem of solving quadratic systems of equations in @math variables, where @math , @math and @math is unknown. We propose a novel method, which starting with an initial guess computed by means of a spectral method, proceeds by minimizing a nonconvex functional as in the Wirtinger flow approach. There are several key distinguishing features, most notably, a distinct objective functional and novel update rules, which operate in an adaptive fashion and drop terms bearing too much influence on the search direction. These careful selection rules provide a tighter initial guess, better descent directions, and thus enhanced practical performance. On the theoretical side, we prove that for certain unstructured models of quadratic systems, our algorithms return the correct solution in linear time, i.e. in time proportional to reading the data @math and @math as soon as the ratio @math between the number of equations and unknowns exceeds a fixed numerical constant. We extend the theory to deal with noisy systems in which we only have @math and prove that our algorithms achieve a statistical accuracy, which is nearly un-improvable. We complement our theoretical study with numerical examples showing that solving random quadratic systems is both computationally and statistically not much harder than solving linear systems of the same size---hence the title of this paper. For instance, we demonstrate empirically that the computational cost of our algorithm is about four times that of solving a least-squares problem of the same size.", "Abstract A complex frame is a collection of vectors that span C M and define measurements, called intensity measurements, on vectors in C M . 
In purely mathematical terms, the problem of phase retrieval is to recover a complex vector from its intensity measurements, namely the modulus of its inner product with these frame vectors. We show that any vector is uniquely determined (up to a global phase factor) from 4 M − 4 generic measurements. To prove this, we identify the set of frames defining non-injective measurements with the projection of a real variety and bound its dimension." ] }
1708.05932
2750338112
In phase retrieval we want to recover an unknown signal @math from @math quadratic measurements of the form @math where @math are known sensing vectors and @math is measurement noise. We ask the following weak recovery question: what is the minimum number of measurements @math needed to produce an estimator @math that is positively correlated with the signal @math ? We consider the case of Gaussian vectors @math . We prove that - in the high-dimensional limit - a sharp phase transition takes place, and we locate the threshold in the regime of vanishingly small noise. For @math no estimator can do significantly better than random and achieve a strictly positive correlation. For @math a simple spectral estimator achieves a positive correlation. Surprisingly, numerical simulations with the same spectral estimator demonstrate promising performance with realistic sensing matrices. Spectral methods are used to initialize non-convex optimization algorithms in phase retrieval, and our approach can boost the performance in this setting as well. Our impossibility result is based on classical information-theory arguments. The spectral algorithm computes the leading eigenvector of a weighted empirical covariance matrix. We obtain a sharp characterization of the spectral properties of this random matrix using tools from free probability and generalizing a recent result by Lu and Li. Both the upper and lower bound generalize beyond phase retrieval to measurements @math produced according to a generalized linear model. As a byproduct of our analysis, we compare the threshold of the proposed spectral method with that of a message passing algorithm.
In @cite_46 , the authors introduce the PhaseMax relaxation and prove an exact recovery result for phase retrieval, which depends on the correlation between the true signal and the initial estimate given to the algorithm. The same idea was independently proposed in @cite_23 . Furthermore, the analysis in @cite_23 allows using the same set of measurements for both initialization and convex programming, whereas the analysis in @cite_46 requires fresh extra measurements for the convex programming step. By using our spectral method to obtain the initial estimate, it should be possible to improve the existing upper bounds on the number of samples needed for exact recovery.
{ "cite_N": [ "@cite_46", "@cite_23" ], "mid": [ "2543875952", "2533223407" ], "abstract": [ "We consider the recovery of a (real- or complex-valued) signal from magnitude-only measurements, known as phase retrieval. We formulate phase retrieval as a convex optimization problem, which we call PhaseMax. Unlike other convex methods that use semidefinite relaxation and lift the phase retrieval problem to a higher dimension, PhaseMax is a \"non-lifting\" relaxation that operates in the original signal dimension. We show that the dual problem to PhaseMax is Basis Pursuit, which implies that phase retrieval can be performed using algorithms initially designed for sparse signal recovery. We develop sharp lower bounds on the success probability of PhaseMax for a broad range of random measurement ensembles, and we analyze the impact of measurement noise on the solution accuracy. We use numerical results to demonstrate the accuracy of our recovery guarantees, and we showcase the efficacy and limits of PhaseMax in practice.", "We propose a flexible convex relaxation for the phase retrieval problem that operates in the natural domain of the signal. Therefore, we avoid the prohibitive computational cost associated with \"lifting\" and semidefinite programming (SDP) in methods such as PhaseLift and compete with recently developed non-convex techniques for phase retrieval. We relax the quadratic equations for phaseless measurements to inequality constraints each of which representing a symmetric \"slab\". Through a simple convex program, our proposed estimator finds an extreme point of the intersection of these slabs that is best aligned with a given anchor vector. We characterize geometric conditions that certify success of the proposed estimator. Furthermore, using classic results in statistical learning theory, we show that for random measurements the geometric certificates hold with high probability at an optimal sample complexity. 
Phase transition of our estimator is evaluated through simulations. Our numerical experiments also suggest that the proposed method can solve phase retrieval problems with coded diffraction measurements as well." ] }
1708.05932
2750338112
In phase retrieval we want to recover an unknown signal @math from @math quadratic measurements of the form @math where @math are known sensing vectors and @math is measurement noise. We ask the following weak recovery question: what is the minimum number of measurements @math needed to produce an estimator @math that is positively correlated with the signal @math ? We consider the case of Gaussian vectors @math . We prove that - in the high-dimensional limit - a sharp phase transition takes place, and we locate the threshold in the regime of vanishingly small noise. For @math no estimator can do significantly better than random and achieve a strictly positive correlation. For @math a simple spectral estimator achieves a positive correlation. Surprisingly, numerical simulations with the same spectral estimator demonstrate promising performance with realistic sensing matrices. Spectral methods are used to initialize non-convex optimization algorithms in phase retrieval, and our approach can boost the performance in this setting as well. Our impossibility result is based on classical information-theory arguments. The spectral algorithm computes the leading eigenvector of a weighted empirical covariance matrix. We obtain a sharp characterization of the spectral properties of this random matrix using tools from free probability and generalizing a recent result by Lu and Li. Both the upper and lower bound generalize beyond phase retrieval to measurements @math produced according to a generalized linear model. As a byproduct of our analysis, we compare the threshold of the proposed spectral method with that of a message passing algorithm.
As previously mentioned, our analysis of spectral methods builds on the recent work of Lu and Li @cite_38 , who compute the exact spectral threshold for a matrix of the form ) with @math . Here we generalize this result to signed pre-processing functions @math , and construct a function of this type that achieves the information-theoretic threshold for phase retrieval. Our proof indeed implies that non-negative pre-processing functions lead to an unavoidable gap with respect to the ideal threshold.
{ "cite_N": [ "@cite_38" ], "mid": [ "2593262341" ], "abstract": [ "We study a spectral initialization method that serves a key role in recent work on estimating signals in nonconvex settings. Previous analysis of this method focuses on the phase retrieval problem and provides only performance bounds. In this paper, we consider arbitrary generalized linear sensing models and present a precise asymptotic characterization of the performance of the method in the high-dimensional limit. Our analysis also reveals a phase transition phenomenon that depends on the ratio between the number of samples and the signal dimension. When the ratio is below a minimum threshold, the estimates given by the spectral method are no better than random guesses drawn from a uniform distribution on the hypersphere, thus carrying no information; above a maximum threshold, the estimates become increasingly aligned with the target signal. The computational complexity of the method, as measured by the spectral gap, is also markedly different in the two phases. Worked examples and numerical results are provided to illustrate and verify the analytical predictions. In particular, simulations show that our asymptotic formulas provide accurate predictions for the actual performance of the spectral method even at moderate signal dimensions." ] }
1708.05827
2746391148
We introduce a general framework for visual forecasting, which directly imitates visual sequences without additional supervision. As a result, our model can be applied at several semantic levels and does not require any domain knowledge or handcrafted features. We achieve this by formulating visual forecasting as an inverse reinforcement learning (IRL) problem, and directly imitate the dynamics in natural sequences from their raw pixel values. The key challenge is the high-dimensional and continuous state-action space that prohibits the application of previous IRL algorithms. We address this computational bottleneck by extending recent progress in model-free imitation with trainable deep feature representations, which (1) bypasses the exhaustive state-action pair visits in dynamic programming by using a dual formulation and (2) avoids explicit state sampling at gradient computation using a deep feature reparametrization. This allows us to apply IRL at scale and directly imitate the dynamics in high-dimensional continuous visual sequences from the raw pixel values. We evaluate our approach at three different level-of-abstraction, from low level pixels to higher level semantics: future frame generation, action anticipation, visual story forecasting. At all levels, our approach outperforms existing methods.
There has been growing interest in developing computational models of human activities that can extrapolate unseen information and predict future unobserved activities @cite_50 @cite_22 @cite_8 @cite_34 @cite_38 @cite_0 @cite_5 @cite_53 . Some existing approaches @cite_50 @cite_22 @cite_38 @cite_44 @cite_0 @cite_5 tried to generate realistic future frames using generative adversarial networks @cite_20 . Unlike these methods, we emphasize longer-term sequential dynamics in videos using inverse reinforcement learning. Another line of work attempted to infer the actions or human trajectories that will occur in subsequent time-steps based on previous observations @cite_28 @cite_48 @cite_19 @cite_11 @cite_32 . Our model directly imitates the natural sequence at the pixel level and assumes no domain knowledge.
{ "cite_N": [ "@cite_38", "@cite_22", "@cite_8", "@cite_28", "@cite_48", "@cite_53", "@cite_32", "@cite_0", "@cite_44", "@cite_19", "@cite_50", "@cite_5", "@cite_34", "@cite_20", "@cite_11" ], "mid": [ "2520707650", "", "2443846596", "2424778531", "2099320314", "2963737762", "2951242004", "", "2738136547", "", "2400532028", "2470475590", "2952453038", "2099471712", "2185953016" ], "abstract": [ "We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene's foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.", "", "We presents a method for future localization: to predict plausible future trajectories of ego-motion in egocentric stereo images. Our paths avoid obstacles, move between objects, even turn around a corner into space behind objects. As a byproduct of the predicted trajectories, we discover the empty space occluded by foreground objects. One key innovation is the creation of an EgoRetinal map, akin to an illustrated tourist map, that 'rearranges' pixels taking into accounts depth information, the ground plane, and body motion direction, so that it allows motion planning and perception of objects on one image space. 
We learn to plan trajectories directly on this EgoRetinal map using first person experience of walking around in a variety of scenes. In a testing phase, given an novel scene, we find multiple hypotheses of future trajectories from the learned experience. We refine them by minimizing a cost function that describes compatibility between the obstacles in the EgoRetinal map and trajectories. We quantitatively evaluate our method to show predictive validity and apply to various real world daily activities including walking, shopping, and social interactions.", "Pedestrians follow different trajectories to avoid obstacles and accommodate fellow pedestrians. Any autonomous vehicle navigating such a scene should be able to foresee the future positions of pedestrians and accordingly adjust its path to avoid collisions. This problem of trajectory prediction can be viewed as a sequence generation task, where we are interested in predicting the future trajectory of people based on their past positions. Following the recent success of Recurrent Neural Network (RNN) models for sequence prediction tasks, we propose an LSTM model which can learn general human movement and predict their future trajectories. This is in contrast to traditional approaches which use hand-crafted functions such as Social forces. We demonstrate the performance of our method on several public datasets. Our model outperforms state-of-the-art methods on some of these datasets. We also analyze the trajectories predicted by our model to demonstrate the motion behaviour learned by our model.", "Forecasting human activities from visual evidence is an emerging area of research which aims to allow computational systems to make predictions about unseen human actions. We explore the task of activity forecasting in the context of dual-agent interactions to understand how the actions of one person can be used to predict the actions of another. 
We model dual-agent interactions as an optimal control problem, where the actions of the initiating agent induce a cost topology over the space of reactive poses – a space in which the reactive agent plans an optimal pose trajectory. The technique developed in this work employs a kernel-based reinforcement learning approximation of the soft maximum value function to deal with the high-dimensional nature of human motion and applies a mean-shift procedure over a continuous cost function to infer a smooth reaction sequence. Experimental results show that our proposed method is able to properly model human interactions in a high dimensional space of human poses. When compared to several baseline models, results show that our method is able to generate highly plausible simulations of human interaction.", "For survival, a living agent (e.g., human in Fig. 1(a)) must have the ability to assess risk (1) by temporally anticipating accidents before they occur (Fig. 1(b)), and (2) by spatially localizing risky regions (Fig. 1(c)) in the environment to move away from threats. In this paper, we take an agent-centric approach to study the accident anticipation and risky region localization tasks. We propose a novel soft-attention Recurrent Neural Network (RNN) which explicitly models both spatial and appearance-wise non-linear interaction between the agent triggering the event and another agent or static-region involved. In order to test our proposed method, we introduce the Epic Fail (EF) dataset consisting of 3000 viral videos capturing various accidents. In the experiments, we evaluate the risk assessment accuracy both in the temporal domain (accident anticipation) and spatial domain (risky region localization) on our EF dataset and the Street Accident (SA) dataset. Our method consistently outperforms other baselines on both datasets.", "Anticipating actions and objects before they start or appear is a difficult problem in computer vision with several real-world applications. 
This task is challenging partly because it requires leveraging extensive knowledge of the world that is difficult to write down. We believe that a promising resource for efficiently learning this knowledge is through readily available unlabeled video. We present a framework that capitalizes on temporal structure in unlabeled video to learn to anticipate human actions and objects. The key idea behind our approach is that we can train deep networks to predict the visual representation of images in the future. Visual representations are a promising prediction target because they encode images at a higher semantic level than pixels yet are automatic to compute. We then apply recognition algorithms on our predicted representation to anticipate objects and actions. We experimentally validate this idea on two datasets, anticipating actions one second in the future and objects five seconds in the future.", "", "We learn models to generate the immediate future in video. This problem has two main challenges. Firstly, since the future is uncertain, models should be multi-modal, which can be difficult to learn. Secondly, since the future is similar to the past, models store low-level details, which complicates learning of high-level semantics. We propose a framework to tackle both of these challenges. We present a model that generates the future by transforming pixels in the past. Our approach explicitly disentangles the models memory from the prediction, which helps the model learn desirable invariances. Experiments suggest that this model can generate short videos of plausible futures. We believe predictive models have many applications in robotics, health-care, and video understanding.", "", "A core challenge for an agent learning to interact with the world is to predict how its actions affect objects in its environment. Many existing methods for learning the dynamics of physical interactions require labeled object information. 
However, to scale real-world interaction learning to a variety of scenes and objects, acquiring labeled data becomes increasingly impractical. To learn about physical object motion without labels, we develop an action-conditioned video prediction model that explicitly models pixel motion, by predicting a distribution over pixel motion from previous frames. Because our model explicitly predicts motion, it is partially invariant to object appearance, enabling it to generalize to previously unseen objects. To explore video prediction for real-world interactive agents, we also introduce a dataset of 59,000 robot interactions involving pushing motions, including a test set with novel objects. In this dataset, accurate prediction of videos conditioned on the robot's future actions amounts to learning a \"visual imagination\" of different futures based on different courses of action. Our experiments show that our proposed method produces more accurate video predictions both quantitatively and qualitatively, when compared to prior methods.", "We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods, which have tackled this problem in a deterministic or non-parametric way, we propose a novel approach that models future frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible future frames from a single input image. Future frame synthesis is challenging, as it involves low- and high-level image and motion understanding. We propose a novel network structure, namely a Cross Convolutional Network to aid in synthesizing future frames; this network structure encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-wold videos. 
We also show that our model can be applied to tasks such as visual analogy-making, and present an analysis of the learned network representations.", "We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. 
In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "We consider inferring the future actions of people from a still image or a short video clip. Predicting future actions before they are actually executed is a critical ingredient for enabling us to effectively interact with other humans on a daily basis. However, challenges are two fold: First, we need to capture the subtle details inherent in human movements that may imply a future action; second, predictions usually should be carried out as quickly as possible in the social world, when limited prior observations are available." ] }
1708.05827
2746391148
We introduce a general framework for visual forecasting, which directly imitates visual sequences without additional supervision. As a result, our model can be applied at several semantic levels and does not require any domain knowledge or handcrafted features. We achieve this by formulating visual forecasting as an inverse reinforcement learning (IRL) problem, and directly imitate the dynamics in natural sequences from their raw pixel values. The key challenge is the high-dimensional and continuous state-action space that prohibits the application of previous IRL algorithms. We address this computational bottleneck by extending recent progress in model-free imitation with trainable deep feature representations, which (1) bypasses the exhaustive state-action pair visits in dynamic programming by using a dual formulation and (2) avoids explicit state sampling at gradient computation using a deep feature reparametrization. This allows us to apply IRL at scale and directly imitate the dynamics in high-dimensional continuous visual sequences from the raw pixel values. We evaluate our approach at three different level-of-abstraction, from low level pixels to higher level semantics: future frame generation, action anticipation, visual story forecasting. At all levels, our approach outperforms existing methods.
Reinforcement learning (RL) achieves remarkable success in multiple domains ranging from robotics @cite_14 to computer vision @cite_45 @cite_13 @cite_3 and natural language processing @cite_23 @cite_41 . In the RL setting, the reward function that the agent aims to maximize is given as a signal for training, and the goal is to learn a behavior that maximizes the expected reward. In contrast, we work on the inverse reinforcement learning (IRL) problem, where the reward function must be discovered from demonstrated behavior @cite_51 @cite_27 @cite_2 . This is inspired by recent progress of IRL in computer vision @cite_12 @cite_48 @cite_19 @cite_43 @cite_37 @cite_35 . Nonetheless, these frameworks require heavy use of domain knowledge to construct the handcrafted features that are important to the task. Unlike these approaches, we aim to generalize IRL to natural sequential data without annotations.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_14", "@cite_41", "@cite_48", "@cite_3", "@cite_19", "@cite_27", "@cite_45", "@cite_43", "@cite_23", "@cite_2", "@cite_51", "@cite_13", "@cite_12" ], "mid": [ "2056120433", "", "2575705757", "2523469089", "2099320314", "2138068405", "", "2061562262", "2102179764", "", "2581637843", "2098774185", "2434014514", "2951527505", "2290104316" ], "abstract": [ "In this paper we present a conceptually simple but surprisingly powerful method for visual prediction which combines the effectiveness of mid-level visual elements with temporal modeling. Our framework can be learned in a completely unsupervised manner from a large collection of videos. However, more importantly, because our approach models the prediction framework on these mid-level elements, we can not only predict the possible motion in the scene but also predict visual appearances--how are appearances going to change with time. This yields a visual \"hallucination\" of probable events on top of the scene. We show that our method is able to accurately predict and visualize simple future events, we also show that our approach is comparable to supervised methods for event prediction.", "", "Reinforcement learning holds the promise of enabling autonomous robots to learn large repertoires of behavioral skills with minimal human intervention. However, robotic applications of reinforcement learning often compromise the autonomy of the learning process in favor of achieving training times that are practical for real physical systems. This typically involves introducing hand-engineered policy representations and human-supplied demonstrations. Deep reinforcement learning alleviates this limitation by training general-purpose neural network policies, but applications of direct deep reinforcement learning algorithms have so far been restricted to simulated settings and relatively simple tasks, due to their apparent high sample complexity. 
In this paper, we demonstrate that a recent deep reinforcement learning algorithm based on off-policy training of deep Q-functions can scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough to train on real physical robots. We demonstrate that the training times can be further reduced by parallelizing the algorithm across multiple robots which pool their policy updates asynchronously. Our experimental evaluation shows that our method can learn a variety of 3D manipulation skills in simulation and a complex door opening skill on real robots without any prior demonstrations or manually designed representations.", "As a new way of training generative models, Generative Adversarial Nets (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is non-trivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. 
Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.", "Forecasting human activities from visual evidence is an emerging area of research which aims to allow computational systems to make predictions about unseen human actions. We explore the task of activity forecasting in the context of dual-agent interactions to understand how the actions of one person can be used to predict the actions of another. We model dual-agent interactions as an optimal control problem, where the actions of the initiating agent induce a cost topology over the space of reactive poses – a space in which the reactive agent plans an optimal pose trajectory. The technique developed in this work employs a kernel-based reinforcement learning approximation of the soft maximum value function to deal with the high-dimensional nature of human motion and applies a mean-shift procedure over a continuous cost function to infer a smooth reaction sequence. Experimental results show that our proposed method is able to properly model human interactions in a high dimensional space of human poses. When compared to several baseline models, results show that our method is able to generate highly plausible simulations of human interaction.", "This work provides a framework for learning sequential attention in real-world visual object recognition, using an architecture of three processing stages. The first stage rejects irrelevant local descriptors based on an information theoretic saliency measure, providing candidates for foci of interest (FOI). The second stage investigates the information in the FOI using a codebook matcher and providing weak object hypotheses. The third stage integrates local information via shifts of attention, resulting in chains of descriptor-action pairs that characterize object discrimination. 
A Q-learner adapts then from explorative search and evaluative feedback from entropy decreases on the attention sequences, eventually prioritizing shifts that lead to a geometry of descriptor-action scanpaths that is highly discriminative with respect to object recognition. The methodology is successfully evaluated on indoors (COIL-20 database) and outdoors (TSG-20 database) imagery, demonstrating significant impact by learning, outperforming standard local descriptor based methods both in recognition accuracy and processing time.", "", "Objective—To evaluate the pharmacokinetics of a novel commercial formulation of ivermectin after administration to goats. Animals—6 healthy adult goats. Procedure—Ivermectin (200 μg kg) was initially administered IV to each goat, and plasma samples were obtained for 36 days. After a washout period of 3 weeks, each goat received a novel commercial formulation of ivermectin (200 μg kg) by SC injection. Plasma samples were then obtained for 42 days. Drug concentrations were quantified by use of high-performance liquid chromatography with fluorescence detection. Results—Pharmacokinetics of ivermectin after IV administration were best described by a 2-compartment open model; values for main compartmental variables included volume of distribution at a steady state (9.94 L kg), clearance (1.54 L kg d), and area under the plasma concentration-time curve (AUC; 143 [ng•d] mL). Values for the noncompartmental variables included mean residence time (7.37 days), AUC (153 [ng•d] mL), and clearance (1.43 L kg d). After ...", "Recent years have seen the development of fast and accurate algorithms for detecting objects in images. However, as the size of the scene grows, so do the running-times of these algorithms. If a 128×102 pixel image requires 20 ms to process, searching for objects in a 1280×1024 image will take 2 s. This is unsuitable under real-time operating constraints: by the time a frame has been processed, the object may have moved. 
An analogous problem occurs when controlling robot camera that need to scan scenes in search of target objects. In this paper, we consider a method for improving the run-time of general-purpose object-detection algorithms. Our method is based on a model of visual search in humans, which schedules eye fixations to maximize the long-term information accrued about the location of the target of interest. The approach can be used to drive robot cameras that physically scan scenes or to improve the scanning speed for very large high resolution images. We consider the latter application in this work by simulating a “digital fovea” and sequentially placing it in various regions of an image in a way that maximizes the expected information gain. We evaluate the approach using the OpenCV version of the Viola-Jones face detector. After accounting for all computational overhead introduced by the fixation controller, the approach doubles the speed of the standard Viola-Jones detector at little cost in accuracy.", "", "", "Recent research has shown the benefit of framing problems of imitation learning as solutions to Markov Decision Problems. This approach reduces learning to the problem of recovering a utility function that makes the behavior induced by a near-optimal policy closely mimic demonstrated behavior. In this work, we develop a probabilistic approach based on the principle of maximum entropy. Our approach provides a well-defined, globally normalized distribution over decision sequences, while providing the same performance guarantees as existing methods. We develop our technique in the context of modeling real-world navigation and driving behaviors where collected data is inherently noisy and imperfect. 
Our probabilistic approach enables modeling of route preferences as well as a powerful new approach to inferring destinations and routes based on partial trajectories.", "Consider learning a policy from example expert behavior, without interaction with the expert or access to reinforcement signal. One approach is to recover the expert's cost function with inverse reinforcement learning, then extract a policy from that cost function with reinforcement learning. This approach is indirect and can be slow. We propose a new general framework for directly extracting a policy from data, as if it were obtained by reinforcement learning following inverse reinforcement learning. We show that a certain instantiation of our framework draws an analogy between imitation learning and generative adversarial networks, from which we derive a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large, high-dimensional environments.", "Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. 
We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.", "Reinforcement learning can acquire complex behaviors from high-level specifications. However, defining a cost function that can be optimized effectively and encodes the correct task is challenging in practice. We explore how inverse optimal control (IOC) can be used to learn behaviors from demonstrations, with applications to torque control of high-dimensional robotic systems. Our method addresses two key challenges in inverse optimal control: first, the need for informative features and effective regularization to impose structure on the cost, and second, the difficulty of learning the cost function under unknown dynamics for high-dimensional continuous systems. To address the former challenge, we present an algorithm capable of learning arbitrary nonlinear cost functions, such as neural networks, without meticulous feature engineering. To address the latter challenge, we formulate an efficient sample-based approximation for MaxEnt IOC. We evaluate our method on a series of simulated tasks and real-world robotic manipulation problems, demonstrating substantial improvement over prior methods both in terms of task complexity and sample efficiency." ] }
1708.05827
2746391148
We introduce a general framework for visual forecasting, which directly imitates visual sequences without additional supervision. As a result, our model can be applied at several semantic levels and does not require any domain knowledge or handcrafted features. We achieve this by formulating visual forecasting as an inverse reinforcement learning (IRL) problem, and directly imitate the dynamics in natural sequences from their raw pixel values. The key challenge is the high-dimensional and continuous state-action space that prohibits the application of previous IRL algorithms. We address this computational bottleneck by extending recent progress in model-free imitation with trainable deep feature representations, which (1) bypasses the exhaustive state-action pair visits in dynamic programming by using a dual formulation and (2) avoids explicit state sampling at gradient computation using a deep feature reparametrization. This allows us to apply IRL at scale and directly imitate the dynamics in high-dimensional continuous visual sequences from the raw pixel values. We evaluate our approach at three different level-of-abstraction, from low level pixels to higher level semantics: future frame generation, action anticipation, visual story forecasting. At all levels, our approach outperforms existing methods.
Our extension of generative adversarial imitation learning @cite_51 is related to recent progress in generative adversarial networks (GANs) @cite_20 . While there have been multiple works applying GANs to image and video generation @cite_10 @cite_31 @cite_38 @cite_16 , we extend this line of work to long-term prediction of natural visual sequences and directly imitate the high-dimensional continuous sequence.
{ "cite_N": [ "@cite_38", "@cite_51", "@cite_31", "@cite_16", "@cite_10", "@cite_20" ], "mid": [ "2520707650", "2434014514", "2173520492", "2963567641", "", "2099471712" ], "abstract": [ "We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene's foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.", "Consider learning a policy from example expert behavior, without interaction with the expert or access to reinforcement signal. One approach is to recover the expert's cost function with inverse reinforcement learning, then extract a policy from that cost function with reinforcement learning. This approach is indirect and can be slow. We propose a new general framework for directly extracting a policy from data, as if it were obtained by reinforcement learning following inverse reinforcement learning. 
We show that a certain instantiation of our framework draws an analogy between imitation learning and generative adversarial networks, from which we derive a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large, high-dimensional environments.", "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.", "This paper investigates a novel problem of generating images from visual attributes. We model the image as a composite of foreground and background and develop a layered generative model with disentangled latent variables that can be learned end-to-end using a variational auto-encoder. We experiment with natural images of faces and birds and demonstrate that the proposed models are capable of generating realistic and diverse samples with disentangled latent representations. We use a general energy minimization algorithm for posterior inference of latent variables given novel images. 
Therefore, the learned generative models show excellent quantitative and visual results in the tasks of attribute-conditioned image reconstruction and completion.", "", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ] }
1906.00588
2947756088
Neural Networks (NNs) have been extensively used for a wide spectrum of real-world regression tasks, where the goal is to predict a numerical outcome such as revenue, effectiveness, or a quantitative result. In many such tasks, the point prediction is not enough, but also the uncertainty (i.e. risk, or confidence) of that prediction must be estimated. Standard NNs, which are most often used in such tasks, do not provide any such information. Existing approaches try to solve this issue by combining Bayesian models with NNs, but these models are hard to implement, more expensive to train, and usually do not perform as well as standard NNs. In this paper, a new framework called RIO is developed that makes it possible to estimate uncertainty in any pretrained standard NN. RIO models prediction residuals using Gaussian Process with a composite input/output kernel. The residual prediction and I/O kernel are theoretically motivated and the framework is evaluated in twelve real-world datasets. It is found to provide reliable estimates of the uncertainty, reduce the error of the point predictions, and scale well to large datasets. Given that RIO can be applied to any standard NN without modifications to model architecture or training pipeline, it provides an important ingredient in building real-world applications of NNs.
There has been significant interest in combining NNs with probabilistic Bayesian models. An early approach was Bayesian Neural Networks, in which a prior distribution is defined on the weights and biases of a NN, and a posterior distribution is then inferred from the training data @cite_28 @cite_9 . Traditional variational inference techniques have been applied to the learning procedure of Bayesian NNs, but with limited success @cite_30 @cite_21 @cite_1 . By using a more advanced variational inference method, new approximations for Bayesian NNs were achieved that provided similar performance as dropout NNs @cite_3 . However, the main drawbacks of Bayesian NNs remain: prohibitive computational cost and a difficult implementation procedure compared to standard NNs.
{ "cite_N": [ "@cite_30", "@cite_28", "@cite_9", "@cite_21", "@cite_1", "@cite_3" ], "mid": [ "71499226", "2111051539", "1567512734", "2108677974", "2047229728", "2164411961" ], "abstract": [ "Bayesian treatments of learning in neural networks are typically based either on a local Gaussian approximation to a mode of the posterior weight distribution, or on Markov chain Monte Carlo simulations. A third approach, called ensemble learning, was introduced by Hinton and van Camp (1993). It aims to approximate the posterior distribution by minimizing the Kullback-Leibler divergence between the true posterior and a parametric approximating distribution. The original derivation of a deterministic algorithm relied on the use of a Gaussian approximating distribution with a diagonal covariance matrix and hence was unable to capture the posterior correlations between parameters. In this chapter we show how the ensemble learning approach can be extended to full-covariance Gaussian distributions while remaining computationally tractable. We also extend the framework to deal with hyperparameters, leading to a simple re-estimation procedure. One of the benefits of our approach is that it yields a strict lower bound on the marginal likelihood, in contrast to other approximate procedures.", "A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. 
The framework makes possible (1) objective comparisons between solutions using alternative network architectures, (2) objective stopping rules for network pruning or growing procedures, (3) objective choice of magnitude and type of weight decay terms or additive regularizers (for penalizing large weights, etc.), (4) a measure of the effective number of well-determined parameters in a model, (5) quantified estimates of the error bars on network parameters and on network output, and (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian \"evidence\" automatically embodies \"Occam's razor,\" penalizing overflexible and overcomplex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalization ability and the Bayesian evidence is obtained.", "From the Publisher: Artificial \"neural networks\" are now widely used as flexible models for regression classification applications, but questions remain regarding what these models mean, and how they can safely be used when training data is limited. Bayesian Learning for Neural Networks shows that Bayesian methods allow complex neural network models to be used without fear of the \"overfitting\" that can occur with traditional neural network learning methods. Insight into the nature of these complex Bayesian models is provided by a theoretical investigation of the priors over functions that underlie them. Use of these models in practice is made possible using Markov chain Monte Carlo techniques. Both the theoretical and computational aspects of this work are of wider statistical interest, as they contribute to a better understanding of how Bayesian methods can be applied to complex problems. 
Presupposing only the basic knowledge of probability and statistics, this book should be of interest to many researchers in statistics, engineering, and artificial intelligence. Software for Unix systems that implements the methods described is freely available over the Internet.", "Variational methods have been previously explored as a tractable approximation to Bayesian inference for neural networks. However the approaches proposed so far have only been applicable to a few simple network architectures. This paper introduces an easy-to-implement stochastic variational method (or equivalently, minimum description length loss function) that can be applied to most neural networks. Along the way it revisits several common regularisers from a variational perspective. It also provides a simple pruning heuristic that can both drastically reduce the number of network weights and lead to improved generalisation. Experimental results are provided for a hierarchical multidimensional recurrent neural network applied to the TIMIT speech corpus.", "", "We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning." ] }
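The RIO idea summarized above, fitting a Gaussian Process to the residuals of a pretrained network with a kernel that combines the network's inputs and outputs, can be sketched in plain NumPy. This is an illustrative sketch only: the "pretrained NN" here is a stand-in linear model, and the kernel lengthscales and noise level are arbitrary choices, not the paper's settings.

```python
import numpy as np

def rbf(A, B, ls):
    # squared-exponential kernel between the rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(80)

# stand-in for a pretrained NN: a crude linear least-squares fit
w = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)[0]
def nn(X):
    return np.c_[X, np.ones(len(X))] @ w

resid = y - nn(X)            # residuals that the GP will model
Yhat = nn(X)[:, None]        # network outputs, used as kernel inputs

# composite I/O kernel: sum of an input kernel and an output kernel
def k_io(X1, Y1, X2, Y2):
    return rbf(X1, X2, 1.0) + rbf(Y1, Y2, 1.0)

noise = 0.1 ** 2
K = k_io(X, Yhat, X, Yhat) + noise * np.eye(len(X))
alpha = np.linalg.solve(K, resid)

def predict(Xs):
    """Corrected point prediction and GP predictive variance."""
    Ys = nn(Xs)[:, None]
    Ks = k_io(Xs, Ys, X, Yhat)
    mean_corr = Ks @ alpha   # GP estimate of the residual
    prior = (rbf(Xs, Xs, 1.0) + rbf(Ys, Ys, 1.0)).diagonal()
    var = prior - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return nn(Xs) + mean_corr, var
```

On the training inputs, the residual correction should reduce the squared error of the stand-in predictor while also attaching a variance to every prediction.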
1906.00402
2947069245
In dealing with constrained multi-objective optimization problems (CMOPs), a key issue of multi-objective evolutionary algorithms (MOEAs) is to balance the convergence and diversity of working populations.
The push and pull search (PPS) framework was introduced by @cite_15 . Unlike other constraint-handling mechanisms, PPS divides the search process into two stages, push search and pull search, following a procedure of "push first and pull second": in the push stage, the working population is pushed toward the unconstrained PF without considering any constraints; in the pull stage, a constraint-handling mechanism pulls the working population to the constrained PF.
{ "cite_N": [ "@cite_15" ], "mid": [ "2963115819" ], "abstract": [ "Abstract This paper proposes a push and pull search (PPS) framework for solving constrained multi-objective optimization problems (CMOPs). To be more specific, the proposed PPS divides the search process into two different stages: push and pull search stages. In the push stage, a multi-objective evolutionary algorithm (MOEA) is used to explore the search space without considering any constraints, which can help to get across infeasible regions very quickly and to approach the unconstrained Pareto front. Furthermore, the landscape of CMOPs with constraints can be probed and estimated in the push stage, which can be utilized to conduct the parameter setting for the constraint-handling approaches to be applied in the pull stage. Then, a modified form of a constrained multi-objective evolutionary algorithm (CMOEA), with improved epsilon constraint-handling, is applied to pull the infeasible individuals achieved in the push stage to the feasible and non-dominated regions. To evaluate the performance regarding convergence and diversity, a set of benchmark CMOPs and a real-world optimization problem are used to test the proposed PPS (PPS-MOEA D) and state-of-the-art CMOEAs, including MOEA D-IEpsilon, MOEA D-Epsilon, MOEA D-CDP, MOEA D-SR, C-MOEA D and NSGA-II-CDP. The comprehensive experimental results show that the proposed PPS-MOEA D achieves significantly better performance than the other six CMOEAs on most of the tested problems, which indicates the superiority of the proposed PPS method for solving CMOPs." ] }
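The "push first, pull second" control flow can be illustrated on a toy problem. This is a deliberately minimal sketch, not the PPS-MOEA/D algorithm: the decomposition is a plain weighted sum, the search operator is random local mutation, and the epsilon level in the pull stage simply decays linearly — all assumptions made for brevity.

```python
import random

random.seed(1)

def f(x):            # toy bi-objective problem: minimize (x, 1 - x) on [0, 1]
    return (x, 1.0 - x)

def cv(x):           # constraint violation: the feasible region is [0.3, 0.7]
    return max(0.0, 0.3 - x) + max(0.0, x - 0.7)

def scalar(x, w):    # weighted-sum decomposition of the two objectives
    f1, f2 = f(x)
    return w * f1 + (1 - w) * f2

def mutate(x):
    return min(1.0, max(0.0, x + random.gauss(0, 0.05)))

n = 20
weights = [i / (n - 1) for i in range(n)]
pop = [random.random() for _ in range(n)]

# push stage: ignore constraints entirely, approach the unconstrained front
for _ in range(200):
    for i in range(n):
        c = mutate(pop[i])
        if scalar(c, weights[i]) < scalar(pop[i], weights[i]):
            pop[i] = c

push_violation = sum(cv(x) for x in pop) / n

# pull stage: epsilon constraint handling with a shrinking epsilon level
for t in range(200):
    eps = 0.5 * (1 - t / 199)
    for i in range(n):
        c = mutate(pop[i])
        cvc, cvp = cv(c), cv(pop[i])
        if cvc <= eps and cvp <= eps:     # both treated as feasible: compare objectives
            better = scalar(c, weights[i]) < scalar(pop[i], weights[i])
        else:                             # otherwise: smaller violation wins
            better = cvc < cvp
        if better:
            pop[i] = c

pull_violation = sum(cv(x) for x in pop) / n
```

After the push stage the population sits on the unconstrained front (and is largely infeasible); the pull stage then drives the average constraint violation toward zero.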
1906.00402
2947069245
In dealing with constrained multi-objective optimization problems (CMOPs), a key issue of multi-objective evolutionary algorithms (MOEAs) is to balance the convergence and diversity of working populations.
Employing a number of sub-populations to solve a problem in a collaborative way @cite_26 is a widely used approach that helps an algorithm balance its convergence and diversity. One of the most popular methods is the M2M population decomposition approach @cite_41 , which decomposes a multi-objective optimization problem into a number of simpler multi-objective optimization subproblems at initialization and then solves these subproblems simultaneously in a coordinated manner. For this purpose, @math unit vectors @math in @math are chosen in the first octant of the objective space. Then @math is divided into @math subregions @math , where @math , and @math is the acute angle between @math and @math . The population is thus decomposed into @math sub-populations, and each sub-population searches for a different multi-objective subproblem. Subproblem @math is defined as:
{ "cite_N": [ "@cite_41", "@cite_26" ], "mid": [ "2058142975", "2774381817" ], "abstract": [ "This letter suggests an approach for decomposing a multiobjective optimization problem (MOP) into a set of simple multiobjective optimization subproblems. Using this approach, it proposes MOEA D-M2M, a new version of multiobjective optimization evolutionary algorithm-based decomposition. This proposed algorithm solves these subproblems in a collaborative way. Each subproblem has its own population and receives computational effort at each generation. In such a way, population diversity can be maintained, which is critical for solving some MOPs. Experimental studies have been conducted to compare MOEA D-M2M with classic MOEA D and NSGA-II. This letter argues that population diversity is more important than convergence in multiobjective evolutionary algorithms for dealing with some MOPs. It also explains why MOEA D-M2M performs better.", "Abstract This paper proposes a parallel hurricane optimization algorithm (PHOA) for solving economic emission load dispatch (EELD) problem in modern power systems. In PHOA, several sub-populations moving independently in the search space with the aim of simultaneously optimizing the problem objectives considering the local behavior between sub-populations. By this way, it is intended to search for the Pareto optimal solutions which are contrasting to the single optimal solution. The inherent characteristics of parallelization strategy can enhance the Pareto solutions and increase the convergence to reach the Pareto optimal solutions. Simulations are conducted on three test systems and comparisons with other optimization techniques that reported in the literature are demonstrated. The obtained results demonstrate the superiority of the proposed PHOA compared to other optimization techniques. Additional economic benefits with secure settings are fulfilled, while preserving all system constraints within their permissible limits. 
Added to that, two security indices are proposed from generation units and transmission lines. The highest security index from generation units reflects that the operating condition achieves more power reserve. In transmission lines, the highest security index means that the transmission lines operated beyond their congestion limits. For justification of the proposed security indices, the proposed solution methodology is employed to assure their benefits in terms of economical and environmental issues. The proposed algorithm improves the economic issue as well as enhances the power system operation in the technical point of view with acceptable levels of emissions. Moreover, design of experiments using the Taguchi approach is employed to calibrate the parameters of the algorithms. So, it can be considered as a promising alternative algorithm for solving problems in practical large-scale power systems." ] }
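The M2M-style split of the objective space by acute angles can be sketched as follows: each objective vector is assigned to the subregion of the direction vector with which it forms the smallest acute angle. The direction vectors and test points below are invented for illustration.

```python
import numpy as np

def assign_subregions(F, V):
    """Assign each objective vector (row of F) to the direction vector in V
    with which it forms the smallest acute angle (M2M-style decomposition)."""
    Fn = F / np.linalg.norm(F, axis=1, keepdims=True)
    Vn = V / np.linalg.norm(V, axis=1, keepdims=True)
    cos = np.clip(Fn @ Vn.T, -1.0, 1.0)
    return np.argmax(cos, axis=1)     # maximal cosine == minimal angle

# K = 3 direction vectors in the first octant of a 2-D objective space
V = np.array([[1.0, 0.2], [1.0, 1.0], [0.2, 1.0]])
F = np.array([[0.9, 0.1],    # closest in angle to the first direction
              [0.5, 0.5],    # on the diagonal
              [0.1, 0.8]])   # closest in angle to the third direction
labels = assign_subregions(F, V)
```

Each sub-population then evolves only the points whose labels fall in its subregion, which is what keeps the working population spread across the front.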
1906.00642
2947251021
As an important semi-supervised learning task, positive-unlabeled (PU) learning aims to learn a binary classifier only from positive and unlabeled data. In this article, we develop a novel PU learning framework, called discriminative adversarial networks (DAN), which contains two discriminative models represented by deep neural networks. One model @math predicts the conditional probability of the positive label for a given sample, which defines a Bayes classifier after training, and the other model @math distinguishes labeled positive data from those identified by @math . The two models are simultaneously trained in an adversarial way like generative adversarial networks, and the equilibrium can be achieved when the output of @math is close to the exact posterior probability of the positive class. In contrast with existing deep PU learning approaches, DAN does not require the class prior estimation, and its consistency can be proved under very general conditions. Numerical experiments demonstrate the effectiveness of the proposed framework.
An important idea of DAN is to approximate @math by matching @math and @math , which has in fact been investigated in the literature (see, e.g., @cite_14 @cite_24 @cite_35 @cite_12 @cite_20 ). However, the direct approximation based on ) involves probability density estimation and is difficult for high-dimensional applications. In @cite_12 @cite_20 , by modeling the ratio between @math and @math as a linear combination of basis functions, this problem is transformed into a quadratic programming problem. However, the resulting approximations are not accurate enough for classification and are only applicable to estimating the class prior of @math . One main contribution of our approach compared to the previous works is that we find a general and effective way to optimize the model of @math by adversarial training.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_24", "@cite_12", "@cite_20" ], "mid": [ "2166944917", "2062291443", "2127883478", "2616198738", "2707018199" ], "abstract": [ "We develop and analyze M-estimation methods for divergence functionals and the likelihood ratios of two probability distributions. Our method is based on a nonasymptotic variational characterization of f -divergences, which allows the problem of estimating divergences to be tackled via convex empirical risk optimization. The resulting estimators are simple to implement, requiring only the solution of standard convex programs. We present an analysis of consistency and convergence for these estimators. Given conditions only on the ratios of densities, we show that our estimators can achieve optimal minimax rates for the likelihood ratio and the divergence functionals in certain regimes. We derive an efficient optimization algorithm for computing our estimates, and illustrate their convergence behavior and practical viability by simulations.", "A situation where training and test samples follow different input distributions is called covariate shift. Under covariate shift, standard learning methods such as maximum likelihood estimation are no longer consistent—weighted variants according to the ratio of test and training input densities are consistent. Therefore, accurately estimating the density ratio, called the importance, is one of the key issues in covariate shift adaptation. A naive approach to this task is to first estimate training and test input densities separately and then estimate the importance by taking the ratio of the estimated densities. However, this naive approach tends to perform poorly since density estimation is a hard task particularly in high dimensional cases. In this paper, we propose a direct importance estimation method that does not involve density estimation. 
Our method is equipped with a natural cross validation procedure and hence tuning parameters such as the kernel width can be objectively optimized. Furthermore, we give rigorous mathematical proofs for the convergence of the proposed algorithm. Simulations illustrate the usefulness of our approach.", "A common setting for novelty detection assumes that labeled examples from the nominal class are available, but that labeled examples of novelties are unavailable. The standard (inductive) approach is to declare novelties where the nominal density is low, which reduces the problem to density level set estimation. In this paper, we consider the setting where an unlabeled and possibly contaminated sample is also available at learning time. We argue that novelty detection in this semi-supervised setting is naturally solved by a general reduction to a binary classification problem. In particular, a detector with a desired false positive rate can be achieved through a reduction to Neyman-Pearson classification. Unlike the inductive approach, semi-supervised novelty detection (SSND) yields detectors that are optimal (e.g., statistically consistent) regardless of the distribution on novelties. Therefore, in novelty detection, unlabeled data have a substantial impact on the theoretical properties of the decision rule. We validate the practical utility of SSND with an extensive experimental study. We also show that SSND provides distribution-free, learning-theoretic solutions to two well known problems in hypothesis testing. First, our results provide a general solution to the general two-sample problem, that is, the problem of determining whether two random samples arise from the same distribution. Second, a specialization of SSND coincides with the standard p-value approach to multiple testing under the so-called random effects model. 
Unlike standard rejection regions based on thresholded p-values, the general SSND framework allows for adaptation to arbitrary alternative distributions in multiple dimensions.", "In real-world classification problems, the class balance in the training dataset does not necessarily reflect that of the test dataset, which can cause significant estimation bias. If the class ratio of the test dataset is known, instance re-weighting or resampling allows systematical bias correction. However, learning the class ratio of the test dataset is challenging when no labeled data is available from the test domain. In this paper, we propose to estimate the class ratio in the test dataset by matching probability distributions of training and test input data. We demonstrate the utility of the proposed approach through experiments.", "" ] }
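The quadratic-problem approach mentioned above, modeling the ratio between two densities as a linear combination of basis functions, can be sketched in the spirit of least-squares importance fitting: a ridge-regularized linear system replaces the quadratic program. The Gaussian basis centers, the regularization strength, and the toy distributions are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x_nu = rng.normal(0.0, 1.0, 200)    # numerator samples    ~ p
x_de = rng.normal(0.5, 1.2, 200)    # denominator samples  ~ q

centers = x_nu[:20]                 # Gaussian basis centers (a common heuristic)
def phi(x):
    return np.exp(-0.5 * (x[:, None] - centers[None, :]) ** 2)

# least-squares fit of r(x) = sum_l theta_l * phi_l(x) to p/q:
# minimize 0.5*theta'H theta - h'theta + 0.5*lam*|theta|^2
H = phi(x_de).T @ phi(x_de) / len(x_de)
h = phi(x_nu).mean(axis=0)
lam = 0.1
theta = np.linalg.solve(H + lam * np.eye(len(centers)), h)

def ratio(x):
    # clip negative values: a density ratio is nonnegative by definition
    return np.maximum(phi(x) @ theta, 0.0)
```

Because p has more mass than q to the left of the origin and less far to the right, the fitted ratio should be larger near x = -1 than near x = 2.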
1906.00642
2947251021
As an important semi-supervised learning task, positive-unlabeled (PU) learning aims to learn a binary classifier only from positive and unlabeled data. In this article, we develop a novel PU learning framework, called discriminative adversarial networks (DAN), which contains two discriminative models represented by deep neural networks. One model @math predicts the conditional probability of the positive label for a given sample, which defines a Bayes classifier after training, and the other model @math distinguishes labeled positive data from those identified by @math . The two models are simultaneously trained in an adversarial way like generative adversarial networks, and the equilibrium can be achieved when the output of @math is close to the exact posterior probability of the positive class. In contrast with existing deep PU learning approaches, DAN does not require the class prior estimation, and its consistency can be proved under very general conditions. Numerical experiments demonstrate the effectiveness of the proposed framework.
It is also interesting to compare DAN to GenPU, a GAN-based PU learning method @cite_23 , since they share a similar adversarial training architecture. In DAN, the discriminative model @math plays the role of the generative model in GAN by approximating the positive data distribution in an implicit way, and can be efficiently trained together with @math . In contrast, GenPU is much more time-consuming and easily suffers from mode collapse, as stated in @cite_23 , because it contains three generators and two discriminators. (Notice that the penalty factor @math cannot be applied to GenPU, since the probability densities of samples given by the generators are unknown.) Furthermore, the consistency of GenPU requires the assumptions that the class prior is given and that there is no overlap between the positive and negative data distributions; neither assumption is necessary for DAN.
{ "cite_N": [ "@cite_23" ], "mid": [ "2949895856" ], "abstract": [ "In this work, we consider the task of classifying binary positive-unlabeled (PU) data. The existing discriminative learning based PU models attempt to seek an optimal reweighting strategy for U data, so that a decent decision boundary can be found. However, given limited P data, the conventional PU models tend to suffer from overfitting when adapted to very flexible deep neural networks. In contrast, we are the first to innovate a totally new paradigm to attack the binary PU task, from perspective of generative learning by leveraging the powerful generative adversarial networks (GAN). Our generative positive-unlabeled (GenPU) framework incorporates an array of discriminators and generators that are endowed with different roles in simultaneously producing positive and negative realistic samples. We provide theoretical analysis to justify that, at equilibrium, GenPU is capable of recovering both positive and negative data distributions. Moreover, we show GenPU is generalizable and closely related to the semi-supervised classification. Given rather limited P data, experiments on both synthetic and real-world dataset demonstrate the effectiveness of our proposed framework. With infinite realistic and diverse sample streams generated from GenPU, a very flexible classifier can then be trained using deep neural networks." ] }
1906.00424
2947626630
Unilateral contracts, such as terms of service, play a substantial role in modern digital life. However, few users read these documents before accepting the terms within, as they are too long and the language too complicated. We propose the task of summarizing such legal documents in plain English, which would enable users to have a better understanding of the terms they are accepting. We propose an initial dataset of legal text snippets paired with summaries written in plain English. We verify the quality of these summaries manually and show that they involve heavy abstraction, compression, and simplification. Initial experiments show that unsupervised extractive summarization methods do not perform well on this task due to the level of abstraction and style differences. We conclude with a call for resource and technique development for simplification and style transfer for legal language.
The dataset we present summarizes contracts in plain English. While there is no precise definition of plain English, the general philosophy is to make a text readily accessible to as many English speakers as possible @cite_21 @cite_20 . Guidelines for plain English often suggest a preference for words with Saxon etymologies rather than Latin Romance etymologies, the use of short words, sentences, and paragraphs, etc. https: plainlanguage.gov guidelines @cite_20 @cite_18 . In this respect, the proposed task involves some level of , as we will discuss in . However, existing resources for text simplification target literacy reading levels @cite_29 or learners of English as a second language @cite_25 . Additionally, these models are trained using Wikipedia or news articles, which are quite different from legal documents. These systems are trained without access to sentence-aligned parallel corpora; they only require semantically similar texts @cite_8 @cite_30 @cite_3 . To the best of our knowledge, however, there is no existing dataset to facilitate the transfer of legal language to plain English.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_8", "@cite_29", "@cite_21", "@cite_3", "@cite_25", "@cite_20" ], "mid": [ "2962912551", "1507711477", "2963366196", "1746111881", "1982165862", "2963667126", "2109802560", "2120879767" ], "abstract": [ "Binary classifiers are employed as discriminators in GAN-based unsupervised style transfer models to ensure that transferred sentences are similar to sentences in the target domain. One difficulty with the binary discriminator is that error signal is sometimes insufficient to train the model to produce rich-structured language. In this paper, we propose a technique of using a target domain language model as the discriminator to provide richer, token-level feedback during the learning process. Because our language model scores sentences directly using a product of locally normalized probabilities, it offers more stable and more useful training signal to the generator. We train the generator to minimize the negative log likelihood (NLL) of generated sentences evaluated by a language model. By using continuous approximation of the discrete samples, our model can be trained using back-propagation in an end-to-end way. Moreover, we find empirically with a language model as a structured discriminator, it is possible to eliminate the adversarial training steps using negative samples, thus making training more stable. We compare our model with previous work using convolutional neural networks (CNNs) as discriminators and show our model outperforms them significantly in three tasks including word substitution decipherment, sentiment modification and related language translation.", "Abstract : Three readability formulas were recalculated to be more suitable for Navy use. The three formulas are the Automated Readability Index (ARI), Fog Count, and Flesch Reading Ease Formula. They were derived from test results of 531 Navy enlisted personnel enrolled in four technical training schools. 
Personnel were tested for their reading comprehension level according to the comprehension section of the Gates-McGinitie reading test. At the same time, they were tested for their comprehension of 18 passages taken from Rate Training Manuals. Scores on the reading test and training material passages allowed the calculation of the grade level of the passages. This scaled reading grade level is based on Navy personnel reading Navy training material and comprehending it.", "This paper focuses on style transfer on the basis of non-parallel text. This is an instance of a broad family of problems including machine translation, decipherment, and sentiment modification. The key challenge is to separate the content from other aspects such as style. We assume a shared latent content distribution across different text corpora, and propose a method that leverages refined alignment of latent representations to perform style transfer. The transferred sentences from one style should match example sentences from the other style as a population. We demonstrate the effectiveness of this cross-alignment method on three tasks: sentiment modification, decipherment of word substitution ciphers, and recovery of word order.", "Simple Wikipedia has dominated simplification research in the past 5 years. In this opinion paper, we argue that focusing on Wikipedia limits simplification research. We back up our arguments with corpus analysis and by highlighting statements that other researchers have made in the simplification literature. We introduce a new simplification dataset that is a significant improvement over Simple Wikipedia, and present a novel quantitative-comparative approach to study the quality of simplification data resources.", "", "", "In this paper, we consider sentence simplification as a special form of translation with the complex sentence as the source and the simple sentence as the target. 
We propose a Tree-based Simplification Model (TSM), which, to our knowledge, is the first statistical simplification model covering splitting, dropping, reordering and substitution integrally. We also describe an efficient method to train our model with a large-scale parallel dataset obtained from the Wikipedia and Simple Wikipedia. The evaluation shows that our model achieves better readability scores than a set of baseline systems.", "We describe research carried out as part of a text summarisation project for the legal domain for which we use a new XML corpus of judgments of the UK House of Lords. These judgments represent a particularly important part of public discourse due to the role that precedents play in English law. We present experimental results using a range of features and machine learning techniques for the task of predicting the rhetorical status of sentences and for the task of selecting the most summary-worthy sentences from a document. Results for these components are encouraging as they achieve state-of-the-art accuracy using robust, automatically generated cue phrase information. Sample output from the system illustrates the potential of summarisation technology for legal information management systems and highlights the utility of our rhetorical annotation scheme as a model of legal discourse, which provides a clear means for structuring summaries and tailoring them to different types of users." ] }
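Readability formulas such as those recalculated in the cited work give a rough, computable proxy for how "plain" a text is. Below is a naive sketch of the Automated Readability Index (ARI) with crude tokenization; the two example sentences are invented, and a production metric would need proper sentence and word segmentation.

```python
import re

def ari(text):
    """Automated Readability Index (higher = harder to read).
    Naive tokenization: sentences split on .!? and words as letter runs."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    chars = sum(len(w) for w in words)
    return 4.71 * chars / len(words) + 0.5 * len(words) / len(sentences) - 21.43

legal = ("The parties hereto irrevocably acknowledge that continued "
         "utilization constitutes unconditional acceptance of the terms.")
plain = "If you keep using the service, you accept these terms."
```

The legalese sentence, with its long Latinate words, scores markedly higher (harder) than its plain-English paraphrase, which is exactly the gap the proposed summarization task aims to bridge.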
1906.00360
2947973288
Modern smartphones have all the sensing capabilities required for accurate and robust navigation and tracking. In specific environments some data streams may be absent, less reliable, or flat out wrong. In particular, the GNSS signal can become flawed or silent inside buildings or in streets with tall buildings. In this application paper, we aim to advance the current state-of-the-art in motion estimation using inertial measurements in combination with partial GNSS data on standard smartphones. We show how iterative estimation methods help refine the positioning path estimates in retrospective use cases that can cover both fixed-interval and fixed-lag scenarios. We compare estimation results provided by global iterated Kalman filtering methods to those of a visual-inertial tracking scheme (Apple ARKit). The practical applicability is demonstrated on real-world use cases on empirical data acquired from both smartphones and tablet devices.
The classical inertial navigation literature is extensive (see the books @cite_3 @cite_27 @cite_18 @cite_15 , for example) but is mainly focused on navigation of large vehicles with relatively high quality inertial sensors. Even though the theory is solid and general, practice has shown that a lot of hand-tailoring of methods is needed to actually get working systems. Since we focus on navigation approaches using consumer-grade sensors in small mobile devices, the literature survey below concentrates on recent work in that area.
{ "cite_N": [ "@cite_27", "@cite_18", "@cite_3", "@cite_15" ], "mid": [ "1531532259", "1564768010", "1569116522", "1493051473" ], "abstract": [ "From the Publisher: \"Estimation with Applications to Tracking and Navigation treats the estimation of various quantities from inherently inaccurate remote observations. It explains state estimator design using a balanced combination of linear systems, probability, and statistics.\" \"The authors provide a review of the necessary background mathematical techniques and offer an overview of the basic concepts in estimation. They then provide detailed treatments of all the major issues in estimation with a focus on applying these techniques to real systems.\" \"Suitable for graduate engineering students and engineers working in remote sensors and tracking, Estimation with Applications to Tracking and Navigation provides expert coverage of this important area.\"--BOOK JACKET.", "Inertial navigation is widely used for the guidance of aircraft, missiles ships and land vehicles, as well as in a number of novel applications such as surveying underground pipelines in drilling operations. This book discusses the physical principles of inertial navigation, the associated growth of errors and their compensation. It draws current technological developments, provides an indication of potential future trends and covers a broad range of applications. New chapters on MEMS (microelectromechanical systems) technology and inertial system applications are included.", "Coordinate frames and transformations ordinary differential equations inertial measurement unit inertial navigation system system error dynamics stochastic processes and error models linear estimation INS initialization and alignment the global positioning system (GPS) geodetic application.", "This book offers a guide for avionics system engineers who want to compare the performance of the various types of inertial navigation systems. 
The author emphasizes systems used on or near the surface of the planet, but says the principles can be applied to craft in space or underwater with a little tinkering. Part of the material is adapted from the author's doctoral dissertation, but much is from his lecture notes for a one-semester graduate course in inertial navigation systems for students who were already adept in classical mechanics, kinematics, inertial instrument theory, and inertial platform mechanization. This book was first published in 1971 but no revision has been necessary so far because the earth's spin is so much more stable than its magnetic field." ] }
1906.00360
2947973288
Modern smartphones have all the sensing capabilities required for accurate and robust navigation and tracking. In specific environments some data streams may be absent, less reliable, or flat out wrong. In particular, the GNSS signal can become flawed or silent inside buildings or in streets with tall buildings. In this application paper, we aim to advance the current state-of-the-art in motion estimation using inertial measurements in combination with partial GNSS data on standard smartphones. We show how iterative estimation methods help refine the positioning path estimates in retrospective use cases that can cover both fixed-interval and fixed-lag scenarios. We compare estimation results provided by global iterated Kalman filtering methods to those of a visual-inertial tracking scheme (Apple ARKit). The practical applicability is demonstrated on real-world use cases on empirical data acquired from both smartphones and tablet devices.
Besides SHS and VIO approaches, there are also pure inertial navigation approaches which estimate the full motion trajectory in 3D using foot-mounted consumer-grade inertial sensors @cite_28 @cite_0 . With foot-mounted sensors the inertial navigation problem is considerably easier than in the general case, since the drift can be constrained by zero-velocity updates, which are detected on each step when the foot touches the ground and the sensor is momentarily stationary. However, automatic zero-velocity updates are not applicable to handheld or flying devices, and the approach is not suitable for large-scale consumer use since the current solutions do not work well when the movement happens without steps (e.g., in a trolley or escalator). In addition, the type of shoes and the sensor placement on the foot may affect the robustness and accuracy of the estimation. A prominent example in this class of approaches is the OpenShoe project @cite_0 @cite_2 , which actually uses several pairs of accelerometers and gyroscopes to estimate the step-by-step PDR.
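The zero-velocity detection that makes foot-mounted inertial navigation tractable can be illustrated with a simple stance-phase detector: the sensor is flagged as stationary when, over a short window, the acceleration magnitude has low variance around gravity and the gyroscope norm stays small. The thresholds, window length, and function name below are illustrative assumptions, not values from the cited work:

```python
import numpy as np

def detect_zero_velocity(accel, gyro, acc_var_thresh=0.05,
                         gyro_norm_thresh=0.1, win=5):
    """Flag samples where a foot-mounted IMU is likely stationary.

    accel, gyro: (N, 3) arrays of specific force [m/s^2] and angular
    rate [rad/s]. Thresholds and window length are illustrative.
    """
    n = len(accel)
    still = np.zeros(n, dtype=bool)
    for i in range(n - win + 1):
        a = accel[i:i + win]
        w = gyro[i:i + win]
        # Low variance of the acceleration magnitude plus a small gyro
        # norm indicate the stance phase of a step.
        acc_mag = np.linalg.norm(a, axis=1)
        if (np.var(acc_mag) < acc_var_thresh and
                np.linalg.norm(w, axis=1).max() < gyro_norm_thresh):
            still[i:i + win] = True
    return still
```

In a full ZUPT-aided filter, the flagged samples would trigger pseudo-measurements of zero velocity in the navigation filter; here only the detection step is sketched.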
{ "cite_N": [ "@cite_28", "@cite_0", "@cite_2" ], "mid": [ "2056427752", "2120824934", "" ], "abstract": [ "A navigation system that tracks the location of a person on foot is useful for finding and rescuing firefighters or other emergency first responders, or for location-aware computing, personal navigation assistance, mobile 3D audio, and mixed or augmented reality applications. One of the main obstacles to the real-world deployment of location-sensitive wearable computing, including mixed reality (MR), is that current position-tracking technologies require an instrumented, marked, or premapped environment. At InterSense, we've developed a system called NavShoe, which uses a new approach to position tracking based on inertial sensing. Our wireless inertial sensor is small enough to easily tuck into the shoelaces, and sufficiently low power to run all day on a small battery. Although it can't be used alone for precise registration of close-range objects, in outdoor applications augmenting distant objects, a user would barely notice the NavShoe's meter-level error combined with any error in the head's assumed location relative to the foot. NavShoe can greatly reduce the database search space for computer vision, making it much simpler and more robust. The NavShoe device provides not only robust approximate position, but also an extremely accurate orientation tracker on the foot.", "Because of the high calculation cost and poor performance of a traditional planar map when dealing with complicated indoor geographic information, a WiFi fingerprint indoor positioning system cannot be widely employed on a smartphone platform. By making full use of the hardware sensors embedded in the smartphone, this study proposes an integrated approach to a three-dimensional (3D) indoor positioning system. First, an improved K-means clustering method is adopted to reduce the fingerprint database retrieval time and enhance positioning efficiency. 
Next, with the mobile phone’s acceleration sensor, a new step counting method based on auto-correlation analysis is proposed to achieve cell phone inertial navigation positioning. Furthermore, the integration of WiFi positioning with Pedestrian Dead Reckoning (PDR) obtains higher positional accuracy with the help of the Unscented Kalman Filter algorithm. Finally, a hybrid 3D positioning system based on Unity 3D, which can carry out real-time positioning for targets in 3D scenes, is designed for the fluent operation of mobile terminals.", "" ] }
1906.00360
2947973288
Modern smartphones have all the sensing capabilities required for accurate and robust navigation and tracking. In specific environments some data streams may be absent, less reliable, or flat out wrong. In particular, the GNSS signal can become flawed or silent inside buildings or in streets with tall buildings. In this application paper, we aim to advance the current state-of-the-art in motion estimation using inertial measurements in combination with partial GNSS data on standard smartphones. We show how iterative estimation methods help refine the positioning path estimates in retrospective use cases that can cover both fixed-interval and fixed-lag scenarios. We compare estimation results provided by global iterated Kalman filtering methods to those of a visual-inertial tracking scheme (Apple ARKit). The practical applicability is demonstrated on real-world use cases on empirical data acquired from both smartphones and tablet devices.
On the more technical side, we apply iterative filtering methods in this paper. Kalman filters and smoothers (see, e.g., @cite_19 for an excellent overview of non-linear filtering) are recursive estimation schemes and thus iterative per definition. Iterated filtering often refers to local ('inner-loop') iterations over a single sample period. They are used together with extended Kalman filtering as a kind of fixed-point iteration that works the extended Kalman update towards a better linearization point (see, e.g., @cite_17 ). The resulting iterated extended Kalman filter and iterated linearized filter-smoother can provide better performance if the system non-linearities are suitable. We, however, are interested in iterative re-linearization of the dynamics and in passing information over the state history for extended periods. Thus we focus on so-called global ('outer-loop') schemes, which are based on iteratively re-running the entire forward-backward pass of the filter-smoother. These methods relate directly to other iterative global linearization schemes such as the so-called Laplace approximation in statistics and machine learning (see, e.g., @cite_16 ) or Newton-iteration-based methods (see, e.g., @cite_29 and references therein).
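The global ('outer-loop') iteration can be sketched for a scalar non-linear state-space model: run a full extended Kalman filter and Rauch-Tung-Striebel smoother pass, then re-linearize the dynamics around the smoothed trajectory and repeat. This is a minimal illustration of the scheme under assumed scalar notation, not the estimator used in the paper:

```python
import numpy as np

def iterated_eks(y, f, df, q, r, x0, p0, n_iter=5):
    """Global iterated extended Kalman smoother for the scalar model
    x_{k+1} = f(x_k) + w,  y_k = x_k + v,  with variances q and r.

    Each outer iteration re-linearizes the dynamics around the
    previous smoothed trajectory and re-runs the full pass.
    """
    n = len(y)
    x_lin = np.full(n, x0)                  # current linearization trajectory
    for _ in range(n_iter):
        m = np.zeros(n); p = np.zeros(n)    # filtered means / variances
        mp = np.zeros(n); pp = np.zeros(n)  # predicted means / variances
        mk, pk = x0, p0
        for k in range(n):
            if k > 0:                        # EKF prediction, linearized at x_lin
                a = df(x_lin[k - 1])
                mk = f(x_lin[k - 1]) + a * (mk - x_lin[k - 1])
                pk = a * pk * a + q
            mp[k], pp[k] = mk, pk
            s = pk + r                       # measurement update (H = 1)
            kgain = pk / s
            mk = mk + kgain * (y[k] - mk)
            pk = pk - kgain * pk
            m[k], p[k] = mk, pk
        ms = m.copy(); ps = p.copy()        # backward RTS pass
        for k in range(n - 2, -1, -1):
            a = df(x_lin[k])
            g = p[k] * a / pp[k + 1]
            ms[k] = m[k] + g * (ms[k + 1] - mp[k + 1])
            ps[k] = p[k] + g * (ps[k + 1] - pp[k + 1]) * g
        x_lin = ms                           # re-linearize around smoothed path
    return x_lin
```

For a linear model the iteration converges after the first pass; the benefit of the outer loop appears only when `f` is genuinely non-linear.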
{ "cite_N": [ "@cite_19", "@cite_29", "@cite_16", "@cite_17" ], "mid": [ "88520345", "2963402689", "2045656233", "1559536185" ], "abstract": [ "Filtering and smoothing methods are used to produce an accurate estimate of the state of a time-varying system based on multiple observational inputs (data). Interest in these methods has exploded in recent years, with numerous applications emerging in fields such as navigation, aerospace engineering, telecommunications and medicine. This compact, informal introduction for graduate students and advanced undergraduates presents the current state-of-the-art filtering and smoothing methods in a unified Bayesian framework. Readers learn what non-linear Kalman filters and particle filters are, how they are related, and their relative advantages and disadvantages. They also discover how state-of-the-art Bayesian parameter estimation methods can be combined with state-of-the-art filtering and smoothing algorithms. The book's practical and algorithmic approach assumes only modest mathematical prerequisites. Examples include MATLAB computations, and the numerous end-of-chapter exercises include computational assignments. 
MATLAB/GNU Octave source code is available for download at www.cambridge.org/sarkka, promoting hands-on work with the methods.", "Maximum likelihood (ML) estimation using Newton’s method in nonlinear state space models (SSMs) is a challenging problem due to the analytical intractability of the log-likelihood and its gradient ...", "FUNDAMENTALS OF BAYESIAN INFERENCE Probability and Inference Single-Parameter Models Introduction to Multiparameter Models Asymptotics and Connections to Non-Bayesian Approaches Hierarchical Models FUNDAMENTALS OF BAYESIAN DATA ANALYSIS Model Checking Evaluating, Comparing, and Expanding Models Modeling Accounting for Data Collection Decision Analysis ADVANCED COMPUTATION Introduction to Bayesian Computation Basics of Markov Chain Simulation Computationally Efficient Markov Chain Simulation Modal and Distributional Approximations REGRESSION MODELS Introduction to Regression Models Hierarchical Linear Models Generalized Linear Models Models for Robust Inference Models for Missing Data NONLINEAR AND NONPARAMETRIC MODELS Parametric Nonlinear Models Basic Function Models Gaussian Process Models Finite Mixture Models Dirichlet Process Models APPENDICES A: Standard Probability Distributions B: Outline of Proofs of Asymptotic Theorems C: Computation in R and Stan Bibliographic Notes and Exercises appear at the end of each chapter.", "A hopper stores a quantity of dry, particulate animal feed and is partially closed at the bottom by a feed guide plate having downwardly curved sides and a central circular orifice. The orifice extends into an agitator chamber having generally conical wall surfaces and a flat circular floor. The conical walls of the agitator chamber include a discharge opening which leads to a delivery chute. A motor driven agitator blade having an upstanding feeder shaft extending into the hopper orifice is rotatably mounted above the floor of the agitator chamber. 
When the motor is energized rotation of the agitator blade, including the feeder shaft, causes feed to flow from the hopper orifice into the path of the agitator blade. The rotating blade fluidizes the feed particles and centrifugally deflects them in a circular path about the chamber walls so that the feed flows through the discharge opening down the delivery chute and into an eating bowl." ] }
1906.00360
2947973288
Modern smartphones have all the sensing capabilities required for accurate and robust navigation and tracking. In specific environments some data streams may be absent, less reliable, or flat out wrong. In particular, the GNSS signal can become flawed or silent inside buildings or in streets with tall buildings. In this application paper, we aim to advance the current state-of-the-art in motion estimation using inertial measurements in combination with partial GNSS data on standard smartphones. We show how iterative estimation methods help refine the positioning path estimates in retrospective use cases that can cover both fixed-interval and fixed-lag scenarios. We compare estimation results provided by global iterated Kalman filtering methods to those of a visual-inertial tracking scheme (Apple ARKit). The practical applicability is demonstrated on real-world use cases on empirical data acquired from both smartphones and tablet devices.
In this paper, we take a general INS approach, without assuming legged or otherwise constrained motion, and compensate for the limitations of low-quality IMUs by fusing them with GNSS position fixes, which may be sparse and infrequent, with large gaps in signal reception. As mentioned, there are relatively few general INS approaches for consumer-grade devices. We build upon the recent work @cite_13 , which shows relatively good path estimation results by utilizing online learning of sensor biases and manually provided loop closures or position fixes. We improve their approach in the following two ways, which greatly increase its practical applicability in certain use cases: we utilize automatic GNSS-based position measurements, which do not require additional manoeuvres or cooperation from the user; and we apply iterative path reconstruction methods, which provide improved accuracy in the presence of long interruptions in GNSS signal reception.
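As a toy stand-in for this kind of fusion, the sketch below runs a 1-D constant-velocity Kalman filter that predicts at every sample but updates only when a GNSS fix is present (gaps are marked with `None`); in the actual system the prediction step would be replaced by IMU mechanization. All parameter values and names are assumptions for illustration:

```python
import numpy as np

def fuse_gnss(dt, fixes, q=0.5, r=9.0):
    """Constant-velocity Kalman filter over a 1-D position track.

    fixes: iterable of position fixes [m], with None where GNSS is
    unavailable. q is process-noise intensity, r the fix variance
    (~3 m sigma). Returns the filtered position track.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state: [position, velocity]
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])
    H = np.array([[1.0, 0.0]])
    x = np.zeros(2)
    P = np.eye(2) * 100.0
    track = []
    for z in fixes:
        x = F @ x                                   # predict (coast through gaps)
        P = F @ P @ F.T + Q
        if z is not None:                           # GNSS fix available
            s = H @ P @ H.T + r
            k = (P @ H.T) / s
            x = x + (k * (z - H @ x)).ravel()
            P = P - k @ H @ P
        track.append(x[0])
    return np.array(track)
```

During outages the state simply coasts on the learned velocity, which mirrors how the INS bridges GNSS gaps, albeit with a far cruder motion model.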
{ "cite_N": [ "@cite_13" ], "mid": [ "2963887447" ], "abstract": [ "Building a complete inertial navigation system using the limited quality data provided by current smartphones has been regarded challenging, if not impossible. This paper shows that by careful crafting and accounting for the weak information in the sensor samples, smartphones are capable of pure inertial navigation. We present a probabilistic approach for orientation and use-case free inertial odometry, which is based on double-integrating rotated accelerations. The strength of the model is in learning additive and multiplicative IMU biases online. We are able to track the phone position, velocity, and pose in realtime and in a computationally lightweight fashion by solving the inference with an extended Kalman filter. The information fusion is completed with zero-velocity updates (if the phone remains stationary), altitude correction from barometric pressure readings (if available), and pseudo-updates constraining the momentary speed. We demonstrate our approach using an iPad and iPhone in several indoor dead-reckoning applications and in a measurement tool setup." ] }
1906.00452
2947132063
Data imbalance remains one of the most widespread problems affecting contemporary machine learning. The negative effect data imbalance can have on the traditional learning algorithms is most severe in combination with other dataset difficulty factors, such as small disjuncts, presence of outliers and insufficient number of training observations. Said difficulty factors can also limit the applicability of some of the methods of dealing with data imbalance, in particular the neighborhood-based oversampling algorithms based on SMOTE. Radial-Based Oversampling (RBO) was previously proposed to mitigate some of the limitations of the neighborhood-based methods. In this paper we examine the possibility of utilizing the concept of mutual class potential, used to guide the oversampling process in RBO, in the undersampling procedure. Conducted computational complexity analysis indicates a significantly reduced time complexity of the proposed Radial-Based Undersampling algorithm, and the results of the performed experimental study indicate its usefulness, especially on difficult datasets.
The most fundamental choice during the design of both oversampling and undersampling algorithms for handling data imbalance is the question of defining the regions of interest: the areas in which new instances are to be placed, in the case of oversampling, or from which existing instances are to be removed, in the case of undersampling. Besides the random approaches, probably the most prevalent paradigm for oversampling is the family of neighborhood-based methods originating from the Synthetic Minority Over-sampling Technique (SMOTE) @cite_42 . The regions of interest of SMOTE are located between any given minority observation and its closest minority neighbors: SMOTE synthesizes new instances by interpolating between the observation and one of its randomly selected nearest neighbors.
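SMOTE's region of interest can be made concrete in a few lines: each synthetic point is drawn on the segment between a minority observation and one of its k nearest minority neighbors. A minimal sketch of the original idea, not the reference implementation:

```python
import numpy as np

def smote_sample(minority, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by interpolating
    between minority observations and their k nearest minority
    neighbors (brute-force neighbor search for clarity).
    """
    rng = np.random.default_rng(rng)
    X = np.asarray(minority, dtype=float)
    # pairwise distances within the minority class only
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # exclude self-neighbors
    nn = np.argsort(d, axis=1)[:, :k]            # k nearest minority neighbors
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        j = nn[i, rng.integers(min(k, len(X) - 1))]
        gap = rng.random()                       # position along the segment
        out.append(X[i] + gap * (X[j] - X[i]))
    return np.array(out)
```

Since every synthetic point is a convex combination of two minority samples, the output always lies inside the bounding box of the minority class, which is exactly why SMOTE struggles with noisy or outlying minority observations.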
{ "cite_N": [ "@cite_42" ], "mid": [ "2148143831" ], "abstract": [ "An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Often real-world data sets are predominately composed of \"normal\" examples with only a small percentage of \"abnormal\" or \"interesting\" examples. It is also the case that the cost of misclassifying an abnormal (interesting) example as a normal example is often much higher than the cost of the reverse error. Under-sampling of the majority (normal) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class. This paper shows that a combination of our method of oversampling the minority (abnormal)cla ss and under-sampling the majority (normal) class can achieve better classifier performance (in ROC space)tha n only under-sampling the majority class. This paper also shows that a combination of our method of over-sampling the minority class and under-sampling the majority class can achieve better classifier performance (in ROC space)t han varying the loss ratios in Ripper or class priors in Naive Bayes. Our method of over-sampling the minority class involves creating synthetic minority class examples. Experiments are performed using C4.5, Ripper and a Naive Bayes classifier. The method is evaluated using the area under the Receiver Operating Characteristic curve (AUC)and the ROC convex hull strategy." ] }
1906.00452
2947132063
Data imbalance remains one of the most widespread problems affecting contemporary machine learning. The negative effect data imbalance can have on the traditional learning algorithms is most severe in combination with other dataset difficulty factors, such as small disjuncts, presence of outliers and insufficient number of training observations. Said difficulty factors can also limit the applicability of some of the methods of dealing with data imbalance, in particular the neighborhood-based oversampling algorithms based on SMOTE. Radial-Based Oversampling (RBO) was previously proposed to mitigate some of the limitations of the neighborhood-based methods. In this paper we examine the possibility of utilizing the concept of mutual class potential, used to guide the oversampling process in RBO, in the undersampling procedure. Conducted computational complexity analysis indicates a significantly reduced time complexity of the proposed Radial-Based Undersampling algorithm, and the results of the performed experimental study indicate its usefulness, especially on difficult datasets.
Another family of methods that can be distinguished are the cluster-based undersampling algorithms, notably the methods proposed by Yen and Lee @cite_36 , which use clustering to select the most representative subset of the data. Finally, as originally demonstrated by @cite_11 , undersampling algorithms are well suited for forming classifier ensembles, an idea that was further extended in the form of evolutionary undersampling @cite_26 and boosting @cite_0 .
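The ensemble idea of @cite_11 (EasyEnsemble) rests on drawing several balanced subsets, each combining all minority samples with an equally sized random draw from the majority class; one base classifier is then trained per subset and their outputs are combined. A sketch of the subset-drawing step only, with the base learner and combination rule omitted:

```python
import numpy as np

def easy_ensemble_splits(X, y, n_subsets=5, rng=0):
    """Draw n_subsets balanced (X, y) subsets for an EasyEnsemble-style
    classifier ensemble: every subset keeps all minority samples and a
    random, equally sized, non-repeating draw from the majority class.
    """
    rng = np.random.default_rng(rng)
    X = np.asarray(X)
    y = np.asarray(y)
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    min_idx = np.flatnonzero(y == minority)
    maj_idx = np.flatnonzero(y != minority)
    subsets = []
    for _ in range(n_subsets):
        pick = rng.choice(maj_idx, size=len(min_idx), replace=False)
        idx = np.concatenate([min_idx, pick])
        subsets.append((X[idx], y[idx]))
    return subsets
```

Because each subset sees a different slice of the majority class, the ensemble uses far more of the majority data than a single undersampled training set would, which is the deficiency the method was designed to overcome.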
{ "cite_N": [ "@cite_36", "@cite_26", "@cite_0", "@cite_11" ], "mid": [ "2128965734", "2119498311", "2735835382", "2104167780" ], "abstract": [ "For classification problem, the training data will significantly influence the classification accuracy. However, the data in real-world applications often are imbalanced class distribution, that is, most of the data are in majority class and little data are in minority class. In this case, if all the data are used to be the training data, the classifier tends to predict that most of the incoming data belongs to the majority class. Hence, it is important to select the suitable training data for classification in the imbalanced class distribution problem. In this paper, we propose cluster-based under-sampling approaches for selecting the representative data as training data to improve the classification accuracy for minority class and investigate the effect of under-sampling methods in the imbalanced class distribution environment. The experimental results show that our cluster-based under-sampling approaches outperform the other under-sampling techniques in the previous studies.", "Classification with imbalanced data-sets has become one of the most challenging problems in Data Mining. Being one class much more represented than the other produces undesirable effects in both the learning and classification processes, mainly regarding the minority class. Such a problem needs accurate tools to be undertaken; lately, ensembles of classifiers have emerged as a possible solution. Among ensemble proposals, the combination of Bagging and Boosting with preprocessing techniques has proved its ability to enhance the classification of the minority class. In this paper, we develop a new ensemble construction algorithm (EUSBoost) based on RUSBoost, one of the simplest and most accurate ensemble, which combines random undersampling with Boosting algorithm. 
Our methodology aims to improve the existing proposals enhancing the performance of the base classifiers by the usage of the evolutionary undersampling approach. Besides, we promote diversity favoring the usage of different subsets of majority class instances to train each base classifier. Centered on two-class highly imbalanced problems, we will prove, supported by the proper statistical analysis, that EUSBoost is able to outperform the state-of-the-art methods based on ensembles. We will also analyze its advantages using kappa-error diagrams, which we adapt to the imbalanced scenario.", "Abstract As one of the most challenging and attractive problems in the pattern recognition and machine intelligence field, imbalanced classification has received a large amount of research attention for many years. In binary classification tasks, one class usually tends to be underrepresented when it consists of far fewer patterns than the other class, which results in undesirable classification results, especially for the minority class. Several techniques, including resampling, boosting and cost-sensitive methods have been proposed to alleviate this problem. Recently, some ensemble methods that focus on combining individual techniques to obtain better performance have been observed to present better classification performance on the minority class. In this paper, we propose a novel ensemble framework called Adaptive Ensemble Undersampling-Boost for imbalanced learning. Our proposal combines the Ensemble of Undersampling (EUS) technique, Real Adaboost, cost-sensitive weight modification, and adaptive boundary decision strategy to build a hybrid algorithm. The superiority of our method over other state-of-the-art ensemble methods is demonstrated by experiments on 18 real world data sets with various data distributions and different imbalance ratios. 
Given the experimental results and further analysis, our proposal is proven to be a promising alternative that can be applied to various imbalanced classification domains.", "Undersampling is a popular method in dealing with class-imbalance problems, which uses only a subset of the majority class and thus is very efficient. The main deficiency is that many majority class examples are ignored. We propose two algorithms to overcome this deficiency. EasyEnsemble samples several subsets from the majority class, trains a learner using each of them, and combines the outputs of those learners. BalanceCascade trains the learners sequentially, where in each step, the majority class examples that are correctly classified by the current trained learners are removed from further consideration. Experimental results show that both methods have higher Area Under the ROC Curve, F-measure, and G-mean values than many existing class-imbalance learning methods. Moreover, they have approximately the same training time as that of undersampling when the same number of weak classifiers is used, which is significantly faster than other methods." ] }
1906.00452
2947132063
Data imbalance remains one of the most widespread problems affecting contemporary machine learning. The negative effect data imbalance can have on the traditional learning algorithms is most severe in combination with other dataset difficulty factors, such as small disjuncts, presence of outliers and insufficient number of training observations. Said difficulty factors can also limit the applicability of some of the methods of dealing with data imbalance, in particular the neighborhood-based oversampling algorithms based on SMOTE. Radial-Based Oversampling (RBO) was previously proposed to mitigate some of the limitations of the neighborhood-based methods. In this paper we examine the possibility of utilizing the concept of mutual class potential, used to guide the oversampling process in RBO, in the undersampling procedure. Conducted computational complexity analysis indicates a significantly reduced time complexity of the proposed Radial-Based Undersampling algorithm, and the results of the performed experimental study indicate its usefulness, especially on difficult datasets.
Despite the abundance of strategies for dealing with data imbalance, it often remains unclear under what conditions a given method can be expected to perform satisfactorily. Furthermore, taking into account the no-free-lunch theorem @cite_39 , it is unreasonable to expect that any single method will achieve state-of-the-art performance on every provided dataset. Identifying the areas of applicability, that is, the conditions under which a method is more likely to achieve good performance, is therefore desirable both from the point of view of a practitioner, who can use that information to narrow down the range of methods appropriate for the problem at hand, and of a theoretician, who can use that insight in the process of developing novel methods.
{ "cite_N": [ "@cite_39" ], "mid": [ "2100483895" ], "abstract": [ "This is the first of two papers that use off-training set (OTS) error to investigate the assumption-free relationship between learning algorithms. This first paper discusses the senses in which there are no a priori distinctions between learning algorithms. (The second paper discusses the senses in which there are such distinctions.) In this first paper it is shown, loosely speaking, that for any two algorithms A and B, there are “as many” targets (or priors over targets) for which A has lower expected OTS error than B as vice versa, for loss functions like zero-one loss. In particular, this is true if A is cross-validation and B is “anti-cross-validation” (choose the learning algorithm with largest cross-validation error). This paper ends with a discussion of the implications of these results for computational learning theory. It is shown that one cannot say: if empirical misclassification rate is low, the Vapnik-Chervonenkis dimension of your generalizer is small, and the training set is large, then with high probability your OTS error is small. Other implications for “membership queries” algorithms and “punting” algorithms are also discussed." ] }
1906.00452
2947132063
Data imbalance remains one of the most widespread problems affecting contemporary machine learning. The negative effect data imbalance can have on the traditional learning algorithms is most severe in combination with other dataset difficulty factors, such as small disjuncts, presence of outliers and insufficient number of training observations. Said difficulty factors can also limit the applicability of some of the methods of dealing with data imbalance, in particular the neighborhood-based oversampling algorithms based on SMOTE. Radial-Based Oversampling (RBO) was previously proposed to mitigate some of the limitations of the neighborhood-based methods. In this paper we examine the possibility of utilizing the concept of mutual class potential, used to guide the oversampling process in RBO, in the undersampling procedure. Conducted computational complexity analysis indicates a significantly reduced time complexity of the proposed Radial-Based Undersampling algorithm, and the results of the performed experimental study indicate its usefulness, especially on difficult datasets.
In the context of imbalanced data classification, one of the criteria that can influence the applicability of different resampling strategies is the characteristics of the minority class distribution. Napierała and Stefanowski @cite_10 proposed a method of categorizing the different types of minority objects that captures these characteristics. Their approach uses a 5-neighborhood to identify the nearest neighbors of a given object, and afterwards assigns it a category based on the proportion of neighbors from the same class: safe in the case of 4 or 5 neighbors from the same class, borderline in the case of 2 to 3 neighbors, rare in the case of 1 neighbor, and outlier when there are no neighbors from the same class. The percentage of minority objects from the different categories can then be used to describe the character of the entire dataset: an example of datasets with a large proportion of different minority object types was presented in Figure . Note that the imbalance ratio of a dataset does not determine the type of the minority objects it consists of, as demonstrated in the above example.
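The categorization rule above can be written down directly; the sketch below applies the 5-neighborhood scheme with plain NumPy, using a brute-force neighbor search that the original method does not prescribe:

```python
import numpy as np

def minority_types(X, y, minority=1, k=5):
    """Label each minority observation as 'safe' (4-5 same-class
    neighbors among its k=5 nearest), 'borderline' (2-3), 'rare' (1)
    or 'outlier' (0), following the Napierala-Stefanowski scheme.
    Returns a dict mapping sample index to category.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    labels = {}
    for i in np.flatnonzero(y == minority):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                            # exclude the point itself
        nn = np.argsort(d)[:k]
        same = int(np.sum(y[nn] == minority))
        labels[i] = ('safe' if same >= 4 else
                     'borderline' if same >= 2 else
                     'rare' if same == 1 else 'outlier')
    return labels
```

The per-dataset proportions of these four categories can then be aggregated to characterize the whole dataset, as described in the text.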
{ "cite_N": [ "@cite_10" ], "mid": [ "752888290" ], "abstract": [ "Many real-world applications reveal difficulties in learning classifiers from imbalanced data. Although several methods for improving classifiers have been introduced, the identification of conditions for the efficient use of the particular method is still an open research problem. It is also worth to study the nature of imbalanced data, characteristics of the minority class distribution and their influence on classification performance. However, current studies on imbalanced data difficulty factors have been mainly done with artificial datasets and their conclusions are not easily applicable to the real-world problems, also because the methods for their identification are not sufficiently developed. In our paper, we capture difficulties of class distribution in real datasets by considering four types of minority class examples: safe, borderline, rare and outliers. First, we confirm their occurrence in real data by exploring multidimensional visualizations of selected datasets. Then, we introduce a method for an identification of these types of examples, which is based on analyzing a class distribution in a local neighbourhood of the considered example. Two ways of modeling this neighbourhood are presented: with k-nearest examples and with kernel functions. Experiments with artificial datasets show that these methods are able to re-discover simulated types of examples. Next contributions of this paper include carrying out a comprehensive experimental study with 26 real world imbalanced datasets, where (1) we identify new data characteristics basing on the analysis of types of minority examples; (2) we demonstrate that considering the results of this analysis allow to differentiate classification performance of popular classifiers and pre-processing methods and to evaluate their areas of competence. 
Finally, we highlight directions of exploiting the results of our analysis for developing new algorithms for learning classifiers and pre-processing methods." ] }
1906.00535
2947458343
There is a high demand for high-quality Non-Player Characters (NPCs) in video games. Hand-crafting their behavior is a labor intensive and error prone engineering process with limited controls exposed to the game designers. We propose to create such NPC behaviors interactively by training an agent in the target environment using imitation learning with a human in the loop. While traditional behavior cloning may fall short of achieving the desired performance, we show that interactivity can substantially improve it with a modest amount of human efforts. The model we train is a multi-resolution ensemble of Markov models, which can be used as is or can be further "compressed" into a more compact model for inference on consumer devices. We illustrate our approach on an example in OpenAI Gym, where a human can help to quickly train an agent with only a handful of interactive demonstrations. We also outline our experiments with NPC training for a first-person shooter game currently in development.
Human demonstrations help train artificial agents in many applications, and in video games in particular @cite_18 , @cite_13 , @cite_19 . Off-policy human demonstrations are easier to use and are abundant in player telemetry data. Supervised behavior cloning, imitation learning (IL), apprenticeship learning (e.g., @cite_7 ), and generative adversarial imitation learning (GAIL) @cite_15 make it possible to reproduce a teacher's style and reach a reasonable level of performance in the game environment. Unfortunately, an agent trained with IL is usually unable to generalize effectively to previously underexplored states or to extrapolate stylistic elements of the human player's behavior to new states.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_19", "@cite_15", "@cite_13" ], "mid": [ "2897406577", "2107258367", "136464742", "2963277051", "2964182681" ], "abstract": [ "We present an approach to learn and deploy human-like playtesting in computer games based on deep learning from player data. We are able to learn and predict the most \"human\" action in a given position through supervised learning on a convolutional neural network. Furthermore, we show how we can use the learned network to predict key metrics of new content — most notably the difficulty of levels. Our player data and empirical data come from Candy Crush Saga (CCS) and Candy Crush Soda Saga (CCSS). However, the method is general and well suited for many games, in particular where content creation is sequential. CCS and CCSS are non-deterministic match-3 puzzle games with multiple game modes spread over a few thousand levels, providing a diverse testbed for this technique. Compared to Monte Carlo Tree Search (MCTS) we show that this approach increases correlation with average level difficulty, giving more accurate predictions as well as requiring only a fraction of the computation time.", "Recently it has been shown that deep neural networks can learn to play Atari games by directly observing raw pixels of the playing area. We show how apprenticeship learning can be applied in this setting so that an agent can learn to perform a task (i.e. play a game) by observing the expert, without any explicitly provided knowledge of the game’s internal state or objectives.", "In the NeuroEvolving Robotic Operatives (NERO) video game, the player trains a team of virtual robots for combat against other players' teams. The virtual robots learn in real time through interacting with the player. Since NERO was originally released in June, 2005, it has been downloaded over 50,000 times, appeared on Slashdot, and won several honors. 
The real-time NeuroEvolution of Augmenting Topologies (rt-NEAT) method, which can evolve increasingly complex artificial neural networks in real time as a game is being played, drives the robots' learning, making possible this entirely new genre of video game. The live demo will show how agents in NERO adapt in real time as they interact with the player. In the future, rtNEAT may allow new kinds of educational and training applications through interactive and adapting games.", "Consider learning a policy from example expert behavior, without interaction with the expert or access to a reinforcement signal. One approach is to recover the expert's cost function with inverse reinforcement learning, then extract a policy from that cost function with reinforcement learning. This approach is indirect and can be slow. We propose a new general framework for directly extracting a policy from data as if it were obtained by reinforcement learning following inverse reinforcement learning. We show that a certain instantiation of our framework draws an analogy between imitation learning and generative adversarial networks, from which we derive a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large, high-dimensional environments.", "In this work we describe a novel deep reinforcement learning architecture that allows multiple actions to be selected at every time-step in an efficient manner. Multi-action policies allow complex behaviours to be learnt that would otherwise be hard to achieve when using single action selection techniques. We use both imitation learning and temporal difference (TD) reinforcement learning (RL) to provide a 4x improvement in training time and 2.5x improvement in performance over single action selection TD RL. We demonstrate the capabilities of this network using a complex in-house 3D game. 
Mimicking the behavior of the expert teacher significantly improves world state exploration and allows the agents vision system to be trained more rapidly than TD RL alone. This initial training technique kick-starts TD learning and the agent quickly learns to surpass the capabilities of the expert." ] }
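The behavior cloning baseline discussed in the related work above can be illustrated with a toy tabular sketch (this is generic behavior cloning, not the paper's multi-resolution Markov ensemble): for each state seen in the demonstrations, the policy memorizes the expert's majority action. The states and actions below are invented for illustration.

```python
from collections import Counter, defaultdict

def clone_policy(demonstrations):
    """Fit a tabular behavior-cloning policy: for each observed state,
    pick the action the expert chose most often (majority vote)."""
    votes = defaultdict(Counter)
    for state, action in demonstrations:
        votes[state][action] += 1
    return {s: c.most_common(1)[0][0] for s, c in votes.items()}

# Hypothetical expert traces as (state, action) pairs.
demos = [("door", "open"), ("door", "open"), ("door", "kick"),
         ("enemy", "shoot"), ("enemy", "shoot")]
policy = clone_policy(demos)
```

Such a policy has no notion of states absent from the demonstrations, which is exactly the limited-generalization problem the related work points out.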
1906.00535
2947458343
There is a high demand for high-quality Non-Player Characters (NPCs) in video games. Hand-crafting their behavior is a labor-intensive and error-prone engineering process with limited controls exposed to the game designers. We propose to create such NPC behaviors interactively by training an agent in the target environment using imitation learning with a human in the loop. While traditional behavior cloning may fall short of achieving the desired performance, we show that interactivity can substantially improve it with a modest amount of human effort. The model we train is a multi-resolution ensemble of Markov models, which can be used as is or can be further "compressed" into a more compact model for inference on consumer devices. We illustrate our approach on an example in OpenAI Gym, where a human can help to quickly train an agent with only a handful of interactive demonstrations. We also outline our experiments with NPC training for a first-person shooter game currently in development.
Direct inclusion of a human in the control loop can potentially alleviate the problem of limited generalization. Dataset Aggregation (DAGGER) @cite_8 offers an effective way of doing so, but it assumes the human provides consistent, optimal input, which may not be realistic in many environments. Another form of online human input is shared autonomy, an active research area with multiple applications, e.g., @cite_21 , @cite_11 . The shared autonomy approach @cite_10 naturally extends to policy blending @cite_16 and makes it possible to effectively train DQN agents that cooperate with a human in complex environments. Applications of human-in-the-loop training in robotics and self-driving cars are too numerous to cover here, but they mostly address the optimality of the target policy, whereas here we also aim to preserve stylistic elements of organic human gameplay.
{ "cite_N": [ "@cite_8", "@cite_21", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "", "2897539521", "2105925198", "2786110872", "2068006087" ], "abstract": [ "", "Across many domains, interactive systems either make decisions for us autonomously or yield decision-making authority to us and play a supporting role. However, many settings, such as those in education or the workplace, benefit from sharing this autonomy between the user and the system, and thus from a system that adapts to them over time. In this paper, we pursue two primary research questions: (1) How do we design interfaces to share autonomy between the user and the system? (2) How does shared autonomy alter a user\"s perception of a system? We present SharedKeys, an interactive shared autonomy system for piano instruction that plays different video segments of a piece for students to emulate and practice. Underlying our approach to shared autonomy is a mixed-observability Markov decision process that estimates a user\"s desired autonomy level based on her performance and attentiveness. Pilot studies revealed that students sharing autonomy with the system learned more quickly and perceived the system as more intelligent.", "In shared control teleoperation, the robot assists the user in accomplishing the desired task, making teleoperation easier and more seamless. Rather than simply executing the user's input, which is hindered by the inadequacies of the interface, the robot attempts to predict the user's intent, and assists in accomplishing it. In this work, we are interested in the scientific underpinnings of assistance: we propose an intuitive formalism that captures assistance as policy blending, illustrate how some of the existing techniques for shared control instantiate it, and provide a principled analysis of its main components: prediction of user intent and its arbitration with the user input. 
We define the prediction problem, with foundations in inverse reinforcement learning, discuss simplifying assumptions that make it tractable, and test these on data from users teleoperating a robotic manipulator. We define the arbitration problem from a control-theoretic perspective, and turn our attention to what users consider good arbitration. We conduct a user study that analyzes the effect of different factors on the performance of assistance, indicating that arbitration should be contextual: it should depend on the robot's confidence in itself and in the user, and even the particulars of the user. Based on the study, we discuss challenges and opportunities that a robot sharing the control with the user might face: adaptation to the context and the user, legibility of behavior, and the closed loop between prediction and user behavior.", "In shared autonomy, user input is combined with semi-autonomous control to achieve a common goal. The goal is often unknown ex-ante, so prior work enables agents to infer the goal from user input and assist with the task. Such methods tend to assume some combination of knowledge of the dynamics of the environment, the user's policy given their goal, and the set of possible goals the user might target, which limits their application to real-world scenarios. We propose a deep reinforcement learning framework for model-free shared autonomy that lifts these assumptions. We use human-in-the-loop reinforcement learning with neural network function approximation to learn an end-to-end mapping from environmental observation and user input to agent action, with task reward as the only form of supervision. 
Controlled studies with users (n = 16) and synthetic pilots playing a video game and flying a real quadrotor demonstrate the ability of our algorithm to assist users with real-time control tasks in which the agent cannot directly access the user's private information through observations, but receives a reward signal and user input that both depend on the user's intent. The agent learns to assist the user without access to this private information, implicitly inferring it from the user's input. This allows the assisted user to complete the task more effectively than the user or an autonomous agent could on their own. This paper is a proof of concept that illustrates the potential for deep reinforcement learning to enable flexible and practical assistive systems.", "As robots begin to enter our homes and workplaces, they will have to deal with the devices and appliances that are already there. Unfortunately, devices that are easy for humans to operate often cause problems for robots [3]. In teleoperation settings, the lack of tactile feedback often makes manipulation of buttons and switches awkward and clumsy [7]. Also, the robot's gripper often occludes the control, making teleoperation difficult. In the autonomous setting, perception of small buttons and switches is often difficult due to sensor limitations and poor lighting conditions. Adding depth information does not help much, since many of the controls we want to manipulate are small, and often close to the noise threshold of currently-available depth sensors typically installed on a mobile robot. This makes it extremely difficult to segment the controls from the other parts of the device. In this paper, we present a shared autonomy approach to the operation of physical device controls. A human operator gives high-level guidance, helps identify controls and their locations, and sequences the actions of the robot. 
Autonomous software on our robot performs the lower-level actions that require closed-loop control, and estimates the exact positions and parameters of controls. We describe the overall system, and then give the results of our initial evaluations, which suggest that the system is effective in operating the controls on a physical device." ] }
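The DAGGER loop referenced above can be sketched in a few lines. This is a toy one-dimensional version under stated assumptions: the states visited by the current policy are stubbed as a fixed tour, `fit` is a tabular learner, and `expert` is a hypothetical always-consistent teacher; none of these names come from @cite_8.

```python
def fit(dataset):
    # Tabular policy: memorize the latest expert label per state.
    table = {s: a for s, a in dataset}
    return lambda s: table.get(s, "noop")

def expert(state):
    # Hypothetical optimal teacher: always move toward the goal at x = 0.
    return "left" if state > 0 else "right"

def dagger(states, rounds=3):
    """Minimal DAGGER: visit states under the current policy (stubbed here
    as a fixed tour), query the expert on each visited state, aggregate
    the labels into one dataset, and refit the policy."""
    dataset = []
    policy = fit(dataset)              # initial (empty) policy
    for _ in range(rounds):
        dataset += [(s, expert(s)) for s in states]  # expert relabels visits
        policy = fit(dataset)
    return policy

policy = dagger([-2, -1, 1, 2])
```

The sketch makes the key assumption visible: the expert must answer consistently for every visited state, which is exactly the requirement the paragraph above calls unrealistic in many environments.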
1906.00580
2947218620
Language style transfer has attracted increasing attention in the past few years. Recent research focuses on improving neural models that transfer from one style to another with labeled data. However, transferring across multiple styles is often very useful in real-life applications. Previous research on language style transfer has two main deficiencies: dependency on massive labeled data and neglect of the mutual influence among different style transfer tasks. In this paper, we propose a multi-agent style transfer system (MAST) for addressing multiple style transfer tasks with limited labeled data, by leveraging abundant unlabeled data and the mutual benefit among the multiple styles. A style transfer agent in our system not only learns from unlabeled data by using techniques like denoising auto-encoders and back-translation, but also learns to cooperate with other style transfer agents in a self-organizing manner. We conduct our experiments by simulating a set of real-world style transfer tasks with multiple versions of the Bible. Our model significantly outperforms the other competitive methods. Extensive results and analysis further verify the efficacy of our proposed system.
The need to leverage unlabeled data has drawn considerable interest from NMT researchers. Works such as @cite_26 @cite_20 , @cite_17 , and @cite_24 propose semi-supervised or unsupervised models. However, these techniques are designed mainly for NMT tasks and have not been widely applied to style transfer. Some unsupervised approaches @cite_4 @cite_22 address style transfer with GANs @cite_8 , but their architectures show drawbacks in content preservation @cite_6 . In this paper, we follow the ideas of Sennrich's work and propose a semi-supervised method that leverages unlabeled data on both the source and target sides.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_22", "@cite_8", "@cite_6", "@cite_24", "@cite_20", "@cite_17" ], "mid": [ "2964013027", "2963366196", "2962912551", "2099471712", "2963034998", "2962824887", "2765961751", "2963216553" ], "abstract": [ "", "This paper focuses on style transfer on the basis of non-parallel text. This is an instance of a broad family of problems including machine translation, decipherment, and sentiment modification. The key challenge is to separate the content from other aspects such as style. We assume a shared latent content distribution across different text corpora, and propose a method that leverages refined alignment of latent representations to perform style transfer. The transferred sentences from one style should match example sentences from the other style as a population. We demonstrate the effectiveness of this cross-alignment method on three tasks: sentiment modification, decipherment of word substitution ciphers, and recovery of word order.", "Binary classifiers are employed as discriminators in GAN-based unsupervised style transfer models to ensure that transferred sentences are similar to sentences in the target domain. One difficulty with the binary discriminator is that error signal is sometimes insufficient to train the model to produce rich-structured language. In this paper, we propose a technique of using a target domain language model as the discriminator to provide richer, token-level feedback during the learning process. Because our language model scores sentences directly using a product of locally normalized probabilities, it offers more stable and more useful training signal to the generator. We train the generator to minimize the negative log likelihood (NLL) of generated sentences evaluated by a language model. By using continuous approximation of the discrete samples, our model can be trained using back-propagation in an end-to-end way. 
Moreover, we find empirically with a language model as a structured discriminator, it is possible to eliminate the adversarial training steps using negative samples, thus making training more stable. We compare our model with previous work using convolutional neural networks (CNNs) as discriminators and show our model outperforms them significantly in three tasks including word substitution decipherment, sentiment modification and related language translation.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "", "In spite of the recent success of neural machine translation (NMT) in standard benchmarks, the lack of large parallel corpora poses a major practical problem for many language pairs. There have been several proposals to alleviate this issue with, for instance, triangulation and semi-supervised learning techniques, but they still require a strong cross-lingual signal. 
In this work, we completely remove the need of parallel data and propose a novel method to train an NMT system in a completely unsupervised manner, relying on nothing but monolingual corpora. Our model builds upon the recent work on unsupervised embedding mappings, and consists of a slightly modified attentional encoder-decoder model that can be trained on monolingual corpora alone using a combination of denoising and backtranslation. Despite the simplicity of the approach, our system obtains 15.56 and 10.21 BLEU points in WMT 2014 French-to-English and German-to-English translation. The model can also profit from small parallel corpora, and attains 21.81 and 15.24 points when combined with 100,000 parallel sentences, respectively.", "Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet requiring tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data. We propose a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data. We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores up to 32.8, without using even a single parallel sentence at training time.", "" ] }
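The back-translation idea credited to Sennrich's work above can be sketched as follows: monolingual target-style sentences are run through a reverse (target-to-source) model to synthesize (pseudo-source, real-target) pairs for training the forward model. Here `toy_reverse` is a hypothetical stand-in for a trained reverse model, not a real MAST component, and the word swaps are invented for illustration.

```python
def back_translate(target_mono, reverse_model):
    """Synthesize pseudo-parallel pairs: (synthetic source, real target).
    Only the target side is human-written; the source side is generated."""
    return [(reverse_model(t), t) for t in target_mono]

# Hypothetical stand-in for a trained reverse model: a crude word mapping
# from "modern" back to "archaic" style.
def toy_reverse(sentence):
    swaps = {"you": "thou", "your": "thy"}
    return " ".join(swaps.get(w, w) for w in sentence.split())

pairs = back_translate(["you shall not pass", "guard your heart"], toy_reverse)
```

Because the real target side stays untouched, noise from the reverse model only affects the input side of the training pairs, which is why back-translation tends to preserve target-style fluency.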
1906.00580
2947218620
Language style transfer has attracted increasing attention in the past few years. Recent research focuses on improving neural models that transfer from one style to another with labeled data. However, transferring across multiple styles is often very useful in real-life applications. Previous research on language style transfer has two main deficiencies: dependency on massive labeled data and neglect of the mutual influence among different style transfer tasks. In this paper, we propose a multi-agent style transfer system (MAST) for addressing multiple style transfer tasks with limited labeled data, by leveraging abundant unlabeled data and the mutual benefit among the multiple styles. A style transfer agent in our system not only learns from unlabeled data by using techniques like denoising auto-encoders and back-translation, but also learns to cooperate with other style transfer agents in a self-organizing manner. We conduct our experiments by simulating a set of real-world style transfer tasks with multiple versions of the Bible. Our model significantly outperforms the other competitive methods. Extensive results and analysis further verify the efficacy of our proposed system.
The core inspiration for our proposed system comes from multi-agent system design. A P2P self-organization system @cite_11 has been successfully applied in practical security systems; its agents follow policies for choosing useful neighbors to produce better predictions, which motivates the design of our style transfer system. Research on reinforcement learning for text generation @cite_3 also demonstrates the practicality of treating text generation models as agents with a large action space.
{ "cite_N": [ "@cite_3", "@cite_11" ], "mid": [ "2964268978", "2102714889" ], "abstract": [ "As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.", "Major trends in the development of modern information technologies are to a large extend determined by important practical problems that arise in economics, ecology, safety of society and individuals, and in other fields. Even though these problems seem to be quite different and the requirements for their software implementation are also different, they have many common features, which imply the most stringent requirements for modern information technologies. These features were analyzed in the first part of the present paper. 
That analysis showed that the new requirements for the model and software implementation of such problems are best met by the multiagent self-organizing system model. In this paper, we consider examples of using this model in various applications and describe their architectures and software implementation; in particular, multiagent self-organization models as applied for flood forecasting and planning and operational enterprise management are described. New capabilities of multiagent self-organizing systems are demonstrated using a self-learning system for detecting intrusions into computer networks as an example. Here, the problem of self-configuration of an overlay network is actually solved. The capabilities of a multiagent self-organizing system in large-scale control in real time are demonstrated using adaptive traffic control in large cities. For the software implementation of multiagent self-organizing systems, special development tools that are different from the existing ones are needed because the conventional top-down development paradigm is inappropriate for self-organizing architectures. The cause is that the global behavior of a multiagent self-organizing system emerges due to local interactions; therefore, it cannot be predicted in advance. For that reason, the bottom-up development model is more appropriate for such systems. In this paper, we give a brief review of the models and approaches proposed for this purpose. One of the promising approaches based on the use of the so-called self-organization design patterns is described in more detail. Results of using the multiagent self-organization model are discussed and prospects of its practical application are estimated." ] }
1906.00628
2946911432
We present an efficient technique that allows training classification networks that are verifiably robust against norm-bounded adversarial attacks. This framework builds upon the work of , who applies interval arithmetic to bound the activations at each layer and keeps the prediction invariant to the input perturbation. While that method is faster than competitive approaches, it requires careful tuning of hyper-parameters and a large number of epochs to converge. To speed up and stabilize training, we supply the cost function with an additional term that encourages the model to keep the interval bounds at hidden layers small. Experimental results demonstrate that we can achieve comparable (or even better) results using a smaller number of training iterations, in a more stable fashion. Moreover, the proposed model is less sensitive to the exact specification of the training process, which makes it easier for practitioners to use.
To speed up the training of verifiably robust models, one can bound the set of activations reachable through a norm-bounded perturbation @cite_14 @cite_35 . In @cite_24 , linear programming was used to find a convex outer bound for ReLU networks; this approach was later extended to general non-ReLU neurons @cite_33 . As an alternative, @cite_18 @cite_20 @cite_3 adapted the framework of abstract transformers to compute an approximation to the adversarial polytope during SGD training, which allows networks to be trained on entire regions of the input space at once. Interval bound propagation @cite_17 applies interval arithmetic to propagate an axis-aligned bounding box from layer to layer. An analogous idea was used in @cite_26 , in which predictor and verifier networks are trained simultaneously. While these methods are computationally appealing, they require careful tuning of hyper-parameters to provide tight bounds on the verification network. Finally, there are also hybrid methods that combine exact and relaxed verifiers @cite_28 @cite_38 .
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_14", "@cite_26", "@cite_33", "@cite_38", "@cite_28", "@cite_3", "@cite_24", "@cite_20", "@cite_17" ], "mid": [ "", "2803850896", "2917875722", "2803392236", "2963424284", "", "2801079363", "", "2766462876", "", "2898963688" ], "abstract": [ "", "", "Verification of neural networks enables us to gauge their robustness against adversarial attacks. Verification algorithms fall into two categories: exact verifiers that run in exponential time and relaxed verifiers that are efficient but incomplete. In this paper, we unify all existing LP-relaxed verifiers, to the best of our knowledge, under a general convex relaxation framework. This framework works for neural networks with diverse architectures and nonlinearities and covers both primal and dual views of robustness verification. We further prove strong duality between the primal and dual problems under very mild conditions. Next, we perform large-scale experiments, amounting to more than 22 CPU-years, to obtain exact solution to the convex-relaxed problem that is optimal within our framework for ReLU networks. We find the exact solution does not significantly improve upon the gap between PGD and existing relaxed verifiers for various networks trained normally or robustly on MNIST and CIFAR datasets. Our results suggest there is an inherent barrier to tight verification for the large class of methods captured by our framework. We discuss possible causes of this barrier and potential future directions for bypassing it.", "This paper proposes a new algorithmic framework, predictor-verifier training, to train neural networks that are verifiable, i.e., networks that provably satisfy some desired input-output properties. The key idea is to simultaneously train two networks: a predictor network that performs the task at hand,e.g., predicting labels given inputs, and a verifier network that computes a bound on how well the predictor satisfies the properties being verified. 
Both networks can be trained simultaneously to optimize a weighted combination of the standard data-fitting loss and a term that bounds the maximum violation of the property. Experiments show that not only is the predictor-verifier architecture able to train networks to achieve state of the art verified robustness to adversarial examples with much shorter training times (outperforming previous algorithms on small datasets like MNIST and SVHN), but it can also be scaled to produce the first known (to the best of our knowledge) verifiably robust networks for CIFAR-10.", "Finding minimum distortion of adversarial examples and thus certifying robustness in neural networks classifiers is known to be a challenging problem. Nevertheless, recently it has been shown to be possible to give a non-trivial certified lower bound of minimum distortion, and some recent progress has been made towards this direction by exploiting the piece-wise linear nature of ReLU activations. However, a generic robustness certification for activation functions still remains largely unexplored. To address this issue, in this paper we introduce CROWN, a general framework to certify robustness of neural networks with general activation functions. The novelty in our algorithm consists of bounding a given activation function with linear and quadratic functions, hence allowing it to tackle general activation functions including but not limited to the four popular choices: ReLU, tanh, sigmoid and arctan. In addition, we facilitate the search for a tighter certified lower bound by selecting appropriate surrogates for each neuron activation. Experimental results show that CROWN on ReLU networks can notably improve the certified lower bounds compared to the current state-of-the-art algorithm Fast-Lin, while having comparable computational efficiency. Furthermore, CROWN also demonstrates its effectiveness and flexibility on networks with general activation functions, including tanh, sigmoid and arctan. 
To the best of our knowledge, CROWN is the first framework that can efficiently certify non-trivial robustness for general activation functions in neural networks.", "", "The success of Deep Learning and its potential use in many safety-critical applications has motivated research on formal verification of Neural Network (NN) models. Despite the reputation of learned NN models to behave as black boxes and the theoretical hardness of proving their properties, researchers have been successful in verifying some classes of models by exploiting their piecewise linear structure and taking insights from formal methods such as Satisifiability Modulo Theory. These methods are however still far from scaling to realistic neural networks. To facilitate progress on this crucial area, we make two key contributions. First, we present a unified framework that encompasses previous methods. This analysis results in the identification of new methods that combine the strengths of multiple existing approaches, accomplishing a speedup of two orders of magnitude compared to the previous state of the art. Second, we propose a new data set of benchmarks which includes a collection of previously released testcases. We use the benchmark to provide the first experimental comparison of existing algorithms and identify the factors impacting the hardness of verification problems.", "", "We propose a method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations (on the training data; for previously unseen examples, the approach will be guaranteed to detect all adversarial examples, though it may flag some non-adversarial examples as well). The basic idea of the approach is to consider a convex outer approximation of the set of activations reachable through a norm-bounded perturbation, and we develop a robust optimization procedure that minimizes the worst case loss over this outer region (via a linear program). 
Crucially, we show that the dual problem to this linear program can itself be represented as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss. The end result is that by executing a few more forward and backward passes through a slightly modified version of the original network (though possibly with much larger batch sizes), we can learn a classifier that is provably robust to any norm-bounded adversarial attack. We illustrate the approach on a toy 2D robust classification task, and on a simple convolutional architecture applied to MNIST, where we produce a classifier that provably has less than 8.4% test error for any adversarial attack with bounded @math norm less than @math . This represents the largest verified network that we are aware of, and we discuss future challenges in scaling the approach to much larger domains.", "", "Recent work has shown that it is possible to train deep neural networks that are verifiably robust to norm-bounded adversarial perturbations. Most of these methods are based on minimizing an upper bound on the worst-case loss over all possible adversarial perturbations. While these techniques show promise, they remain hard to scale to larger networks. Through a comprehensive analysis, we show how a careful implementation of a simple bounding technique, interval bound propagation (IBP), can be exploited to train verifiably robust neural networks that beat the state-of-the-art in verified accuracy. While the upper bound computed by IBP can be quite weak for general networks, we demonstrate that an appropriate loss and choice of hyper-parameters allows the network to adapt such that the IBP bound is tight. This results in a fast and stable learning algorithm that outperforms more sophisticated methods and achieves state-of-the-art results on MNIST, CIFAR-10 and SVHN.
It also allows us to obtain the first verifiably robust model on a downscaled version of ImageNet." ] }
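The interval bound propagation (IBP) technique named in the abstract above can be sketched in a few lines: elementwise input bounds are pushed through each affine layer by splitting the weight matrix into its positive and negative parts, and monotone activations such as ReLU are applied to the bounds directly. This is a minimal illustrative sketch, not the training procedure from the paper; the layer shapes and weights below are made up.

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate elementwise bounds [l, u] through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    # The lower bound takes l where weights are positive and u where negative.
    new_l = W_pos @ l + W_neg @ u + b
    new_u = W_pos @ u + W_neg @ l + b
    return new_l, new_u

def ibp_relu(l, u):
    """ReLU is monotone, so it maps interval bounds to interval bounds."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)
```

Every input inside the box [l, u] is guaranteed to produce activations inside the propagated box, which is what makes the (possibly loose) bound usable as a verified-robustness certificate.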
1906.00423
2946912408
Consider a two-player zero-sum stochastic game where the transition function can be embedded in a given feature space. We propose a two-player Q-learning algorithm for approximating the Nash equilibrium strategy via sampling. The algorithm is shown to find an @math -optimal strategy using a sample size linear in the number of features. To further improve its sample efficiency, we develop an accelerated algorithm by adopting techniques such as variance reduction, monotonicity preservation and two-sided strategy approximation. We prove that the algorithm is guaranteed to find an @math -optimal strategy using no more than @math samples with high probability, where @math is the number of features and @math is a discount factor. The sample, time and space complexities of the algorithm are independent of the original dimensions of the game.
In the special case of MDPs, there exists a large body of work on sample complexity and sampling-based algorithms. For the tabular setting (finitely many states and actions), the sample complexity of MDPs with a sampling oracle has been studied in @cite_8 @cite_2 @cite_7 @cite_13 @cite_34 @cite_14 @cite_33 . Lower bounds on sample complexity have been studied in @cite_24 @cite_44 @cite_45 , where the first tight lower bound @math is obtained in @cite_24 . The first sample-optimal algorithm for finding an @math -optimal value is proposed in @cite_24 . @cite_12 gives the first algorithm that finds an @math -optimal policy using the optimal sample complexity @math for all values of @math . For solving MDPs using @math linearly additive features, @cite_41 proved a sample-complexity lower bound of @math . It also provided an algorithm that achieves this lower bound up to log factors; however, the analysis of the algorithm relies heavily on an extra "anchor state" assumption. In @cite_0 , a primal-dual method for solving MDPs with linear and bilinear representations of value functions and transition models is proposed for the undiscounted case. In @cite_18 , the sample complexity of contextual decision processes is studied.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_33", "@cite_7", "@cite_8", "@cite_41", "@cite_24", "@cite_44", "@cite_0", "@cite_45", "@cite_2", "@cite_34", "@cite_13", "@cite_12" ], "mid": [ "2545659366", "", "", "", "2122701159", "2911793117", "2120678009", "", "2964170525", "", "", "", "", "2890347272" ], "abstract": [ "This paper studies systematic exploration for reinforcement learning (RL) with rich observations and function approximation. We introduce contextual decision processes (CDPs), that unify most prior RL settings. Our first contribution is a complexity measure, the Bellman rank, that we show enables tractable learning of near-optimal behavior in CDPs and is naturally small for many well-studied RL models. Our second contribution is a new RL algorithm that does systematic exploration to learn near-optimal behavior in CDPs with low Bellman rank. The algorithm requires a number of samples that is polynomial in all relevant parameters but independent of the number of unique contexts. Our approach uses Bellman error minimization with optimistic exploration and provides new insights into efficient exploration for RL with function approximation.", "", "", "", "In this paper, we address two issues of long-standing interest in the reinforcement learning literature. First, what kinds of performance guarantees can be made for Q-learning after only a finite number of actions? Second, what quantitative comparisons can be made between Q-learning and model-based (indirect) approaches, which use experience to estimate next-state distributions for off-line value iteration? We first show that both Q-learning and the indirect approach enjoy rather rapid convergence to the optimal policy as a function of the number of state transitions observed. 
In particular, on the order of only (N log(1/ε)/ε²)(log(N) + log log(1/ε)) transitions are sufficient for both algorithms to come within ε of the optimal policy, in an idealized model that assumes the observed transitions are \"well-mixed\" throughout an N-state MDP. Thus, the two approaches have roughly the same sample complexity. Perhaps surprisingly, this sample complexity is far less than what is required for the model-based approach to actually construct a good approximation to the next-state distribution. The result also shows that the amount of memory required by the model-based approach is closer to N than to N². For either approach, to remove the assumption that the observed transitions are well-mixed, we consider a model in which the transitions are determined by a fixed, arbitrary exploration policy. Bounds on the number of transitions required in order to achieve a desired level of performance are then related to the stationary distribution and mixing time of this policy.
A matching information-theoretic lower bound is proved, confirming the sample optimality of the proposed method with respect to all parameters (up to polylog factors).", "We consider the problems of learning the optimal action-value function and the optimal policy in discounted-reward Markov decision processes (MDPs). We prove new PAC bounds on the sample-complexity of two well-known model-based reinforcement learning (RL) algorithms in the presence of a generative model of the MDP: value iteration and policy iteration. The first result indicates that for an MDP with N state-action pairs and discount factor γ∈[0,1), only O(N log(N/δ)/((1−γ)³ε²)) state-transition samples are required to find an ε-optimal estimation of the action-value function with probability (w.p.) 1−δ. Further, we prove that, for small values of ε, an order of O(N log(N/δ)/((1−γ)³ε²)) samples is required to find an ε-optimal policy w.p. 1−δ. We also prove a matching lower bound of Θ(N log(N/δ)/((1−γ)³ε²)) on the sample complexity of estimating the optimal action-value function with ε accuracy. To the best of our knowledge, this is the first minimax result on the sample complexity of RL: the upper bounds match the lower bound in terms of N, ε, δ and 1/(1−γ) up to a constant factor. Also, both our lower bound and upper bound improve on the state-of-the-art in terms of their dependence on 1/(1−γ).", "", "", "", "", "", "", "In this paper we consider the problem of computing an ϵ-optimal policy of a discounted Markov Decision Process (DMDP) provided we can only access its transition function through a generative sampling model that given any state-action pair samples from the transition function in O(1) time.
Given such a DMDP with states S, actions A, discount factor γ∈(0,1), and rewards in range [0,1], we provide an algorithm which computes an ϵ-optimal policy with probability 1−δ, where both the run time spent and the number of samples taken are upper bounded by O[ (|S||A|/((1−γ)³ϵ²)) log(|S||A|/((1−γ)δϵ)) log(1/((1−γ)ϵ)) ]. For fixed values of ϵ, this improves upon the previous best known bounds by a factor of (1−γ)⁻¹ and matches the sample complexity lower bounds proved in azar2013minimax up to logarithmic factors. We also extend our method to computing ϵ-optimal policies for finite-horizon MDPs with a generative model and provide a nearly matching sample complexity lower bound." ] }
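The generative-model setting that runs through this record — an oracle that, given any (s, a), samples a next state in O(1) time — can be illustrated with a minimal sampled value-iteration sketch. This is a toy illustration of the setting, not the variance-reduced algorithms of the cited papers; the MDP, sample count m, and iteration count below are made up.

```python
import numpy as np

def sampled_value_iteration(sample_next, R, n_states, n_actions,
                            gamma=0.9, m=100, iters=200, seed=0):
    """Value iteration that replaces the exact expectation over next states
    with the empirical mean of m samples from a generative model per (s, a)."""
    rng = np.random.default_rng(seed)
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = np.empty((n_states, n_actions))
        for s in range(n_states):
            for a in range(n_actions):
                nxt = [sample_next(s, a, rng) for _ in range(m)]
                Q[s, a] = R[s, a] + gamma * np.mean(V[nxt])
        V = Q.max(axis=1)
    return V
```

On a single-state MDP with reward 1 the fixed point is 1/(1−γ) = 10 for γ = 0.9, and the iterates converge to it since the toy transition is deterministic.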
1906.00423
2946912408
Consider a two-player zero-sum stochastic game where the transition function can be embedded in a given feature space. We propose a two-player Q-learning algorithm for approximating the Nash equilibrium strategy via sampling. The algorithm is shown to find an @math -optimal strategy using a sample size linear in the number of features. To further improve its sample efficiency, we develop an accelerated algorithm by adopting techniques such as variance reduction, monotonicity preservation and two-sided strategy approximation. We prove that the algorithm is guaranteed to find an @math -optimal strategy using no more than @math samples with high probability, where @math is the number of features and @math is a discount factor. The sample, time and space complexities of the algorithm are independent of the original dimensions of the game.
As for general stochastic games, the minimax Q-learning algorithm and the friend-or-foe Q-learning algorithm were introduced in @cite_37 and @cite_15 , respectively. The Nash Q-learning algorithm is proposed for zero-sum games in @cite_4 and for general-sum games in @cite_40 @cite_23 . Also, in @cite_21 , the error of approximate Q-learning is estimated. In @cite_9 , a finite-sample analysis of multi-agent reinforcement learning is provided. To the best of our knowledge, there is no known algorithm that solves 2-TBSG using features with a sample complexity analysis.
{ "cite_N": [ "@cite_37", "@cite_4", "@cite_9", "@cite_21", "@cite_40", "@cite_23", "@cite_15" ], "mid": [ "1542941925", "2120846115", "2904058351", "1788877992", "1973039793", "", "" ], "abstract": [ "In the Markov decision process (MDP) formalization of reinforcement learning, a single adaptive agent interacts with an environment defined by a probabilistic transition function. In this solipsistic view, secondary agents can only be part of the environment and are therefore fixed in their behavior. The framework of Markov games allows us to widen this view to include multiple adaptive agents with interacting or competing goals. This paper considers a step in this direction in which exactly two agents with diametrically opposed goals share an environment. It describes a Q-learning-like algorithm for finding optimal policies and demonstrates its application to a simple two-player game in which the optimal policy is probabilistic.", "We extend Q-learning to a noncooperative multiagent context, using the framework of general-sum stochastic games. A learning agent maintains Q-functions over joint actions, and performs updates based on assuming Nash equilibrium behavior over the current Q-values. This learning protocol provably converges given certain restrictions on the stage games (defined by Q-values) that arise during learning. Experiments with a pair of two-player grid games suggest that such restrictions on the game structure are not necessarily required. Stage games encountered during learning in both grid environments violate the conditions. However, learning consistently converges in the first grid game, which has a unique equilibrium Q-function, but sometimes fails to converge in the second, which has three different equilibrium Q-functions. In a comparison of offline learning performance in both games, we find agents are more likely to reach a joint optimal path with Nash Q-learning than with a single-agent Q-learning method.
When at least one agent adopts Nash Q-learning, the performance of both agents is better than using single-agent Q-learning. We have also implemented an online version of Nash Q-learning that balances exploration with exploitation, yielding improved performance.", "Despite the increasing interest in multi-agent reinforcement learning (MARL) in the community, understanding its theoretical foundation has long been recognized as a challenging problem. In this work, we make an attempt towards addressing this problem, by providing finite-sample analyses for fully decentralized MARL. Specifically, we consider two fully decentralized MARL settings, where teams of agents are connected by time-varying communication networks, and either collaborate or compete in a zero-sum game, without any central controller. These settings cover many conventional MARL settings in the literature. For both settings, we develop batch MARL algorithms that can be implemented in a fully decentralized fashion, and quantify the finite-sample errors of the estimated action-value functions. Our error analyses characterize how the function class, the number of samples within each iteration, and the number of iterations determine the statistical accuracy of the proposed algorithms. Our results, compared to the finite-sample bounds for single-agent RL, identify the involvement of additional error terms caused by decentralized computation, which is inherent in our decentralized MARL setting. To our knowledge, our work appears to be the first finite-sample analyses for MARL, which sheds light on understanding both the sample and computational efficiency of MARL algorithms.", "This paper provides an analysis of error propagation in Approximate Dynamic Programming applied to zero-sum two-player Stochastic Games. 
We provide a novel and unified error propagation analysis in Lp-norm of three well-known algorithms adapted to Stochastic Games (namely Approximate Value Iteration, Approximate Policy Iteration and Approximate Generalized Policy Iteration). We show that we can achieve a stationary policy which is (2γe + e′)/(1−γ)²-optimal, where e is the value function approximation error and e′ is the approximate greedy operator error. In addition, we provide a practical algorithm (AGPI-Q) to solve infinite horizon γ-discounted two-player zero-sum Stochastic Games in a batch setting. It is an extension of the Fitted-Q algorithm (which solves Markov Decision Processes from data) and can be non-parametric. Finally, we demonstrate experimentally the performance of AGPI-Q on a simultaneous two-player game, namely Alesia.", "Markov games are a model of multiagent environments that are convenient for studying multiagent reinforcement learning. This paper describes a set of reinforcement-learning algorithms based on estimating value functions and presents convergence theorems for these algorithms. The main contribution of this paper is that it presents the convergence theorems in a way that makes it easy to reason about the behavior of simultaneous learners in a shared environment.", "", "" ] }
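The minimax Q-learning idea from the first abstract in this record can be sketched concretely. For a 2x2 stage game the value for the row (maximizing) player has a closed form, which sidesteps the linear program needed in the general case; the learning rate and discount below are placeholder values, and the Q-table layout is an assumption of this sketch.

```python
def matrix_game_value(A):
    """Minimax value of a 2x2 zero-sum matrix game for the row player."""
    maximin = max(min(row) for row in A)
    minimax = min(max(A[i][j] for i in range(2)) for j in range(2))
    if maximin == minimax:            # pure-strategy saddle point
        return maximin
    (a, b), (c, d) = A                # mixed equilibrium, closed form
    return (a * d - b * c) / (a + d - b - c)

def minimax_q_update(Q, s, a, o, r, s_next, alpha=0.1, gamma=0.9):
    """One minimax-Q step: back up the stage-game value of the next state.
    Q[s] is the 2x2 payoff table over (own action a, opponent action o)."""
    target = r + gamma * matrix_game_value(Q[s_next])
    Q[s][a][o] += alpha * (target - Q[s][a][o])
```

For matching pennies [[1, -1], [-1, 1]] the value is 0, recovered here from the mixed-equilibrium formula since no saddle point exists.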
1906.00377
2912317488
High-accuracy video label prediction (classification) models are attributed to large-scale data. These data can be frame-feature sequences extracted by a pre-trained convolutional neural network, which promotes efficiency in creating models. Unsupervised solutions such as feature average pooling, a simple label-independent and parameter-free method, have limited ability to represent the video. Supervised methods, like RNNs, can greatly improve recognition accuracy. However, videos are usually long and there are hierarchical relationships between frames across events, so the performance of RNN-based models degrades. In this paper, we propose a novel video classification method based on a deep convolutional graph neural network (DCGN). The proposed method utilizes the hierarchical structure of the video and performs multi-level feature extraction on the video frame sequence through the graph network, obtaining a video representation that reflects event semantics hierarchically. We test our model on the YouTube-8M Large-Scale Video Understanding dataset, and the result outperforms RNN-based benchmarks.
Video feature sequence classification is essentially the task of aggregating video features, that is, aggregating @math @math -dimensional features into one @math -dimensional feature by mining statistical relationships among these @math features. The aggregated @math -dimensional feature is a highly concentrated embedding, making it easy for the classifier to map the visual embedding space into the label semantic space. It is common to use recurrent neural networks, such as LSTM (Long Short-Term Memory) @cite_0 @cite_6 @cite_1 and GRU (Gated Recurrent Unit) @cite_9 @cite_4 networks, both state-of-the-art approaches for many sequence modeling tasks. However, the hidden state of an RNN depends on previous steps, which prevents parallel computation. Moreover, LSTM and GRU use gates to address the RNN gradient vanishing problem, but the sigmoid in the gates still causes gradient decay over layers in depth. It has been shown that LSTM has difficulty converging as sequence length increases @cite_7 . There also exist end-to-end trainable order-less aggregation methods, such as DBoF (Deep Bag of Frames pooling) @cite_2 .
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_9", "@cite_1", "@cite_6", "@cite_0", "@cite_2" ], "mid": [ "2180092181", "2791366550", "2157331557", "1923404803", "2116435618", "2064675550", "2524365899" ], "abstract": [ "We propose an approach to learn spatio-temporal features in videos from intermediate visual representations we call \"percepts\" using Gated-Recurrent-Unit Recurrent Networks (GRUs). Our method relies on percepts that are extracted from all levels of a deep convolutional network trained on the large ImageNet dataset. While high-level percepts contain highly discriminative information, they tend to have a low spatial resolution. Low-level percepts, on the other hand, preserve a higher spatial resolution from which we can model finer motion patterns. Using low-level percepts can lead to high-dimensional video representations. To mitigate this effect and control the model's number of parameters, we introduce a variant of the GRU model that leverages the convolution operations to enforce sparse connectivity of the model units and share parameters across the input spatial locations. We empirically validate our approach on both Human Action Recognition and Video Captioning tasks. In particular, we achieve results equivalent to the state of the art on the YouTube2Text dataset using a simpler text-decoder model and without extra 3D CNN features.", "Recurrent neural networks (RNNs) have been widely used for processing sequential data. However, RNNs are commonly difficult to train due to the well-known gradient vanishing and exploding problems, and it is hard for them to learn long-term patterns. Long short-term memory (LSTM) and gated recurrent unit (GRU) were developed to address these problems, but the use of hyperbolic tangent and sigmoid activation functions results in gradient decay over layers. Consequently, construction of an efficiently trainable deep network is challenging.
In addition, all the neurons in an RNN layer are entangled together and their behaviour is hard to interpret. To address these problems, a new type of RNN, referred to as independently recurrent neural network (IndRNN), is proposed in this paper, where neurons in the same layer are independent of each other and they are connected across layers. We have shown that an IndRNN can be easily regulated to prevent the gradient exploding and vanishing problems while allowing the network to learn long-term dependencies. Moreover, an IndRNN can work with non-saturated activation functions such as relu (rectified linear unit) and be still trained robustly. Multiple IndRNNs can be stacked to construct a network that is deeper than the existing RNNs. Experimental results have shown that the proposed IndRNN is able to process very long sequences (over 5000 time steps), can be used to construct very deep networks (21 layers used in the experiment) and still be trained robustly. Better performances have been achieved on various tasks by using IndRNNs compared with the traditional RNN and LSTM.", "In this paper, we propose a novel neural network model called RNN Encoder‐ Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixedlength vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder‐Decoder as an additional feature in the existing log-linear model. 
Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.", "Convolutional neural networks (CNNs) have been extensively applied for image recognition problems giving state-of-the-art results on recognition, detection, segmentation and retrieval. In this work we propose and evaluate several deep neural network architectures to combine image information across a video over longer time periods than previously attempted. We propose two methods capable of handling full length videos. The first method explores various convolutional temporal feature pooling architectures, examining the various design choices which need to be made when adapting a CNN for this task. The second proposed method explicitly models the video as an ordered sequence of frames. For this purpose we employ a recurrent neural network that uses Long Short-Term Memory (LSTM) cells which are connected to the output of the underlying CNN. Our best networks exhibit significant performance improvements over previously published results on the Sports 1 million dataset (73.1 vs. 60.9 ) and the UCF-101 datasets with (88.6 vs. 88.0 ) and without additional optical flow information (82.6 vs. 73.0 ).", "We use Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. 
We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.", "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O. 1. Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.", "Many recent advancements in Computer Vision are attributed to large datasets. 
Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of 8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics." ] }
1708.05482
2748618075
Emotion cause extraction aims to identify the reasons behind a certain emotion expressed in text. It is a much more difficult task compared to emotion classification. Inspired by recent advances in using deep memory networks for question answering (QA), we propose a new approach which considers emotion cause identification as a reading comprehension task in QA. Inspired by convolutional neural networks, we propose a new mechanism to store relevant context in different memory slots to model context information. Our proposed approach can extract both word level sequence features and lexical features. Performance evaluation shows that our method achieves the state-of-the-art performance on a recently released emotion cause dataset, outperforming a number of competitive baselines by at least 3.01 in F-measure.
Identifying emotion categories in text is one of the key tasks in NLP @cite_29 . Going one step further, emotion cause extraction can reveal important information about what causes a certain emotion and why there is an emotion change. In this section, we introduce related work on emotion analysis, including emotion cause extraction.
{ "cite_N": [ "@cite_29" ], "mid": [ "2397482367" ], "abstract": [ "1. Introduction 2. The problem of sentiment analysis 3. Document sentiment classification 4. Sentence subjectivity and sentiment classification 5. Aspect sentiment classification 6. Aspect and entity extraction 7. Sentiment lexicon generation 8. Analysis of comparative opinions 9. Opinion summarization and search 10. Analysis of debates and comments 11. Mining intentions 12. Detecting fake or deceptive opinions 13. Quality of reviews." ] }
1708.05482
2748618075
Emotion cause extraction aims to identify the reasons behind a certain emotion expressed in text. It is a much more difficult task compared to emotion classification. Inspired by recent advances in using deep memory networks for question answering (QA), we propose a new approach which considers emotion cause identification as a reading comprehension task in QA. Inspired by convolutional neural networks, we propose a new mechanism to store relevant context in different memory slots to model context information. Our proposed approach can extract both word level sequence features and lexical features. Performance evaluation shows that our method achieves the state-of-the-art performance on a recently released emotion cause dataset, outperforming a number of competitive baselines by at least 3.01 in F-measure.
Existing work in emotion analysis mostly focuses on emotion classification @cite_6 @cite_23 and emotion information extraction @cite_1 . One study used a coarse-to-fine method to classify emotions in Chinese blogs. Another proposed a joint model to co-train a polarity classifier and an emotion classifier. A multi-task Gaussian-process based method has been proposed for emotion classification, linguistic templates have been used to predict readers' emotions, and an unsupervised method has been used to extract emotion feelers from Bengali blogs. There are other studies which focus on joint learning of sentiments @cite_14 @cite_18 or emotions in tweets or blogs @cite_13 @cite_28 @cite_2 @cite_17 @cite_36 , and on emotion lexicon construction @cite_31 @cite_10 @cite_30 . However, the aforementioned work all focused on analysis of emotion expressions rather than emotion causes.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_14", "@cite_28", "@cite_36", "@cite_10", "@cite_1", "@cite_6", "@cite_23", "@cite_2", "@cite_31", "@cite_13", "@cite_17" ], "mid": [ "2136961259", "2250331987", "", "2294062064", "2251878733", "", "39229845", "", "2565649476", "2110432162", "2040467972", "1989145535", "2136201510" ], "abstract": [ "While many lexica annotated with words polarity are available for sentiment analysis, very few tackle the harder task of emotion analysis and are usually quite limited in coverage. In this paper, we present a novel approach for extracting - in a totally automated way - a high-coverage and high-precision lexicon of roughly 37 thousand terms annotated with emotion scores, called DepecheMood. Our approach exploits in an original way 'crowd-sourced' affective annotation implicitly provided by readers of news articles from rappler.com. By providing new state-of-the-art performances in unsupervised settings for regression and classification tasks, even using a na \" ve approach, our experiments show the beneficial impact of harvesting social media data for affective lexicon building.", "Sentiment Similarity of word pairs reflects the distance between the words regarding their underlying sentiments. This paper aims to infer the sentiment similarity between word pairs with respect to their senses. To achieve this aim, we propose a probabilistic emotionbased approach that is built on a hidden emotional model. The model aims to predict a vector of basic human emotions for each sense of the words. The resultant emotional vectors are then employed to infer the sentiment similarity of word pairs. We apply the proposed approach to address two main NLP tasks, namely, Indirect yes no Question Answer Pairs inference and Sentiment Orientation prediction. Extensive experiments demonstrate the effectiveness of the proposed approach.", "", "Emotion classification can be generally done from both the writer’s and reader’s perspectives. 
In this study, we find that two foundational tasks in emotion classification, i.e., reader’s emotion classification on the news and writer’s emotion classification on the comments, are strongly related to each other in terms of coarse-grained emotion categories, i.e., negative and positive. On the basis, we propose a respective way to jointly model these two tasks. In particular, a cotraining algorithm is proposed to improve semi-supervised learning of the two tasks. Experimental evaluation shows the effectiveness of our joint modeling approach.", "Microblog has become a major platform for information about real-world events. Automatically discovering realworld events from microblog has attracted the attention of many researchers. However, most of existing work ignore the importance of emotion information for event detection. We argue that people’s emotional reactions immediately reflect the occurringofreal-worldeventsand shouldbeimportant for event detection. In this study, we focus on the problem of communityrelated event detection by community emotions. To address the problem, we propose a novel framework which include the following three key components: microblog emotion classification, community emotion aggregation and community emotion burst detection. We evaluate our approach on real microblog data sets. Experimental results demonstrate the effectiveness of the proposed framework.", "", "In the past years, there has been a growing interest in developing computational methods for affect detection from text. Although much research has been done in the field, this task still remains far from being solved, as the presence of affect is only in a very small number of cases marked by the presence of emotion-related words. In the rest of the cases, no such lexical clues of emotion are present in text and special commonsense knowledge is necessary in order to interpret the meaning of the situation described and understand its affective connotations. 
In the light of the challenges posed by the detection of emotions from contexts in which no lexical clue is present, we proposed and implemented a knowledge base – EmotiNet – that stores situations in which specific emotions are felt, represented as “action chains”. Following the initial evaluations, in this chapter, we describe and evaluate two different methods to extend the knowledge contained in EmotiNet: using lexical and ontological knowledge. Results show that such types of knowledge sources are complementary and can help to improve both the precision, as well as the recall of implicit emotion detection systems based on commonsense knowledge.", "", "", "While there have been many attempts to estimate the emotion of an addresser from her his utterance, few studies have explored how her his utterance affects the emotion of the addressee. This has motivated us to investigate two novel tasks: predicting the emotion of the addressee and generating a response that elicits a specific emotion in the addressee’s mind. We target Japanese Twitter posts as a source of dialogue data and automatically build training data for learning the predictors and generators. The feasibility of our approaches is assessed by using 1099 utterance-response pairs that are built by five human workers.", "Even though considerable attention has been given to the polarity of words (positive and negative) and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper, we show how the combined strength and wisdom of the crowds can be used to generate a large, high-quality, word–emotion and word–polarity association lexicon quickly and inexpensively. We enumerate the challenges in emotion annotation in a crowdsourcing scenario and propose solutions to address them. 
Most notably, in addition to questions about emotions associated with terms, we show how the inclusion of a word choice question can discourage malicious data entry, help to identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help to obtain annotations at sense level (rather than at word level). We conducted experiments on how to formulate the emotion-annotation questions, and show that asking if a term is associated with an emotion leads to markedly higher interannotator agreement than that obtained by asking if a term evokes an emotion.", "There is plenty of evidence that emotion analysis has many valuable applications. In this study a blog emotion corpus is constructed for Chinese emotional expression analysis. This corpus contains manual annotation of eight emotional categories (expect, joy, love, surprise, anxiety, sorrow, angry and hate), emotion intensity, emotion holder target, emotional word phrase, degree word, negative word, conjunction, rhetoric, punctuation and other linguistic expressions that indicate emotion. Annotation agreement analyses for emotion classes and emotional words and phrases are described. Then, using this corpus, we explore emotion expressions in Chinese and present the analyses on them.", "We present a weakly supervised approach for learning hashtags, hashtag patterns, and phrases associated with five emotions: AFFECTION, ANGER RAGE, FEAR ANXIETY, JOY, and SADNESS DISAPPOINTMENT. Starting with seed hashtags to label an initial set of tweets, we train emotion classifiers and use them to learn new emotion hashtags and hashtag patterns. This process then repeats in a bootstrapping framework. Emotion phrases are also extracted from the learned hashtags and used to create phrase-based emotion classifiers. We show that the learned set of emotion indicators yields a substantial improvement in F-scores, ranging from + 5 to + 18 over baseline classifiers." ] }
1708.05482
2748618075
Emotion cause extraction aims to identify the reasons behind a certain emotion expressed in text. It is a much more difficult task compared to emotion classification. Inspired by recent advances in using deep memory networks for question answering (QA), we propose a new approach which considers emotion cause identification as a reading comprehension task in QA. Inspired by convolutional neural networks, we propose a new mechanism to store relevant context in different memory slots to model context information. Our proposed approach can extract both word level sequence features and lexical features. Performance evaluation shows that our method achieves the state-of-the-art performance on a recently released emotion cause dataset, outperforming a number of competitive baselines by at least 3.01 in F-measure.
first proposed a task on emotion cause extraction. They manually constructed a corpus from the Academia Sinica Balanced Chinese Corpus. Based on this corpus, proposed a rule-based method to detect emotion causes based on manually defined linguistic rules. Some studies @cite_3 @cite_15 @cite_0 extended the rule-based method to informal text in Weibo text (Chinese tweets).
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_3" ], "mid": [ "1992605069", "2093862925", "2161624371" ], "abstract": [ "We develop a rule-based system that trigger emotions based on the emotional model.We extract the corresponding cause events in fine-grained emotions.We get the proportions of different cause components under different emotions.The language features and Bayesian probability are used in this paper. Emotion analysis and emotion cause extraction are key research tasks in natural language processing and public opinion mining. This paper presents a rule-based approach to emotion cause component detection for Chinese micro-blogs. Our research has important scientific values on social network knowledge discovery and data mining. It also has a great potential in analyzing the psychological processes of consumers. Firstly, this paper proposes a rule-based system underlying the conditions that trigger emotions based on an emotional model. Secondly, this paper extracts the corresponding cause events in fine-grained emotions from the results of events, actions of agents and aspects of objects. Meanwhile, it is reasonable to get the proportions of different cause components under different emotions by constructing the emotional lexicon and identifying different linguistic features, and the proposed approach is based on Bayesian probability. Finally, this paper presents the experiments on an emotion corpus of Chinese micro-blogs. The experimental results validate the feasibility of the approach. The existing problems and the further works are also present at the end.", "", "To identify the cause of emotion is a new challenge for researchers in nature language processing. Currently, there is no existing works on emotion cause detection from Chinese micro-blogging (Weibo) text. In this study, an emotion cause annotated corpus is firstly designed and developed through anno- tating the emotion cause expressions in Chinese Weibo Text. 
Up to now, an emotion cause annotated corpus which consists of the annotations for 1,333 Chinese Weibo posts is constructed. Based on the observations on this corpus, the characteristics of emotion cause expression are identified. Accordingly, a rule-based emotion cause detection method is developed which uses 25 manually compiled rules. Furthermore, two machine learning based cause detection methods are developed including a classification-based method using support vector machines and a sequence labeling based method using conditional random fields model. It is the largest available resource in this research area. The experimental results show that the rule-based method achieves 68.30% accuracy. Furthermore, the method based on conditional random fields model achieved 77.57% accuracy, which is 37.45% higher than the reference baseline method. These results show the effectiveness of our proposed emotion cause detection method." ] }
1708.05509
2747543643
Automatic generation of facial images has been well studied since the Generative Adversarial Network (GAN) came out. There exist some attempts at applying the GAN model to the problem of generating facial images of anime characters, but none of the existing work gives a promising result. In this work, we explore the training of GAN models specialized on an anime facial image dataset. We address the issue from both the data and the model aspect, by collecting a cleaner, well-suited dataset and leveraging a proper, empirical application of DRAGAN. With quantitative analysis and case studies we demonstrate that our efforts lead to a stable and high-quality model. Moreover, to assist people with anime character design, we build a website (http://make.girls.moe) with our pre-trained model available online, which makes the model easily accessible to the general public.
Generative Adversarial Network (GAN) @cite_12 , proposed by , shows impressive results in image generation @cite_25 , image transfer @cite_8 , super-resolution @cite_26 and many other generation tasks. The essence of GAN can be summarized as training a generator model and a discriminator model simultaneously, where the discriminator model tries to distinguish the real examples, sampled from ground-truth images, from the samples generated by the generator. On the other hand, the generator tries to produce realistic samples that the discriminator is unable to distinguish from the ground-truth samples. The above idea can be described as an adversarial objective applied to both generator and discriminator in the actual training process, which effectively encourages outputs of the generator to be similar to the original data distribution.
{ "cite_N": [ "@cite_26", "@cite_25", "@cite_12", "@cite_8" ], "mid": [ "2523714292", "2173520492", "", "2552465644" ], "abstract": [ "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. 
The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.", "", "We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. 
As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either." ] }
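The adversarial training scheme summarized in the GAN-related passages above is conventionally written as a two-player minimax game. The following LaTeX sketch states the standard formulation of Goodfellow et al.; it is included as a reference formulation and is not an equation drawn from any of the abstracts above.

```latex
% Standard GAN minimax objective (Goodfellow et al., 2014).
% G: generator, D: discriminator, p_data: data distribution, p_z: prior over latent z.
\begin{equation}
\min_{G}\,\max_{D}\; V(D,G) =
    \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_{z}(z)}\!\left[\log\!\left(1 - D(G(z))\right)\right]
\end{equation}
```

The discriminator maximizes this value by correctly scoring real and generated samples, while the generator minimizes it by producing samples the discriminator cannot tell apart from real data.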
1708.05509
2747543643
Automatic generation of facial images has been well studied since the Generative Adversarial Network (GAN) came out. There exist some attempts at applying the GAN model to the problem of generating facial images of anime characters, but none of the existing work gives a promising result. In this work, we explore the training of GAN models specialized on an anime facial image dataset. We address the issue from both the data and the model aspect, by collecting a cleaner, well-suited dataset and leveraging a proper, empirical application of DRAGAN. With quantitative analysis and case studies we demonstrate that our efforts lead to a stable and high-quality model. Moreover, to assist people with anime character design, we build a website (http://make.girls.moe) with our pre-trained model available online, which makes the model easily accessible to the general public.
Although the training process is quite simple, optimizing such models often leads to mode collapse, in which the generator will always produce the same image. To train GANs stably, @cite_9 suggests rendering the discriminator omniscient whenever necessary. By learning a loss function to separate generated samples from their real examples, LS-GAN @cite_14 focuses on improving poor generation results and thus avoids mode collapse. A more detailed discussion of the difficulty of training GANs will be given in Section .
{ "cite_N": [ "@cite_9", "@cite_14" ], "mid": [ "2953246223", "2580360036" ], "abstract": [ "We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator's objective, which is ideal but infeasible in practice, and using the current value of the discriminator, which is often unstable and leads to poor solutions. We show how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator.", "In this paper, we present the Lipschitz regularization theory and algorithms for a novel Loss-Sensitive Generative Adversarial Network (LS-GAN). Specifically, it trains a loss function to distinguish between real and fake samples by designated margins, while learning a generator alternately to produce realistic samples by minimizing their losses. The LS-GAN further regularizes its loss function with a Lipschitz regularity condition on the density of real data, yielding a regularized model that can better generalize to produce new data from a reasonable number of training examples than the classic GAN. We will further present a Generalized LS-GAN (GLS-GAN) and show it contains a large family of regularized GAN models, including both LS-GAN and Wasserstein GAN, as its special cases. Compared with the other GAN models, we will conduct experiments to show both LS-GAN and GLS-GAN exhibit competitive ability in generating new images in terms of the Minimum Reconstruction Error (MRE) assessed on a separate test set. We further extend the LS-GAN to a conditional form for supervised and semi-supervised learning problems, and demonstrate its outstanding performance on image classification tasks." ] }
1708.05509
2747543643
Automatic generation of facial images has been well studied since the Generative Adversarial Network (GAN) came out. There exist some attempts at applying the GAN model to the problem of generating facial images of anime characters, but none of the existing work gives a promising result. In this work, we explore the training of GAN models specialized on an anime facial image dataset. We address the issue from both the data and the model aspect, by collecting a cleaner, well-suited dataset and leveraging a proper, empirical application of DRAGAN. With quantitative analysis and case studies we demonstrate that our efforts lead to a stable and high-quality model. Moreover, to assist people with anime character design, we build a website (http://make.girls.moe) with our pre-trained model available online, which makes the model easily accessible to the general public.
Many variants of GAN have been proposed for generating images. @cite_25 applied convolutional neural networks in GAN to generate images from latent vector inputs. Instead of generating images from latent vectors, several methods use the same adversarial idea for generating images with more meaningful input. Mirza & introduced Conditional Generative Adversarial Nets @cite_22 , using the image class label as a conditional input to generate MNIST digits in a particular class. @cite_16 further employed encoded text as input to produce images that match the text description. Instead of only feeding conditional information as the input, proposed ACGAN @cite_17 , which also trains the discriminator as an auxiliary classifier to predict the conditional input.
{ "cite_N": [ "@cite_22", "@cite_16", "@cite_25", "@cite_17" ], "mid": [ "2125389028", "2949999304", "2173520492", "2950776302" ], "abstract": [ "Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.", "Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image model- ing, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.", "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. 
We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.", "Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. In addition, 84.7% of the classes have samples exhibiting diversity comparable to real ImageNet data." ] }
1708.05096
2750197405
For many, this is no longer a valid question and the case is considered settled, with SDN/NFV (Software Defined Networking / Network Function Virtualization) providing the inevitable innovation enablers solving many outstanding management issues regarding 5G. However, given the monumental task of softwarization of the radio access network (RAN) while 5G is just around the corner and some companies have already started unveiling their 5G equipment, the concern is very realistic that we may only see some point solutions involving SDN technology instead of a fully SDN-enabled RAN. This survey paper identifies all important obstacles in the way and looks at the state of the art of the relevant solutions. This survey is different from the previous surveys on SDN-based RAN as it focuses on the salient problems and discusses solutions proposed within and outside the SDN literature. Our main focus is on fronthaul, backward compatibility, the supposedly disruptive nature of SDN deployment, business cases and monetization of SDN-related upgrades, latency of general purpose processors (GPP), and the additional security vulnerabilities softwarization brings along to the RAN. We have also provided a summary of the architectural developments in the SDN-based RAN landscape as not all work can be covered under the focused issues. This paper provides a comprehensive survey on the state of the art of SDN-based RAN and clearly points out the gaps in the technology.
In a recent and most comprehensive survey of SDN and virtualization research for LTE mobile networks @cite_51 , the authors have provided a general overview of SDN and virtualization technologies and their respective benefits. They have developed a taxonomy to survey the research space based on the elements of modern cellular systems, e.g., access network, core network, and backhaul. Within each class, the authors further classified the material in terms of relevant topics, such as resource virtualization, resource abstraction, mobility management, etc. They have also looked at the use cases in each class. It is the most comprehensive survey one could find in the radio access network research relevant to SDN. The thrust of the survey is complementary to the present paper. Readers who want a better understanding of the material covered in Section are recommended to read @cite_51 . On the other hand, the open challenges briefly discussed at the end of @cite_51 and the relevant work under each challenge are discussed in detail in the present paper.
{ "cite_N": [ "@cite_51" ], "mid": [ "1796714434" ], "abstract": [ "Software-defined networking (SDN) features the decoupling of the control plane and data plane, a programmable network and virtualization, which enables network infrastructure sharing and the \"softwarization\" of the network functions. Recently, many research works have tried to redesign the traditional mobile network using two of these concepts in order to deal with the challenges faced by mobile operators, such as the rapid growth of mobile traffic and new services. In this paper, we first provide an overview of SDN, network virtualization, and network function virtualization, and then describe the current LTE mobile network architecture as well as its challenges and issues. By analyzing and categorizing a wide range of the latest research works on SDN and virtualization in LTE mobile networks, we present a general architecture for SDN and virtualization in mobile networks (called SDVMN) and then propose a hierarchical taxonomy based on the different levels of the carrier network. We also present an in-depth analysis about changes related to protocol operation and architecture when adopting SDN and virtualization in mobile networks. In addition, we list specific use cases and applications that benefit from SDVMN. Last but not least, we discuss the open issues and future research directions of SDVMN." ] }
1708.05096
2750197405
For many, this is no longer a valid question and the case is considered settled, with SDN/NFV (Software Defined Networking / Network Function Virtualization) providing the inevitable innovation enablers solving many outstanding management issues regarding 5G. However, given the monumental task of softwarization of the radio access network (RAN) while 5G is just around the corner and some companies have already started unveiling their 5G equipment, the concern is very realistic that we may only see some point solutions involving SDN technology instead of a fully SDN-enabled RAN. This survey paper identifies all important obstacles in the way and looks at the state of the art of the relevant solutions. This survey is different from the previous surveys on SDN-based RAN as it focuses on the salient problems and discusses solutions proposed within and outside the SDN literature. Our main focus is on fronthaul, backward compatibility, the supposedly disruptive nature of SDN deployment, business cases and monetization of SDN-related upgrades, latency of general purpose processors (GPP), and the additional security vulnerabilities softwarization brings along to the RAN. We have also provided a summary of the architectural developments in the SDN-based RAN landscape as not all work can be covered under the focused issues. This paper provides a comprehensive survey on the state of the art of SDN-based RAN and clearly points out the gaps in the technology.
Another recent survey @cite_120 briefly surveys all technologies and applications associated with 5G. That survey also touches upon SDN but only superficially covers some research work under the theme. A more in-depth analysis of some SDN-based mobile network architectures, i.e., @cite_34 @cite_91 @cite_146 @cite_137 @cite_152 @cite_111 @cite_118 @cite_28 @cite_188 , is presented in @cite_49 in terms of the ideas presented in the proposals and their limitations. The survey in @cite_115 looks at proposals for softwarization and cloudification of cellular networks in terms of optimization and provisions for energy harvesting for a sustainable future. The gaps in the technologies are also identified. All of the above mentioned surveys, however, have a broader scope than just SDN-based mobile network architecture, and they have only looked at some SDN papers appropriate for the major themes of their survey papers.
{ "cite_N": [ "@cite_188", "@cite_118", "@cite_115", "@cite_91", "@cite_152", "@cite_28", "@cite_120", "@cite_137", "@cite_111", "@cite_146", "@cite_49", "@cite_34" ], "mid": [ "2153556974", "", "2770870731", "2104347415", "2104705987", "", "2774135771", "", "", "", "2610494282", "2095902375" ], "abstract": [ "Telecommunications networks are undergoing major changes so as to meet the requirements of the next generation of users and services, which create a need for a general revised architectural approach rather than a series of local and incremental technology updates. This is especially manifest in mobile broadband wireless access, where a major traffic increase is expected, mostly because of video transmission and cloud-based applications. The installation of a high number of very small cells is foreseen as the only practical way to achieve the demands. However, this would create a struggle on the mobile network operators because of the limited backhaul capacity, the increased energy consumption, and the explosion of signalling. In the FP7 project CROWD, Software Defined Networking (SDN) has been identified as a solution to tame extreme density of wireless networks. Following this paradigm, a novel network architecture accounting for MAC control and Mobility Management has been proposed, being the subject of this paper.", "", "Due to the tremendous growth in mobile data traffic, cellular networks are witnessing architectural evolutions. Future cellular networks are expected to be extremely dense and complex systems, supporting a high variety of end devices (e.g., smartphone, sensors, machines) with very diverse QoS requirements. Such an amount of network and end-user devices will consume a high percentage of electricity from the power grid to operate, thus increasing the carbon footprint and the operational expenditures of mobile operators. Therefore, environmental and economical sustainability have been included in the roadmap toward a proper design of the next-generation cellular system. This paper focuses on softwarization paradigm, energy harvesting technologies, and optimization tools as enablers of future cellular networks for achieving diverse system requirements, including energy saving. This paper surveys the state-of-the-art literature embedding softwarization paradigm in densely deployed radio access network (RAN). In addition, the need for energy harvesting technologies in a densified RAN is provided with the review of the state-of-the-art proposals on the interaction between softwarization and energy harvesting technology. Moreover, the role of optimization tools, such as machine learning, in future RAN with densification paradigm is stated. We have classified the available literature that balances these three pillars, namely, softwarization, energy harvesting, and optimization with densification, being a common RAN deployment trend. Open issues that require further research efforts are also included.", "In the past couple of years we've seen quite a change in the wireless industry: Handsets have become mobile computers running user-contributed applications on (potentially) open operating systems. It seems we are on a path towards a more open ecosystem; one that has been previously closed and proprietary. The biggest winners are the users, who will have more choice among competing, innovative ideas. The same cannot be said for the wireless network infrastructure, which remains closed and (mostly) proprietary, and where innovation is bogged down by a glacial standards process. Yet as users, we are surrounded by abundant wireless capacity and multiple wireless networks (WiFi and cellular), with most of the capacity off-limits to us. It seems industry has little incentive to change, preferring to hold onto control as long as possible, keeping an inefficient and closed system in place. This paper is a \"call to arms\" to the research community to help move the network forward on a path to greater openness. We envision a world in which users can move freely between any wireless infrastructure, while providing payment to infrastructure owners, encouraging continued investment. We think the best path to get there is to separate the network service from the underlying physical infrastructure, and allow rapid innovation of network services, contributed by researchers, network operators, equipment vendors and third party developers. We propose to build and deploy an open - but backward compatible - wireless network infrastructure that can be easily deployed on college campuses worldwide. Through virtualization, we allow researchers to experiment with new network services directly in their production network.", "Wireless networks have evolved from 1G to 4G networks, allowing smart devices to become important tools in daily life. The 5G network is a revolutionary technology that can change consumers' Internet use habits, as it creates a truly wireless environment. It is faster, with better quality, and is more secure. Most importantly, users can truly use network services anytime, anywhere. With increasing demand, the use of bandwidth and frequency spectrum resources is beyond expectations. This paper found that the frequency spectrum and network information have considerable relevance; thus, spectrum utilization and channel flow interactions should be simultaneously considered. We considered that software defined radio (SDR) and software defined networks (SDNs) are the best solution. We propose a cross-layer architecture combining SDR and SDN characteristics. As the simulation evaluation results suggest, the proposed architecture can effectively use the frequency spectrum and considerably enhance network performance. Based on the results, suggestions are proposed for follow-up studies on the proposed architecture.", "", "The new upcoming technology of the fifth generation wireless mobile network is advertised as lightning speed internet, everywhere, for everything, for everyone in the nearest future. There are a lot of efforts and research carrying on many aspects, e.g. millimetre wave (mmW) radio transmission, massive multiple input and multiple output (Massive-MIMO) new antenna technology, the promising technique of SDN architecture, Internet of Thing (IoT) and many more. In this brief survey, we highlight some of the most recent developments towards the 5G mobile network.", "", "", "", "The tremendous growth in communication technology is shaping a hyper-connected network where billions or connected devices are producing a huge volume of data. Cellular and mobile network is a major contributor towards this technology shift and require new architectural paradigm to provide low latency, high performance in a resource constrained environment. 5G technology deployment with fully IP-based connectivity is anticipated by 2020. However, there is no standard established for 5G technology and many efforts are being made to establish a unified 5G stander. In this context, variant technology such as Software Defined Network (SDN) and Network Function virtualization (NFV) are the best candidate. SDN dissociate control plane from data plane and network management is done on the centralized control plane. In this paper, a survey on state of the art on the 5G integration with the SDN is presented. A comprehensive review is presented for the different integrated architectures of 5G wireless network and the generalized solutions over the period 2010–2016. This comparative analysis of the existing solutions of SDN-based cellular network (5G) implementations provides an easy and concise view of the emerging trends by 2020.", "We present OpenRadio, a novel design for a programmable wireless dataplane that provides modular and declarative programming interfaces across the entire wireless stack. Our key conceptual contribution is a principled refactoring of wireless protocols into processing and decision planes. The processing plane includes directed graphs of algorithmic actions (eg. 54Mbps OFDM WiFi or special encoding for video). The decision plane contains the logic which dictates which directed graph is used for a particular packet (eg. picking between data and video graphs). The decoupling provides a declarative interface to program the platform while hiding all underlying complexity of execution. An operator only expresses decision plane rules and corresponding processing plane action graphs to assemble a protocol. The scoped interface allows us to build a dataplane that arguably provides the right tradeoff between performance and flexibility. Our current system is capable of realizing modern wireless protocols (WiFi, LTE) on off-the-shelf DSP chips while providing flexibility to modify the PHY and MAC layers to implement protocol optimizations." ] }
1708.05137
2747668150
We propose a novel video object segmentation algorithm based on pixel-level matching using Convolutional Neural Networks (CNN). Our network aims to distinguish the target area from the background on the basis of the pixel-level similarity between two object units. The proposed network represents a target object using features from different depth layers in order to take advantage of both the spatial details and the category-level semantic information. Furthermore, we propose a feature compression technique that drastically reduces the memory requirements while maintaining the capability of feature representation. Two-stage training (pre-training and fine-tuning) allows our network to handle any target object regardless of its category (even if the object's type does not belong to the pre-training data) or of variations in its appearance through a video sequence. Experiments on large datasets demonstrate the effectiveness of our model - against related methods - in terms of accuracy, speed, and stability. Finally, we introduce the transferability of our network to different domains, such as the infrared data domain.
Most recent approaches @cite_16 @cite_13 @cite_10 @cite_34 @cite_26 @cite_17 separate discriminative objects from the background by optimizing an energy function under various pixel-graph relationships. For instance, fully connected graphs were proposed in @cite_6 to construct a long-range spatio-temporal graph structure robust to challenging situations such as occlusion. In another study @cite_19 , a higher-order potential term defined over supervoxel cluster units was used to enforce the steadiness of the graph structure. More recently, non-local graph connections were effectively approximated in the bilateral space @cite_24 , which drastically improved segmentation accuracy. However, many recent methods are too computationally expensive to handle long video sequences. They are also strongly affected by cluttered backgrounds, resulting in a drifting effect. Furthermore, many challenges remain partly unsolved, such as large scale variations and dynamic appearance changes. The main reason behind these failure cases is likely a poor target appearance representation that does not encompass any semantic-level information.
{ "cite_N": [ "@cite_13", "@cite_26", "@cite_34", "@cite_6", "@cite_24", "@cite_19", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "", "2009874829", "2011953904", "2212077366", "2463175074", "589665618", "1963851726", "2117435890", "" ], "abstract": [ "", "In this paper, we propose a technique for video object segmentation using patch seams across frames. Typically, seams, which are connected paths of low energy, are utilised for retargeting, where the primary aim is to reduce the image size while preserving the salient image contents. Here, we adapt the formulation of seams for temporal label propagation. The energy function associated with the proposed video seams provides temporal linking of patches across frames, to accurately segment the object. The proposed energy function takes into account the similarity of patches along the seam, temporal consistency of motion and spatial coherency of seams. Label propagation is achieved with high fidelity in the critical boundary regions, utilising the proposed patch seams. To achieve this without additional overheads, we curtail the error propagation by formulating boundary regions as rough-sets. The proposed approach out-perform state-of-the-art supervised and unsupervised algorithms, on benchmark datasets.", "This paper proposes a probabilistic graphical model for the problem of propagating labels in video sequences, also termed the label propagation problem. Given a limited amount of hand labelled pixels, typically the start and end frames of a chunk of video, an EM based algorithm propagates labels through the rest of the frames of the video sequence. As a result, the user obtains pixelwise labelled video sequences along with the class probabilities at each pixel. Our novel algorithm provides an essential tool to reduce tedious hand labelling of video sequences, thus producing copious amounts of useable ground truth data. A novel application of this algorithm is in semi-supervised learning of discriminative classifiers for video segmentation and scene parsing. The label propagation scheme can be based on pixel-wise correspondences obtained from motion estimation, image patch based similarities as seen in epitomic models or even the more recent, semantically consistent hierarchical regions. We compare the abilities of each of these variants, both via quantitative and qualitative studies against ground truth data. We then report studies on a state of the art Random forest classifier based video segmentation scheme, trained using fully ground truth data and with data obtained from label propagation. The results of this study strongly support and encourage the use of the proposed label propagation algorithm.", "We present a novel approach to video segmentation using multiple object proposals. The problem is formulated as a minimization of a novel energy function defined over a fully connected graph of object proposals. Our model combines appearance with long-range point tracks, which is key to ensure robustness with respect to fast motion and occlusions over longer video sequences. As opposed to previous approaches based on object proposals, we do not seek the best per-frame object hypotheses to perform the segmentation. Instead, we combine multiple, potentially imperfect proposals to improve overall segmentation accuracy and ensure robustness to outliers. Overall, the basic algorithm consists of three steps. First, we generate a very large number of object proposals for each video frame using existing techniques. Next, we perform an SVM-based pruning step to retain only high quality proposals with sufficiently discriminative power. Finally, we determine the fore-and background classification by solving for the maximum a posteriori of a fully connected conditional random field, defined using our novel energy function. Experimental results on a well established dataset demonstrate that our method compares favorably to several recent state-of-the-art approaches.", "In this work, we propose a novel approach to video segmentation that operates in bilateral space. We design a new energy on the vertices of a regularly sampled spatiotemporal bilateral grid, which can be solved efficiently using a standard graph cut label assignment. Using a bilateral formulation, the energy that we minimize implicitly approximates long-range, spatio-temporal connections between pixels while still containing only a small number of variables and only local graph edges. We compare to a number of recent methods, and show that our approach achieves state-of-the-art results on multiple benchmarks in a fraction of the runtime. Furthermore, our method scales linearly with image size, allowing for interactive feedback on real-world high resolution video.", "A major challenge in video segmentation is that the foreground object may move quickly in the scene at the same time its appearance and shape evolves over time. While pairwise potentials used in graph-based algorithms help smooth labels between neighboring (super)pixels in space and time, they offer only a myopic view of consistency and can be misled by inter-frame optical flow errors. We propose a higher order supervoxel label consistency potential for semi-supervised foreground segmentation. Given an initial frame with manual annotation for the foreground object, our approach propagates the foreground region through time, leveraging bottom-up supervoxels to guide its estimates towards long-range coherent regions. We validate our approach on three challenging datasets and achieve state-of-the-art results.", "We propose an interactive video segmentation system built on the basis of occlusion and long term spatio-temporal structure cues. User supervision is incorporated in a superpixel graph clustering framework that differs crucially from prior art in that it modifies the graph according to the output of an occlusion boundary detector. Working with long temporal intervals (up to 100 frames) enables our system to significantly reduce annotation effort with respect to state of the art systems. Even though the segmentation results are less than perfect, they are obtained efficiently and can be used in weakly supervised learning from video or for video content description. We do not rely on a discriminative object appearance model and allow extracting multiple foreground objects together, saving user time if more than one object is present. Additional experiments with unsupervised clustering based on occlusion boundaries demonstrate the importance of this cue for video segmentation and thus validate our system design.", "We present an interactive system for efficiently extracting foreground objects from a video. We extend previous min-cut based image segmentation techniques to the domain of video with four new contributions. We provide a novel painting-based user interface that allows users to easily indicate the foreground object across space and time. We introduce a hierarchical mean-shift preprocess in order to minimize the number of nodes that min-cut must operate on. Within the min-cut we also define new local cost functions to augment the global costs defined in earlier work. Finally, we extend 2D alpha matting methods designed for images to work with 3D video volumes. We demonstrate that our matting approach preserves smoothness across both space and time. Our interactive video cutout system allows users to quickly extract foreground objects from video sequences for use in a variety of applications including compositing onto new backgrounds and NPR cartoon style rendering.", "" ] }
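The graph-based methods summarized in this record all reduce to minimizing an energy over a pixel (or superpixel) labeling. The sketch below shows the standard form of such an energy: per-pixel unary costs plus a Potts pairwise penalty charged on every graph edge whose endpoints disagree. The unary costs, edge set, and pairwise weight are illustrative and not taken from any of the cited methods.

```python
def segmentation_energy(labels, unary, edges, weight):
    """Energy of a binary pixel labeling: per-pixel (unary) costs plus a
    Potts pairwise term charging `weight` for each edge with differing labels."""
    e = sum(unary[i][labels[i]] for i in range(len(labels)))
    e += sum(weight for i, j in edges if labels[i] != labels[j])
    return e

# Four pixels on a line; unary[i][l] is the cost of giving pixel i label l.
unary = [(0.0, 2.0), (0.1, 1.0), (1.5, 0.2), (2.0, 0.0)]
edges = [(0, 1), (1, 2), (2, 3)]

# A spatially coherent labeling pays a single boundary penalty ...
print(segmentation_energy([0, 0, 1, 1], unary, edges, 0.5))  # 0.8
# ... while an incoherent one accumulates pairwise cost on every edge.
print(segmentation_energy([0, 1, 0, 1], unary, edges, 0.5))  # 4.0
```

Minimizing this energy exactly is what graph-cut solvers do; the cited works differ mainly in how the graph (long-range, supervoxel, or bilateral-space connections) and the potentials are constructed.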
1708.05468
2747901247
The privacy-utility tradeoff problem is formulated as determining the privacy mechanism (random mapping) that minimizes the mutual information (a metric for privacy leakage) between the private features of the original dataset and a released version. The minimization is studied with two types of constraints on the distortion between the public features and the released version of the dataset: (i) subject to a constraint on the expected value of a cost function @math applied to the distortion, and (ii) subject to bounding the complementary CDF of the distortion by a non-increasing function @math . The first scenario captures various practical cost functions for distorted released data, while the second scenario covers large deviation constraints on utility. The asymptotic optimal leakage is derived in both scenarios. For the distortion cost constraint, it is shown that for convex cost functions there is no asymptotic loss in using stationary memoryless mechanisms. For the complementary CDF bound on distortion, the asymptotic leakage is derived for general mechanisms and shown to be the integral of the single letter leakage function with respect to the Lebesgue measure defined based on the refined bound on distortion. However, it is shown that memoryless mechanisms are generally suboptimal in both cases.
An alternative approach to more general distortion constraints is considered in @cite_8 (we have changed their notation from @math -separable to @math -separable in order to avoid confusion with our notation). In @cite_8 , a multi-letter distortion measure @math is defined as @math -separable if @math for an increasing function @math . The distortion cost constraints that we consider are more general, in the sense that our notion of a cost function @math applied to the distortion measure @math covers a broader class of distortion constraints than an average bound on @math -separable distortion measures studied in @cite_8 . Specifically, the average constraint on an @math -separable distortion measure has the form @math , which is clearly a special case of our formulation that results from choosing @math and @math such that @math . Moreover, we allow for non-decreasing functions @math , which means that @math does not have to be strictly increasing. We also note that our focus is on privacy rather than source coding.
{ "cite_N": [ "@cite_8" ], "mid": [ "2789706212" ], "abstract": [ "In this work we relax the usual separability assumption made in rate-distortion literature and propose f -separable distortion measures, which are well suited to model non-linear penalties. The main insight behind f -separable distortion measures is to define an n-letter distortion measure to be an f -mean of single-letter distortions. We prove a rate-distortion coding theorem for stationary ergodic sources with f -separable distortion measures, and provide some illustrative examples of the resulting rate-distortion functions. Finally, we discuss connections between f -separable distortion measures, and the subadditive distortion measure previously proposed in literature." ] }
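To make the @math -separable construction from this record concrete, the sketch below computes a multi-letter distortion as the inverse of an increasing function applied to the per-letter mean of that function of the single-letter distortions, here with Hamming distortion. The particular choice f(t) = exp(3t) is only an illustrative nonlinear penalty and is not taken from @cite_8.

```python
import math

def f_separable_distortion(x, y, d, f, f_inv):
    """f-separable multi-letter distortion: f^{-1} of the mean of f(d(x_i, y_i))."""
    assert len(x) == len(y)
    return f_inv(sum(f(d(a, b)) for a, b in zip(x, y)) / len(x))

# Hamming single-letter distortion.
hamming = lambda a, b: 0.0 if a == b else 1.0

x = [0, 1, 1, 0, 1]
y = [0, 1, 0, 0, 1]

# f(t) = t recovers the usual separable (average) distortion.
avg = f_separable_distortion(x, y, hamming, lambda t: t, lambda t: t)

# f(t) = exp(3t) penalizes occasional large per-letter distortions more heavily.
exp_f = f_separable_distortion(x, y, hamming,
                               lambda t: math.exp(3 * t),
                               lambda s: math.log(s) / 3)

print(avg)    # 0.2
print(exp_f)  # larger than 0.2: the nonlinear penalty amplifies the one error
```

Any strictly increasing f gives a valid multi-letter measure of this kind; the convexity of the outer cost function is what drives the "no asymptotic loss for memoryless mechanisms" result in the main text.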
1708.05468
2747901247
The privacy-utility tradeoff problem is formulated as determining the privacy mechanism (random mapping) that minimizes the mutual information (a metric for privacy leakage) between the private features of the original dataset and a released version. The minimization is studied with two types of constraints on the distortion between the public features and the released version of the dataset: (i) subject to a constraint on the expected value of a cost function @math applied to the distortion, and (ii) subject to bounding the complementary CDF of the distortion by a non-increasing function @math . The first scenario captures various practical cost functions for distorted released data, while the second scenario covers large deviation constraints on utility. The asymptotic optimal leakage is derived in both scenarios. For the distortion cost constraint, it is shown that for convex cost functions there is no asymptotic loss in using stationary memoryless mechanisms. For the complementary CDF bound on distortion, the asymptotic leakage is derived for general mechanisms and shown to be the integral of the single letter leakage function with respect to the Lebesgue measure defined based on the refined bound on distortion. However, it is shown that memoryless mechanisms are generally suboptimal in both cases.
In the context of privacy, the privacy-utility tradeoff with distinct @math and @math is studied in @cite_23 and more extensively in @cite_6 , but the utility metric is restricted to identity cost functions, i.e., @math . Generalizing this to an excess distortion constraint was considered in @cite_20 . In @cite_20 , we also differentiated between the explicit availability and unavailability of the private data @math to the privacy mechanism. Information-theoretic approaches to privacy that are agnostic to the length of the dataset are considered in @cite_25 @cite_16 @cite_19 .
{ "cite_N": [ "@cite_6", "@cite_19", "@cite_23", "@cite_16", "@cite_25", "@cite_20" ], "mid": [ "2077145963", "2963634943", "2071342505", "1622686296", "2088517895", "2587813977" ], "abstract": [ "Ensuring the usefulness of electronic data sources while providing necessary privacy guarantees is an important unsolved problem. This problem drives the need for an analytical framework that can quantify the privacy of personally identifiable information while still providing a quantifiable benefit (utility) to multiple legitimate information consumers. This paper presents an information-theoretic framework that promises an analytical model guaranteeing tight bounds of how much utility is possible for a given level of privacy and vice-versa. Specific contributions include: 1) stochastic data models for both categorical and numerical data; 2) utility-privacy tradeoff regions and the encoding (sanization) schemes achieving them for both classes and their practical relevance; and 3) modeling of prior knowledge at the user and or data source and optimal encoding schemes for both cases.", "A privacy-constrained information extraction problem is considered where for a pair of correlated discrete random variables (X,Y) governed by a given joint distribution, an agent observes Y and wants to convey to a potentially public user as much information about Y as possible while limiting the amount of information revealed about X. To this end, the so-called rate-privacy function is investigated to quantify the maximal amount of information (measured in terms of mutual information) that can be extracted from Y under a privacy constraint between X and the extracted information, where privacy is measured using either mutual information or maximal correlation. Properties of the rate-privacy function are analyzed and its information-theoretic and estimation-theoretic interpretations are presented for both the mutual information and maximal correlation privacy measures. It is also shown that the rate-privacy function admits a closed-form expression for a large family of joint distributions of (X,Y). Finally, the rate-privacy function under the mutual information privacy measure is considered for the case where (X,Y) has a joint probability density function by studying the problem where the extracted information is a uniform quantization of Y corrupted by additive Gaussian noise. The asymptotic behavior of the rate-privacy function is studied as the quantization resolution grows without bound and it is observed that not all of the properties of the rate-privacy function carry over from the discrete to the continuous case.", "A new source coding problem is considered for a one-way communication system with correlated source outputs XY . One of the source outputs, i.e., X , must be transmitted to the receiver within a prescribed distortion tolerance as in ordinary source coding. On the other hand, the other source output, i.e., Y , has to be kept as secret as possible from the receiver or wiretappers. For this case the equivocation-distortion function (d) and the rate-distortion-equivocation function R (d,e) are defined and evaluated. The former is the maximum achievable equivocation of Y under the distortion tolerance d for X , and the latter is the minimum rate necessary to attain both the equivocation tolerance e for Y and the distortion tolerance d for X . Some examples are included.", "We investigate the problem of intentionally disclosing information about a set of measurement points X (useful information), while guaranteeing that little or no information is revealed about a private variable S (private information). Given that S and X are drawn from a finite set with joint distribution pS,X, we prove that a non-trivial amount of useful information can be disclosed while not disclosing any private information if and only if the smallest principal inertia component of the joint distribution of S and X is 0. This fundamental result characterizes when useful information can be privately disclosed for any privacy metric based on statistical dependence. We derive sharp bounds for the tradeoff between disclosure of useful and private information, and provide explicit constructions of privacy-assuring mappings that achieve these bounds.", "We propose a general statistical inference framework to capture the privacy threat incurred by a user that releases data to a passive but curious adversary, given utility constraints. We show that applying this general framework to the setting where the adversary uses the self-information cost function naturally leads to a non-asymptotic information-theoretic approach for characterizing the best achievable privacy subject to utility constraints. Based on these results we introduce two privacy metrics, namely average information leakage and maximum information leakage. We prove that under both metrics the resulting design problem of finding the optimal mapping from the user's data to a privacy-preserving output can be cast as a modified rate-distortion problem which, in turn, can be formulated as a convex program. Finally, we compare our framework with differential privacy.", "The tradeoff between privacy and utility is studied for small datasets using tools from fixed error asymptotics in information theory. The problem is formulated as determining the privacy mechanism (random mapping) which minimizes the mutual information (a metric for privacy leakage) between the private features of the original dataset and a released version, subject to a distortion constraint between the public features and the released version. An excess probability bound is used to constrain the distortion, thus limiting the random variation in distortion due to the finite length. Bounds are derived for the following variants of the problem: (1) whether the mechanism is memoryless (local privacy) or not (global privacy), (2) whether the privacy mechanism has direct access to the private data or not. It is shown that these settings yield different performance in the first order: for global privacy, the first-order leakage decreases with the excess probability, whereas for local privacy it remains constant. The derived bounds also provide tight performance results up to second order for local privacy, as well as bounds on the second order term for global privacy." ] }
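For intuition on the leakage-distortion tradeoff these works study, the toy computation below evaluates the mutual information leaked when a uniform binary source is released through a binary symmetric (memoryless) mechanism whose crossover probability equals the expected Hamming distortion. The closed form I(X;Y) = 1 - h2(p) for this channel is standard; the specific distortion values swept are illustrative only.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_leakage(p):
    """Mutual information I(X; Y) in bits when a uniform binary source X is
    released through a binary symmetric mechanism with crossover probability p.
    Here p equals the expected Hamming distortion of the released data."""
    return 1.0 - h2(p)

# Leakage decreases as the allowed distortion grows; at p = 0.5 the release
# is independent of the source and nothing leaks.
for p in (0.0, 0.1, 0.25, 0.5):
    print(p, round(bsc_leakage(p), 4))
```

This is the simplest instance of the tradeoff: relaxing the distortion constraint buys strictly lower leakage until the release carries no information at all.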
1708.05468
2747901247
The privacy-utility tradeoff problem is formulated as determining the privacy mechanism (random mapping) that minimizes the mutual information (a metric for privacy leakage) between the private features of the original dataset and a released version. The minimization is studied with two types of constraints on the distortion between the public features and the released version of the dataset: (i) subject to a constraint on the expected value of a cost function @math applied to the distortion, and (ii) subject to bounding the complementary CDF of the distortion by a non-increasing function @math . The first scenario captures various practical cost functions for distorted released data, while the second scenario covers large deviation constraints on utility. The asymptotic optimal leakage is derived in both scenarios. For the distortion cost constraint, it is shown that for convex cost functions there is no asymptotic loss in using stationary memoryless mechanisms. For the complementary CDF bound on distortion, the asymptotic leakage is derived for general mechanisms and shown to be the integral of the single letter leakage function with respect to the Lebesgue measure defined based on the refined bound on distortion. However, it is shown that memoryless mechanisms are generally suboptimal in both cases.
In @cite_20 , we also allow the mechanisms to be either memoryless (also referred to as local) or general. This distinction has also been considered in the context of differential privacy (DP) (see, for example, @cite_21 @cite_24 @cite_18 @cite_4 @cite_10 ). In the information-theoretic context, it is useful to understand how memoryless mechanisms behave under the more general distortion constraints considered here. Furthermore, even less is known about how general mechanisms behave, and that is what this paper aims to address.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_21", "@cite_24", "@cite_10", "@cite_20" ], "mid": [ "2245160765", "2551592225", "2507229079", "2013823004", "", "2587813977" ], "abstract": [ "Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask, What concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. Our goal is a broad understanding of the resources required for private learning in terms of samples, computation time, and interaction. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (nonprivate) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private probabilistically approximately correct learner for the class of parity functions. This result dispels the similarity between learning with noise and private learning (both must be robust to small changes in inputs), since parity is thought to be very hard to learn given random classification noise. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model. Therefore, for local private learning algorithms, the similarity to learning with noise is stronger: local learning is equivalent to SQ learning, and SQ algorithms include most known noise-tolerant learning algorithms. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms. Because of the equivalence to SQ learning, this result also separates adaptive and nonadaptive SQ learning.", "Local differential privacy has recently surfaced as a strong measure of privacy in contexts where personal information remains private even from data analysts. Working in a setting where both the data providers and data analysts want to maximize the utility of statistical analyses performed on the released data, we study the fundamental trade-off between local differential privacy and utility. This trade-off is formulated as a constrained optimization problem: maximize utility subject to local differential privacy constraints. We introduce a combinatorial family of extremal privatization mechanisms, which we call staircase mechanisms, and show that it contains the optimal privatization mechanisms for a broad class of information theoretic utilities such as mutual information and f-divergences. We further prove that for any utility function and any privacy level, solving the privacy-utility maximization problem is equivalent to solving a finite-dimensional linear program, the outcome of which is the optimal staircase mechanism. However, solving this linear program can be computationally expensive since it has a number of variables that is exponential in the size of the alphabet the data lives in. To account for this, we show that two simple privatization mechanisms, the binary and randomized response mechanisms, are universally optimal in the low and high privacy regimes, and well approximate the intermediate regime.", "We examine a tradeoff between privacy and utility in terms of local differential privacy (L-DP) and Hamming distortion for certain classes of finite-alphabet sources under Hamming distortion. We define two classes: permutation-invariant, and ordered statistics (whose probability mass functions are monotonic). We obtain the optimal L-DP mechanism for permutation-invariant sources and derive upper and lower bounds on the achievable local differential privacy for ordered statistics for a range of target distortion values.", "Abstract For various reasons individuals in a sample survey may prefer not to confide to the interviewer the correct answers to certain questions. In such cases the individuals may elect not to reply at all or to reply with incorrect answers. The resulting evasive answer bias is ordinarily difficult to assess. In this paper it is argued that such bias is potentially removable through allowing the interviewee to maintain privacy through the device of randomizing his response. A randomized response method for estimating a population proportion is presented as an example. Unbiased maximum likelihood estimates are obtained and their mean square errors are compared with the mean square errors of conventional estimates under various assumptions about the underlying population.", "", "The tradeoff between privacy and utility is studied for small datasets using tools from fixed error asymptotics in information theory. The problem is formulated as determining the privacy mechanism (random mapping) which minimizes the mutual information (a metric for privacy leakage) between the private features of the original dataset and a released version, subject to a distortion constraint between the public features and the released version. An excess probability bound is used to constrain the distortion, thus limiting the random variation in distortion due to the finite length. Bounds are derived for the following variants of the problem: (1) whether the mechanism is memoryless (local privacy) or not (global privacy), (2) whether the privacy mechanism has direct access to the private data or not. It is shown that these settings yield different performance in the first order: for global privacy, the first-order leakage decreases with the excess probability, whereas for local privacy it remains constant. The derived bounds also provide tight performance results up to second order for local privacy, as well as bounds on the second order term for global privacy." ] }
An excess probability bound is used to constrain the distortion, thus limiting the random variation in distortion due to the finite length. Bounds are derived for the following variants of the problem: (1) whether the mechanism is memoryless (local privacy) or not (global privacy), (2) whether the privacy mechanism has direct access to the private data or not. It is shown that these settings yield different performance in the first order: for global privacy, the first-order leakage decreases with the excess probability, whereas for local privacy it remains constant. The derived bounds also provide tight performance results up to second order for local privacy, as well as bounds on the second order term for global privacy." ] }
1708.05349
2746073525
We present a simple nearest-neighbor (NN) approach that synthesizes high-frequency photorealistic images from an "incomplete" signal such as a low-resolution image, a surface normal map, or edges. Current state-of-the-art deep generative models designed for such conditional image synthesis lack two important things: (1) they are unable to generate a large set of diverse outputs, due to the mode collapse problem, and (2) they are not interpretable, making it difficult to control the synthesized output. We demonstrate that NN approaches potentially address such limitations, but suffer in accuracy on small datasets. We design a simple pipeline that combines the best of both worlds: the first stage uses a convolutional neural network (CNN) to map the input to an (overly smoothed) image, and the second stage uses a pixel-wise nearest-neighbor method to map the smoothed output to multiple high-quality, high-frequency outputs in a controllable manner. We demonstrate our approach for various input modalities, and for various domains ranging from human faces to cats-and-dogs to shoes and handbags.
Synthesis with CNNs: Convolutional neural networks (CNNs) have enjoyed great success for various discriminative pixel-level tasks such as segmentation @cite_9 @cite_45 , depth and surface normal estimation @cite_25 @cite_9 @cite_1 @cite_37 , semantic boundary detection @cite_9 @cite_34 , etc. Such networks are usually trained using standard losses (such as softmax or @math regression) on image-label data pairs. However, such networks do not typically perform well on the inverse problem of synthesizing an image from an (incomplete) label, though exceptions do exist @cite_33 . A major innovation was the introduction of adversarially trained generative networks (GANs) @cite_10 . This formulation has been hugely influential in computer vision, having been applied to various image generation tasks that condition on a low-resolution image @cite_44 @cite_43 , a segmentation mask @cite_11 , a surface normal map @cite_32 , and other inputs @cite_4 @cite_6 @cite_31 @cite_23 @cite_8 @cite_14 . Most related to us is @cite_11 , who propose a general loss function for adversarial learning and apply it to a diverse set of image synthesis tasks.
{ "cite_N": [ "@cite_37", "@cite_14", "@cite_4", "@cite_33", "@cite_8", "@cite_9", "@cite_1", "@cite_32", "@cite_6", "@cite_44", "@cite_43", "@cite_45", "@cite_23", "@cite_31", "@cite_34", "@cite_10", "@cite_25", "@cite_11" ], "mid": [ "2951713345", "2962793481", "2434741482", "2741768657", "2964024144", "2593915460", "", "2298992465", "2952010110", "2951523806", "2523714292", "", "2949551726", "2173520492", "", "1710476689", "", "2552465644" ], "abstract": [ "In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks.", "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. 
Quantitative comparisons against several prior methods demonstrate the superiority of our approach.", "This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.", "We present an approach to synthesizing photographic images conditioned on semantic layouts. Given a semantic label map, our approach produces an image with photographic appearance that conforms to the input layout. The approach thus functions as a rendering engine that takes a two-dimensional semantic specification of the scene and produces a corresponding photographic image. Unlike recent and contemporaneous work, our approach does not rely on adversarial training. We show that photographic images can be synthesized from semantic layouts by a single feedforward network with appropriate structure, trained end-to-end with a direct regression objective. The presented approach scales seamlessly to high resolutions; we demonstrate this by synthesizing photographic images at 2-megapixel resolution, the full resolution of our training data. 
Extensive perceptual experiments on datasets of outdoor and indoor scenes demonstrate that images synthesized by the presented approach are considerably more realistic than alternative approaches. The results are shown in the supplementary video at this https URL", "Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing textto- image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256.256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. It is able to rectify defects in Stage-I results and add compelling details with the refinement process. To improve the diversity of the synthesized images and stabilize the training of the conditional-GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-arts on benchmark datasets demonstrate that the proposed method achieves significant improvements on generating photo-realistic images conditioned on text descriptions.", "We explore design principles for general pixel-level prediction problems, from low-level edge detection to mid-level surface normal estimation to high-level semantic segmentation. 
Convolutional predictors, such as the fully-convolutional network (FCN), have achieved remarkable success by exploiting the spatial redundancy of neighboring pixels through convolutional processing. Though computationally efficient, we point out that such approaches are not statistically efficient during learning precisely because spatial redundancy limits the information learned from neighboring pixels. We demonstrate that stratified sampling of pixels allows one to (1) add diversity during batch updates, speeding up learning; (2) explore complex nonlinear predictors, improving accuracy; and (3) efficiently train state-of-the-art models tabula rasa (i.e., \"from scratch\") for diverse pixel-labeling tasks. Our single architecture produces state-of-the-art results for semantic segmentation on PASCAL-Context dataset, surface normal estimation on NYUDv2 depth dataset, and edge detection on BSDS.", "", "Current generative frameworks use end-to-end learning and generate images by sampling from uniform noise distribution. However, these approaches ignore the most basic principle of image formation: images are product of: (a) Structure: the underlying 3D model; (b) Style: the texture mapped onto structure. In this paper, we factorize the image generation process and propose Style and Structure Generative Adversarial Network ( ( S ^2 )-GAN). Our ( S ^2 )-GAN has two components: the Structure-GAN generates a surface normal map; the Style-GAN takes the surface normal map as input and generates the 2D image. Apart from a real vs. generated loss function, we use an additional loss with computed surface normals from generated images. The two GANs are first trained independently, and then merged together via joint learning. 
We show our ( S ^2 )-GAN model is interpretable, generates more realistic images and can be used to learn unsupervised RGBD representations.", "In this paper, we propose a novel generative model named Stacked Generative Adversarial Networks (SGAN), which is trained to invert the hierarchical representations of a bottom-up discriminative network. Our model consists of a top-down stack of GANs, each learned to generate lower-level representations conditioned on higher-level representations. A representation discriminator is introduced at each feature hierarchy to encourage the representation manifold of the generator to align with that of the bottom-up discriminative network, leveraging the powerful discriminative representations to guide the generative model. In addition, we introduce a conditional loss that encourages the use of conditional information from the layer above, and a novel entropy loss that maximizes a variational lower bound on the conditional entropy of generator outputs. We first train each stack independently, and then train the whole model end-to-end. Unlike the original GAN that uses a single noise vector to represent all the variations, our SGAN decomposes variations into multiple levels and gradually resolves uncertainties in the top-down generative process. Based on visual inspection, Inception scores and visual Turing test, we demonstrate that SGAN is able to generate images of much higher quality than GANs without stacking.", "In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach (). Samples drawn from our model are of significantly higher quality than alternate approaches. 
In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40% of the time, compared to 10% for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset.", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. 
The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "", "We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods.", "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. 
Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.", "", "For many AI projects, deep learning techniques are increasingly being used as the building blocks for innovative solutions ranging from image classification to object detection, image segmentation, image similarity, and text analytics (e.g., sentiment analysis, key phrase extraction). GANs, first introduced by (2014), are emerging as a powerful new approach toward teaching computers how to do complex tasks through a generative process. As noted by Yann LeCun (at http: bit.ly LeCunGANs ), GANs are truly the “coolest idea in machine learning in the last 20 years.”", "", "We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either." ] }
1708.05349
2746073525
We present a simple nearest-neighbor (NN) approach that synthesizes high-frequency photorealistic images from an "incomplete" signal such as a low-resolution image, a surface normal map, or edges. Current state-of-the-art deep generative models designed for such conditional image synthesis lack two important things: (1) they are unable to generate a large set of diverse outputs, due to the mode collapse problem, and (2) they are not interpretable, making it difficult to control the synthesized output. We demonstrate that NN approaches potentially address such limitations, but suffer in accuracy on small datasets. We design a simple pipeline that combines the best of both worlds: the first stage uses a convolutional neural network (CNN) to map the input to an (overly smoothed) image, and the second stage uses a pixel-wise nearest-neighbor method to map the smoothed output to multiple high-quality, high-frequency outputs in a controllable manner. We demonstrate our approach for various input modalities, and for various domains ranging from human faces to cats-and-dogs to shoes and handbags.
Interpretability and user-control: Interpreting and explaining the outputs of generative deep networks is an open problem. As a community, we do not have a clear understanding of what, where, and how outputs are generated. Our work is fundamentally based on retrieving information via nearest neighbors, which explicitly reveals how each pixel-level output is generated (by in turn revealing where it was copied from). This makes our synthesized outputs quite interpretable. One important consequence is the ability to intuitively edit and control the process of synthesis. @cite_38 provide a user with controls for editing an image, such as its color and outline. But instead of using a predefined set of editing operations, we allow a user an arbitrarily fine level of control through on-the-fly editing of the exemplar set (e.g., "resynthesize an image using the eye from this image and the nose from that one").
{ "cite_N": [ "@cite_38" ], "mid": [ "2951021768" ], "abstract": [ "Realistic image manipulation is challenging because it requires modifying the image appearance in a user-controlled way, while preserving the realism of the result. Unless the user has considerable artistic skill, it is easy to \"fall off\" the manifold of natural images while editing. In this paper, we propose to learn the natural image manifold directly from data using a generative adversarial neural network. We then define a class of image editing operations, and constrain their output to lie on that learned manifold at all times. The model automatically adjusts the output keeping all edits as realistic as possible. All our manipulations are expressed in terms of constrained optimization and are applied in near-real time. We evaluate our algorithm on the task of realistic photo manipulation of shape and color. The presented method can further be used for changing one image to look like the other, as well as generating novel imagery from scratch based on user's scribbles." ] }
1708.05349
2746073525
We present a simple nearest-neighbor (NN) approach that synthesizes high-frequency photorealistic images from an "incomplete" signal such as a low-resolution image, a surface normal map, or edges. Current state-of-the-art deep generative models designed for such conditional image synthesis lack two important things: (1) they are unable to generate a large set of diverse outputs, due to the mode collapse problem, and (2) they are not interpretable, making it difficult to control the synthesized output. We demonstrate that NN approaches potentially address such limitations, but suffer in accuracy on small datasets. We design a simple pipeline that combines the best of both worlds: the first stage uses a convolutional neural network (CNN) to map the input to an (overly smoothed) image, and the second stage uses a pixel-wise nearest-neighbor method to map the smoothed output to multiple high-quality, high-frequency outputs in a controllable manner. We demonstrate our approach for various input modalities, and for various domains ranging from human faces to cats-and-dogs to shoes and handbags.
Correspondence: An important byproduct of pixelwise NN is the generation of pixelwise correspondences between the synthesized output and the training examples. Establishing such pixel-level correspondence has been one of the core challenges in computer vision @cite_29 @cite_17 @cite_51 @cite_20 @cite_50 @cite_30 @cite_12 . @cite_18 use SIFT flow @cite_51 to hallucinate details for image super-resolution. @cite_12 propose a CNN to predict appearance flow that can be used to transfer information from input views to synthesize a new view. @cite_17 generate 3D reconstructions by training a CNN to learn correspondence between object instances. Our work follows from the crucial observation of @cite_20 , who suggest that features from pre-trained convnets can also be used for pixel-level correspondences. In this work, we make an additional empirical observation: hypercolumn features trained for semantic segmentation learn nuances and details better than those trained for image classification. This finding helped us establish semantic correspondences between pixels in the query and training images, and enabled us to extract high-frequency information from the training examples to synthesize a new image from a given input.
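The observation that pre-trained convnet features support pixel-level correspondence reduces, at its core, to a nearest-neighbor search in feature space. A minimal numpy sketch under simplifying assumptions (exhaustive search against a single training image, an invented function name; a real pipeline would use multi-scale hypercolumn features and approximate search):

```python
import numpy as np

def pixel_correspondence(feats_query, feats_train):
    """For each pixel in the query feature map, find the most similar
    pixel in a training feature map by cosine similarity.

    feats_query: (H, W, C) hypercolumn-style features of the query.
    feats_train: (H, W, C) features of one training image.
    Returns an (H, W, 2) array of matched (row, col) coordinates.
    """
    H, W, C = feats_query.shape
    q = feats_query.reshape(-1, C)
    t = feats_train.reshape(-1, C)
    # Normalize so a dot product equals cosine similarity.
    q = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-8)
    t = t / (np.linalg.norm(t, axis=1, keepdims=True) + 1e-8)
    sims = q @ t.T              # (H*W, H*W) similarity matrix
    best = sims.argmax(axis=1)  # flat index of the best match per query pixel
    coords = np.stack([best // W, best % W], axis=1)
    return coords.reshape(H, W, 2)
```

The returned coordinate map is exactly the provenance information that makes pixel-wise NN synthesis interpretable: every output pixel can be traced back to the training pixel it was matched against.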
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_29", "@cite_50", "@cite_51", "@cite_12", "@cite_20", "@cite_17" ], "mid": [ "2474531669", "", "2435623039", "2179888134", "", "", "2950124505", "2952695679" ], "abstract": [ "Discriminative deep learning approaches have shown impressive results for problems where human-labeled ground truth is plentiful, but what about tasks where labels are difficult or impossible to obtain? This paper tackles one such problem: establishing dense visual correspondence across different object instances. For this task, although we do not know what the ground-truth is, we know it should be consistent across instances of that category. We exploit this consistency as a supervisory signal to train a convolutional neural network to predict cross-instance correspondences between pairs of images depicting objects of the same category. For each pair of training images we find an appropriate 3D CAD model and render two synthetic views to link in with the pair, establishing a correspondence flow 4-cycle. We use ground-truth synthetic-to-synthetic correspondences, provided by the rendering engine, to train a ConvNet to predict synthetic-to-real, real-to-real and realto-synthetic correspondences that are cycle-consistent with the ground-truth. At test time, no CAD models are required. We demonstrate that our end-to-end trained ConvNet supervised by cycle-consistency outperforms stateof-the-art pairwise matching methods in correspondencerelated tasks.", "", "A computer-implemented method for training a convolutional neural network (CNN) is presented. 
The method includes extracting coordinates of corresponding points in the first and second locations, identifying positive points in the first and second locations, identifying negative points in the first and second locations, training features that correspond to positive points of the first and second locations to move closer to each other, and training features that correspond to negative points in the first and second locations to move away from each other.", "We propose a deep learning approach for finding dense correspondences between 3D scans of people. Our method requires only partial geometric information in the form of two depth maps or partial reconstructed surfaces, works for humans in arbitrary poses and wearing any clothing, does not require the two people to be scanned from similar viewpoints, and runs in real time. We use a deep convolutional neural network to train a feature descriptor on depth map pixels, but crucially, rather than training the network to solve the shape correspondence problem directly, we train it to solve a body region classification problem, modified to increase the smoothness of the learned descriptors near region boundaries. This approach ensures that nearby points on the human body are nearby in feature space, and vice versa, rendering the feature descriptor suitable for computing dense correspondences between the scans. We validate our method on real and synthetic data for both clothed and unclothed humans, and show that our correspondences are more robust than is possible with state-of-the-art unsupervised methods, and more accurate than those found using methods that require full watertight 3D geometry.", "", "", "Convolutional neural nets (convnets) trained from massive labeled datasets have substantially improved the state-of-the-art in image classification and object detection. However, visual understanding requires establishing correspondence on a finer level than object category. 
Given their large pooling regions and training from whole-image labels, it is not clear that convnets derive their success from an accurate correspondence model which could be used for precise localization. In this paper, we study the effectiveness of convnet activation features for tasks requiring correspondence. We present evidence that convnet features localize at a much finer scale than their receptive field sizes, that they can be used to perform intraclass alignment as well as conventional hand-engineered features, and that they outperform conventional features in keypoint prediction on objects from PASCAL VOC 2011.", "We present an approach to matching images of objects in fine-grained datasets without using part annotations, with an application to the challenging problem of weakly supervised single-view reconstruction. This is in contrast to prior works that require part annotations, since matching objects across class and pose variations is challenging with appearance features alone. We overcome this challenge through a novel deep learning architecture, WarpNet, that aligns an object in one image with a different object in another. We exploit the structure of the fine-grained dataset to create artificial data for training this network in an unsupervised-discriminative learning approach. The output of the network acts as a spatial prior that allows generalization at test time to match real images across variations in appearance, viewpoint and articulation. On the CUB-200-2011 dataset of bird categories, we improve the AP over an appearance-only network by 13.6%. We further demonstrate that our WarpNet matches, together with the structure of fine-grained datasets, allow single-view reconstructions with quality comparable to using annotated point correspondences." ] }
1708.05349
2746073525
We present a simple nearest-neighbor (NN) approach that synthesizes high-frequency photorealistic images from an "incomplete" signal such as a low-resolution image, a surface normal map, or edges. Current state-of-the-art deep generative models designed for such conditional image synthesis lack two important things: (1) they are unable to generate a large set of diverse outputs, due to the mode collapse problem. (2) they are not interpretable, making it difficult to control the synthesized output. We demonstrate that NN approaches potentially address such limitations, but suffer in accuracy on small datasets. We design a simple pipeline that combines the best of both worlds: the first stage uses a convolutional neural network (CNN) to map the input to an (overly-smoothed) image, and the second stage uses a pixel-wise nearest neighbor method to map the smoothed output to multiple high-quality, high-frequency outputs in a controllable manner. We demonstrate our approach for various input modalities, and for various domains ranging from human faces to cats-and-dogs to shoes and handbags.
Nonparametrics: Our work closely follows data-driven approaches that make use of nearest neighbors @cite_7 @cite_49 @cite_40 @cite_27 @cite_36 @cite_19 . Hays and Efros @cite_49 match a query image to 2 million training images for various tasks such as image completion. We make use of dramatically smaller training sets by allowing for compositional matches. @cite_48 propose a two-step pipeline for face hallucination where global constraints capture overall structure, and local constraints produce photorealistic local features. While they focus on the task of facial super-resolution, we address a variety of synthesis applications. Finally, our compositional approach is inspired by Boiman and Irani @cite_39 @cite_21 , who reconstruct a query image via compositions of training examples.
{ "cite_N": [ "@cite_7", "@cite_36", "@cite_48", "@cite_21", "@cite_39", "@cite_19", "@cite_27", "@cite_40", "@cite_49" ], "mid": [ "", "1966641700", "2003749430", "2161017465", "2150145411", "2035652042", "2123576187", "", "2171011251" ], "abstract": [ "", "In this paper, we investigate whether it is possible to develop a measure that quantifies the naturalness of human motion (as defined by a large database). Such a measure might prove useful in verifying that a motion editing operation had not destroyed the naturalness of a motion capture clip or that a synthetic motion transition was within the space of those seen in natural human motion. We explore the performance of mixture of Gaussians (MoG), hidden Markov models (HMM), and switching linear dynamic systems (SLDS) on this problem. We use each of these statistical models alone and as part of an ensemble of smaller statistical models. We also implement a Naive Bayes (NB) model for a baseline comparison. We test these techniques on motion capture data held out from a database, keyframed motions, edited motions, motions with noise added, and synthetic motion transitions. We present the results as receiver operating characteristic (ROC) curves and compare the results to the judgments made by subjects in a user study.", "In this paper, we study face hallucination, or synthesizing a high-resolution face image from an input low-resolution image, with the help of a large collection of other high-resolution face images. Our theoretical contribution is a two-step statistical modeling approach that integrates both a global parametric model and a local nonparametric model. At the first step, we derive a global linear model to learn the relationship between the high-resolution face images and their smoothed and down-sampled lower resolution ones. 
At the second step, we model the residue between an original high-resolution image and the reconstructed high-resolution image after applying the learned linear model by a patch-based non-parametric Markov network to capture the high-frequency content. By integrating both global and local models, we can generate photorealistic face images. A practical contribution is a robust warping algorithm to align the low-resolution face images to obtain good hallucination results. The effectiveness of our approach is demonstrated by extensive experiments generating high-quality hallucinated face images from low-resolution input with no manual alignment.", "We address the problem of detecting irregularities in visual data, e.g., detecting suspicious behaviors in video sequences, or identifying salient patterns in images. The term \"irregular\" depends on the context in which the \"regular\" or \"valid\" are defined. Yet, it is not realistic to expect explicit definition of all possible valid configurations for a given context. We pose the problem of determining the validity of visual data as a process of constructing a puzzle: We try to compose a new observed image region or a new video segment (\"the query\") using chunks of data (\"pieces of puzzle\") extracted from previous visual examples (\"the database \"). Regions in the observed data which can be composed using large contiguous chunks of data from the database are considered very likely, whereas regions in the observed data which cannot be composed from the database (or can be composed, but only using small fragmented pieces) are regarded as unlikely suspicious. The problem is posed as an inference process in a probabilistic graphical model. We show applications of this approach to identifying saliency in images and video, and for suspicious behavior recognition.", "We propose a new approach for measuring similarity between two signals, which is applicable to many machine learning tasks, and to many signal types. 
We say that a signal S1 is \"similar\" to a signal S2 if it is \"easy\" to compose S1 from few large contiguous chunks of S2. Obviously, if we use small enough pieces, then any signal can be composed of any other. Therefore, the larger those pieces are, the more similar S1 is to S2. This induces a local similarity score at every point in the signal, based on the size of its supported surrounding region. These local scores can in turn be accumulated in a principled information-theoretic way into a global similarity score of the entire S1 to S2. \"Similarity by Composition\" can be applied between pairs of signals, between groups of signals, and also between different portions of the same signal. It can therefore be employed in a wide variety of machine learning problems (clustering, classification, retrieval, segmentation, attention, saliency, labelling, etc.), and can be applied to a wide range of signal types (images, video, audio, biological data, etc.) We show a few such examples.", "The goal of this work is to find visually similar images even if they appear quite different at the raw pixel level. This task is particularly important for matching images across visual domains, such as photos taken over different seasons or lighting conditions, paintings, hand-drawn sketches, etc. We propose a surprisingly simple method that estimates the relative importance of different features in a query image based on the notion of \"data-driven uniqueness\". We employ standard tools from discriminative object detection in a novel way, yielding a generic approach that does not depend on a particular image representation or a specific visual domain. Our approach shows good performance on a number of difficult cross-domain visual tasks e.g., matching paintings or sketches to real photographs. The method also allows us to demonstrate novel applications such as Internet re-photography, and painting2gps. 
While at present the technique is too computationally intensive to be practical for interactive image retrieval, we hope that some of the ideas will eventually become applicable to that domain as well.", "Computer-generated (CG) images have achieved high levels of realism. This realism, however, comes at the cost of long and expensive manual modeling, and often humans can still distinguish between CG and real images. We introduce a new data-driven approach for rendering realistic imagery that uses a large collection of photographs gathered from online repositories. Given a CG image, we retrieve a small number of real images with similar global structure. We identify corresponding regions between the CG and real images using a mean-shift cosegmentation algorithm. The user can then automatically transfer color, tone, and texture from matching regions to the CG image. Our system only uses image processing operations and does not require a 3D model of the scene, making it fast and easy to integrate into digital content creation workflows. Results of a user study show that our hybrid images appear more realistic than the originals.", "", "What can you do with a million images? In this paper we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data-driven, requiring no annotations or labelling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of results for each input image and we allow users to select among them. 
We demonstrate the superiority of our algorithm over existing image completion approaches." ] }
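The two-stage pipeline summarized in the abstract above (a CNN producing an overly-smoothed image, followed by a pixel-wise nearest-neighbor lookup into training exemplars) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 3x3 context patch, the brute-force search over all exemplar patches, and single-channel images are simplifying assumptions.

```python
import numpy as np

def pixelwise_nn_compose(smoothed, exemplars, patch=3):
    """For each pixel of `smoothed`, find the exemplar patch whose
    surrounding context is closest (in L2 distance) and copy that
    patch's centre pixel into the output.

    smoothed : (H, W) float array -- the overly-smoothed CNN output.
    exemplars: list of (H, W) float arrays -- training images to copy from.
    """
    pad = patch // 2
    H, W = smoothed.shape
    out = np.empty_like(smoothed)
    # Pre-extract every exemplar patch together with its centre value.
    patches, centres = [], []
    for ex in exemplars:
        exp = np.pad(ex, pad, mode="edge")
        for i in range(H):
            for j in range(W):
                patches.append(exp[i:i + patch, j:j + patch].ravel())
                centres.append(ex[i, j])
    patches = np.stack(patches)      # (N, patch*patch)
    centres = np.asarray(centres)    # (N,)
    sp = np.pad(smoothed, pad, mode="edge")
    for i in range(H):
        for j in range(W):
            q = sp[i:i + patch, j:j + patch].ravel()
            k = np.argmin(((patches - q) ** 2).sum(axis=1))
            out[i, j] = centres[k]
    return out
```

Because each output pixel is copied from a real training image, swapping the exemplar set changes the synthesized output in a directly interpretable way, which is the controllability argument made above.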
1708.05122
2747206248
As AI continues to advance, human-AI teams are inevitable. However, progress in AI is routinely measured in isolation, without a human in the loop. It is crucial to benchmark progress in AI, not just in isolation, but also in terms of how it translates to helping humans perform certain tasks, i.e., the performance of human-AI teams. In this work, we design a cooperative game - GuessWhich - to measure human-AI team performance in the specific context of the AI being a visual conversational agent. GuessWhich involves live interaction between the human and the AI. The AI, which we call ALICE, is provided an image which is unseen by the human. Following a brief description of the image, the human questions ALICE about this secret image to identify it from a fixed pool of images. We measure performance of the human-ALICE team by the number of guesses it takes the human to correctly identify the secret image after a fixed number of dialog rounds with ALICE. We compare performance of the human-ALICE teams for two versions of ALICE. Our human studies suggest a counterintuitive trend - that while AI literature shows that one version outperforms the other when paired with an AI questioner bot, we find that this improvement in AI-AI performance does not translate to improved human-AI performance. This suggests a mismatch between benchmarking of AI in isolation and in the context of human-AI teams.
Visual Conversational Agents. Our AI agents are visual conversational models, which have recently emerged as a popular research area in visually-grounded language modeling @cite_25 @cite_4 @cite_6 @cite_26 . @cite_25 introduced the task of Visual Dialog and collected the VisDial dataset by pairing subjects on Amazon Mechanical Turk (AMT) to chat about an image (with assigned roles of questioner and answerer). @cite_4 pre-trained questioner and answerer agents on this VisDial dataset via supervised learning and fine-tuned them via self-talk (reinforcement learning), observing that the RL-fine-tuned agents are better at image-guessing after interacting with each other. However, as described in the introduction, they do not evaluate whether this change in AI-AI performance translates to human-AI teams.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_25", "@cite_6" ], "mid": [ "2951357606", "2953119472", "", "2558809543" ], "abstract": [ "End-to-end design of dialogue systems has recently become a popular research topic thanks to powerful tools such as encoder-decoder architectures for sequence-to-sequence learning. Yet, most current approaches cast human-machine dialogue management as a supervised learning problem, aiming at predicting the next utterance of a participant given the full history of the dialogue. This vision is too simplistic to render the intrinsic planning problem inherent to dialogue as well as its grounded nature, making the context of a dialogue larger than the sole history. This is why only chit-chat and question answering tasks have been addressed so far using end-to-end architectures. In this paper, we introduce a Deep Reinforcement Learning method to optimize visually grounded task-oriented dialogues, based on the policy gradient algorithm. This approach is tested on a dataset of 120k dialogues collected through Mechanical Turk and provides encouraging results at solving both the problem of generating natural dialogues and the task of discovering a specific object in a complex picture.", "We introduce the first goal-driven training for visual question answering and dialog agents. Specifically, we pose a cooperative 'image guessing' game between two agents -- Qbot and Abot -- who communicate in natural language dialog so that Qbot can select an unseen image from a lineup of images. We use deep reinforcement learning (RL) to learn the policies of these agents end-to-end -- from pixels to multi-agent multi-round dialog to game reward. We demonstrate two experimental results. First, as a 'sanity check' demonstration of pure RL (from scratch), we show results on a synthetic world, where the agents communicate in ungrounded vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). 
We find that two bots invent their own communication protocol and start using certain symbols to ask/answer about certain visual attributes (shape/color/style). Thus, we demonstrate the emergence of grounded language and communication among 'visual' dialog agents with no human supervision. Second, we conduct large-scale real-image experiments on the VisDial dataset, where we pretrain with supervised dialog data and show that the RL 'fine-tuned' agents significantly outperform SL agents. Interestingly, the RL Qbot learns to ask questions that Abot is good at, ultimately resulting in more informative dialog and a better team.", "", "We introduce GuessWhat?!, a two-player guessing game as a testbed for research on the interplay of computer vision and dialogue systems. The goal of the game is to locate an unknown object in a rich image scene by asking a sequence of questions. Higher-level image understanding, like spatial reasoning and language grounding, is required to solve the proposed task. Our key contribution is the collection of a large-scale dataset consisting of 150K human-played games with a total of 800K visual question-answer pairs on 66K images. We explain our design decisions in collecting the dataset and introduce the oracle and questioner tasks that are associated with the two players of the game. We prototyped deep learning models to establish initial baselines of the introduced tasks." ] }
1708.05122
2747206248
As AI continues to advance, human-AI teams are inevitable. However, progress in AI is routinely measured in isolation, without a human in the loop. It is crucial to benchmark progress in AI, not just in isolation, but also in terms of how it translates to helping humans perform certain tasks, i.e., the performance of human-AI teams. In this work, we design a cooperative game - GuessWhich - to measure human-AI team performance in the specific context of the AI being a visual conversational agent. GuessWhich involves live interaction between the human and the AI. The AI, which we call ALICE, is provided an image which is unseen by the human. Following a brief description of the image, the human questions ALICE about this secret image to identify it from a fixed pool of images. We measure performance of the human-ALICE team by the number of guesses it takes the human to correctly identify the secret image after a fixed number of dialog rounds with ALICE. We compare performance of the human-ALICE teams for two versions of ALICE. Our human studies suggest a counterintuitive trend - that while AI literature shows that one version outperforms the other when paired with an AI questioner bot, we find that this improvement in AI-AI performance does not translate to improved human-AI performance. This suggests a mismatch between benchmarking of AI in isolation and in the context of human-AI teams.
Human Computation Games. Human computation games have been shown to be time- and cost-efficient, reliable, intrinsically engaging for participants @cite_23 @cite_29 , and hence an effective method to collect data annotations. There is a long line of work on designing such Games with a Purpose (GWAP) @cite_11 for data labeling purposes across various domains including images @cite_28 @cite_20 @cite_7 @cite_27 , audio @cite_17 @cite_18 , language @cite_15 @cite_1 , movies @cite_24 . While such games have traditionally focused on human-human collaboration, we extend these ideas to human-AI teams. Rather than collecting labeled data, our game is designed to measure the effectiveness of the AI in the context of human-AI teams.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_28", "@cite_29", "@cite_1", "@cite_17", "@cite_24", "@cite_27", "@cite_23", "@cite_15", "@cite_20", "@cite_11" ], "mid": [ "2407683213", "2067329295", "2141282920", "1600300810", "1561665100", "1991313121", "256394748", "2251512949", "", "1974474704", "2080942732", "2035683813" ], "abstract": [ "", "Since its introduction at CHI 2004, the ESP Game has inspired many similar games that share the goal of gathering data from players. This paper introduces a new mechanism for collecting labeled data using \"games with a purpose.\" In this mechanism, players are provided with either the same or a different object, and asked to describe that object to each other. Based on each other's descriptions, players must decide whether they have the same object or not. We explain why this new mechanism is superior for input data with certain characteristics, introduce an enjoyable new game called \"TagATune\" that collects tags for music clips via this mechanism, and present findings on the data that is collected by this game.", "We introduce a new interactive system: a game that is fun and can be used to create valuable output. When people play the game they help determine the contents of images by providing meaningful labels for them. If the game is played as much as popular online games, we estimate that most images on the Web can be labeled in a few months. Having proper labels associated with each image on the Web would allow for more accurate image search, improve the accessibility of sites (by providing descriptions of images to visually impaired individuals), and help users block inappropriate images. Our system makes a significant contribution because of its valuable output and because of the way it addresses the image-labeling problem. 
Rather than using computer vision techniques, which don't work well enough, we encourage people to do the work by taking advantage of their desire to be entertained.", "Motivation has been one of the central challenges of human computation. A promising approach is the integration of human computation tasks into digital games. Different human computation games have been successfully deployed, but tend to provide relatively narrow gaming experiences. This survey discusses various approaches of digital games for human computation and aims to explore the ties to signal processing and possible generalizations.", "Annotated corpora of the size needed for modern computational linguistics research cannot be created by small groups of hand annotators. One solution is to exploit collaborative work on the Web and one way to do this is through games like the ESP game. Applying this methodology however requires developing methods for teaching subjects the rules of the game and evaluating their contribution while maintaining the game entertainment. In addition, applying this method to linguistic annotation tasks like anaphoric annotation requires developing methods for presenting text and identifying the components of the text that need to be annotated. In this paper we present the first version of Phrase Detectives (http: www.phrasedetectives.org), to our knowledge the first game designed for collaborative linguistic annotation on the Web.", "We have developed an audio-based casual puzzle game which produces a time-stamped transcription of spoken audio as a by-product of play. Our evaluation of the game indicates that it is both fun and challenging. 
The transcripts generated using the game are more accurate than those produced using a standard automatic transcription system and the time-stamps of words are within several hundred milliseconds of ground truth.", "This volume addresses the emerging area of human computation, The chapters, written by leading international researchers, explore existing and future opportunities to combine the respective strengths of both humans and machines in order to create powerful problem-solving capabilities. The book bridges scientific communities, capturing and integrating the unique perspective and achievements of each. It coalesces contributions from industry and across related disciplines in order to motivate, define, and anticipate the future of this exciting new frontier in science and cultural evolution. Readers can expect to find valuable contributions covering Foundations; Application Domains; Techniques and Modalities; Infrastructure and Architecture; Algorithms; Participation; Analysis; Policy and Security and the Impact of Human Computation. Researchers and professionals will find the Handbook of Human Computation a valuable reference tool. The breadth of content also provides a thorough foundation for students of the field.", "In this paper we introduce a new game to crowd-source natural language referring expressions. By designing a two player game, we can both collect and verify referring expressions directly within the game. To date, the game has produced a dataset containing 130,525 expressions, referring to 96,654 distinct objects, in 19,894 photographs of natural scenes. This dataset is larger and more varied than previous REG datasets and allows us to study referring expressions in real-world scenes. We provide an in depth analysis of the resulting dataset. 
Based on our findings, we design a new optimization based model for generating referring expressions and perform experimental evaluations on 3 test sets.", "", "We present a human computation online game for enabling users to contribute to the creation of a corpus of question-resource pairs for harvesting web-based question answering. Our idea was motivated by the popular 'jeopardy' quiz.", "We introduce Peekaboom, an entertaining web-based game that can help computers locate objects in images. People play the game because of its entertainment value, and as a side effect of them playing, we collect valuable image metadata, such as which pixels belong to which object in the image. The collected data could be applied towards constructing more accurate computer vision algorithms, which require massive amounts of training and testing data not currently available. Peekaboom has been played by thousands of people, some of whom have spent over 12 hours a day playing, and thus far has generated millions of data points. In addition to its purely utilitarian aspect, Peekaboom is an example of a new, emerging class of games, which not only bring people together for leisure purposes, but also exist to improve artificial intelligence. Such games appeal to a general audience, while providing answers to problems that computers cannot yet solve.", "Data generated as a side effect of game play also solves computational problems and trains AI algorithms." ] }
1708.05122
2747206248
As AI continues to advance, human-AI teams are inevitable. However, progress in AI is routinely measured in isolation, without a human in the loop. It is crucial to benchmark progress in AI, not just in isolation, but also in terms of how it translates to helping humans perform certain tasks, i.e., the performance of human-AI teams. In this work, we design a cooperative game - GuessWhich - to measure human-AI team performance in the specific context of the AI being a visual conversational agent. GuessWhich involves live interaction between the human and the AI. The AI, which we call ALICE, is provided an image which is unseen by the human. Following a brief description of the image, the human questions ALICE about this secret image to identify it from a fixed pool of images. We measure performance of the human-ALICE team by the number of guesses it takes the human to correctly identify the secret image after a fixed number of dialog rounds with ALICE. We compare performance of the human-ALICE teams for two versions of ALICE. Our human studies suggest a counterintuitive trend - that while AI literature shows that one version outperforms the other when paired with an AI questioner bot, we find that this improvement in AI-AI performance does not translate to improved human-AI performance. This suggests a mismatch between benchmarking of AI in isolation and in the context of human-AI teams.
Evaluating Conversational Agents. Goal-driven (non-visual) conversational models have typically been evaluated on task-completion rate or time-to-task-completion @cite_5 , so shorter conversations are better. At the other end of the spectrum, free-form conversation models are often evaluated by metrics that rely on n-gram overlaps, such as BLEU, METEOR, ROUGE, but these have been shown to correlate poorly with human judgment @cite_13 . Human evaluation of conversations is typically in the format where humans rate the quality of machine utterances given context, without actually taking part in the conversation, as in @cite_4 and @cite_21 . To the best of our knowledge, we are the first to evaluate conversational models via team performance where humans are continuously interacting with agents to succeed at a downstream task.
{ "cite_N": [ "@cite_5", "@cite_21", "@cite_13", "@cite_4" ], "mid": [ "1973021077", "2410983263", "2328886022", "2953119472" ], "abstract": [ "", "Recent neural models of dialogue generation offer great promise for generating responses for conversational agents, but tend to be shortsighted, predicting utterances one at a time while ignoring their influence on future outcomes. Modeling the future direction of a dialogue is crucial to generating coherent, interesting dialogues, a need which led traditional NLP models of dialogue to draw on reinforcement learning. In this paper, we show how to integrate these goals, applying deep reinforcement learning to model future reward in chatbot dialogue. The model simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity (non-repetitive turns), coherence, and ease of answering (related to forward-looking function). We evaluate our model on diversity, length as well as with human judges, showing that the proposed algorithm generates more interactive responses and manages to foster a more sustained conversation in dialogue simulation. This work marks a first step towards learning a neural conversational model based on the long-term success of dialogues.", "We investigate evaluation metrics for dialogue response generation systems where supervised labels, such as task completion, are not available. Recent works in response generation have adopted metrics from machine translation to compare a model's generated response to a single target response. We show that these metrics correlate very weakly with human judgements in the non-technical Twitter domain, and not at all in the technical Ubuntu domain. 
We provide quantitative and qualitative results highlighting specific weaknesses in existing metrics, and provide recommendations for future development of better automatic evaluation metrics for dialogue systems.", "We introduce the first goal-driven training for visual question answering and dialog agents. Specifically, we pose a cooperative 'image guessing' game between two agents -- Qbot and Abot -- who communicate in natural language dialog so that Qbot can select an unseen image from a lineup of images. We use deep reinforcement learning (RL) to learn the policies of these agents end-to-end -- from pixels to multi-agent multi-round dialog to game reward. We demonstrate two experimental results. First, as a 'sanity check' demonstration of pure RL (from scratch), we show results on a synthetic world, where the agents communicate in ungrounded vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find that two bots invent their own communication protocol and start using certain symbols to ask/answer about certain visual attributes (shape/color/style). Thus, we demonstrate the emergence of grounded language and communication among 'visual' dialog agents with no human supervision. Second, we conduct large-scale real-image experiments on the VisDial dataset, where we pretrain with supervised dialog data and show that the RL 'fine-tuned' agents significantly outperform SL agents. Interestingly, the RL Qbot learns to ask questions that Abot is good at, ultimately resulting in more informative dialog and a better team." ] }
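The free-form evaluation metrics mentioned in the related-work paragraph above (BLEU, METEOR, ROUGE) all build on counting n-grams shared between a candidate utterance and a reference. A minimal sketch of clipped n-gram precision, the core quantity inside BLEU, is shown below; the brevity penalty and the geometric mean over n-gram orders are omitted for brevity, so this is an illustration of the idea rather than a full BLEU implementation.

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_precision(candidate, reference, n=2):
    """Clipped n-gram precision: each candidate n-gram is credited at
    most as many times as it appears in the reference."""
    cand, ref = candidate.split(), reference.split()
    c_counts = Counter(ngrams(cand, n))
    r_counts = Counter(ngrams(ref, n))
    if not c_counts:
        return 0.0
    overlap = sum(min(c, r_counts[g]) for g, c in c_counts.items())
    return overlap / sum(c_counts.values())
```

The clipping step (`min(c, r_counts[g])`) is what keeps a degenerate candidate such as "the the the" from scoring highly against a reference containing "the" once, yet, as the paragraph above notes, even these metrics correlate poorly with human judgment of dialogue quality.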
1708.05122
2747206248
As AI continues to advance, human-AI teams are inevitable. However, progress in AI is routinely measured in isolation, without a human in the loop. It is crucial to benchmark progress in AI, not just in isolation, but also in terms of how it translates to helping humans perform certain tasks, i.e., the performance of human-AI teams. In this work, we design a cooperative game - GuessWhich - to measure human-AI team performance in the specific context of the AI being a visual conversational agent. GuessWhich involves live interaction between the human and the AI. The AI, which we call ALICE, is provided an image which is unseen by the human. Following a brief description of the image, the human questions ALICE about this secret image to identify it from a fixed pool of images. We measure performance of the human-ALICE team by the number of guesses it takes the human to correctly identify the secret image after a fixed number of dialog rounds with ALICE. We compare performance of the human-ALICE teams for two versions of ALICE. Our human studies suggest a counterintuitive trend - that while AI literature shows that one version outperforms the other when paired with an AI questioner bot, we find that this improvement in AI-AI performance does not translate to improved human-AI performance. This suggests a mismatch between benchmarking of AI in isolation and in the context of human-AI teams.
Turing Test. Finally, our game is in line with the ideas in @cite_19 , which re-imagines the traditional Turing Test for state-of-the-art AI systems. We take the pragmatic view that an effective AI teammate need not appear human-like, act like one, or be mistaken for one, provided its behavior does not feel jarring or baffle its teammates, leaving them wondering not about what it is thinking but whether it is.
{ "cite_N": [ "@cite_19" ], "mid": [ "1651525653" ], "abstract": [ "In 1950, when Turing proposed to replace the question \"Can machines think?\" with the question \"Are there imaginable digital computers which would do well in the imitation game?\" computer science was not yet a field of study, Shannon’s theory of information had just begun to change the way people thought about communication, and psychology was only starting to look beyond behaviorism. It is stunning that so many predictions in Turing’s 1950 Mind paper were right. In the decades since that paper appeared, with its inspiring challenges, research in computer science, neuroscience, and the behavioral sciences has radically changed thinking about mental processes and communication, and the ways in which people use computers has evolved even more dramatically. Turing, were he writing now, might still replace \"Can machines think?\" with an operational challenge, but it is likely he would propose a very different test. This paper considers what that might be in light of Turing’s paper and advances in the decades since it was written." ] }
1708.05133
2749330229
A growing demand for natural-scene text detection has been witnessed by the computer vision community since text information plays a significant role in scene understanding and image indexing. Deep neural networks are being used due to their strong capabilities of pixel-wise classification or word localization, as they are in common vision problems. In this paper, we present a novel two-task network that integrates bottom-up and top-down cues. The first task aims to predict a pixel-by-pixel labeling, from which word proposals are generated with a canonical connected component analysis. The second task aims to output a bundle of character candidates used later to verify the word proposals. The two sub-networks share base convolutional features and, moreover, we present a new loss to strengthen the interaction between them. We evaluate the proposed network on public benchmark datasets and show it can detect arbitrary-orientation scene text with a finer output boundary. On the ICDAR 2013 text localization task, we achieve the state-of-the-art performance with an F-score of 0.919 and a much better recall of 0.915.
In this paper, we focus on the use of convolutional neural networks (CNNs) in scene-text detection. This line of work dates back to 2012, when Wang et al. @cite_7 presented a sliding-window approach to detect individual characters, using a convolutional network as a 62-category classifier. With the emergence of dedicated networks for common object detection, applying those models to the text problem seems straightforward. In DeepText, Zhong et al. @cite_16 follow Faster R-CNN @cite_1 to detect words in images; the Region Proposal Network is redesigned with the introduction of multiple sets of convolution and pooling layers. The work of @cite_15 follows another recent network, SSD @cite_18 , with implicit proposals; the authors also improve the adaptation of the model to the text task by adjusting the network parameters. The major challenge for word detection networks is the great variation of words in aspect ratio and orientation, both of which can significantly reduce the efficiency of word proposals. In the work of Shi et al. @cite_5 , a Spatial Transformer Network @cite_22 is introduced: by projecting selected landmark points, the problems of rotation and perspective distortion can be partly solved.
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_7", "@cite_1", "@cite_5", "@cite_15", "@cite_16" ], "mid": [ "2193145675", "2951005624", "1607307044", "2953106684", "", "2550687635", "2395360388" ], "abstract": [ "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.", "Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. 
In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.", "Full end-to-end text recognition in natural images is a challenging problem that has received much attention recently. Traditional systems in this area have relied on elaborate models incorporating carefully hand-engineered features or large amounts of prior knowledge. In this paper, we take a different route and combine the representational power of large, multilayer neural networks together with recent developments in unsupervised feature learning, which allows us to use a common framework to train highly-accurate text detector and character recognizer modules. Then, using only simple off-the-shelf methods, we integrate these two modules into a full end-to-end, lexicon-driven, scene text recognition system that achieves state-of-the-art performance on standard benchmarks, namely Street View Text and ICDAR 2003.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. 
An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "", "This paper presents an end-to-end trainable fast scene text detector, named TextBoxes, which detects scene text with both high accuracy and efficiency in a single network forward pass, involving no post-process except for a standard non-maximum suppression. TextBoxes outperforms competing methods in terms of text localization accuracy and is much faster, taking only 0.09s per image in a fast implementation. Furthermore, combined with a text recognizer, TextBoxes significantly outperforms state-of-the-art approaches on word spotting and end-to-end text recognition tasks.", "In this paper, we develop a novel unified framework called DeepText for text region proposal generation and text detection in natural images via a fully convolutional neural network (CNN). First, we propose the inception region proposal network (Inception-RPN) and design a set of text characteristic prior bounding boxes to achieve high word recall with only hundred level candidate proposals. 
Next, we present a powerful text detection network that embeds ambiguous text category (ATC) information and multilevel region-of-interest pooling (MLRP) for text and non-text classification and accurate localization. Finally, we apply an iterative bounding box voting scheme to pursue high recall in a complementary manner and introduce a filtering algorithm to retain the most suitable bounding box, while removing redundant inner and outer boxes for each text instance. Our approach achieves an F-measure of 0.83 and 0.85 on the ICDAR 2011 and 2013 robust text detection benchmarks, outperforming previous state-of-the-art results." ] }
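The aspect-ratio difficulty noted in the related work above is typically attacked by widening the set of default boxes (anchors). A minimal, illustrative sketch of equal-area anchor generation in the SSD style follows; the exact scales and ratios used by TextBoxes or DeepText differ, and `make_anchors` is our own name:

```python
import math

def make_anchors(cx, cy, scale, aspect_ratios):
    """Generate default boxes centered at (cx, cy) as (x1, y1, x2, y2).

    Each aspect ratio r yields width scale*sqrt(r) and height
    scale/sqrt(r), so every box has the same area scale**2. Long
    ratios (e.g. 5 or 7) are what word-level detectors add to
    cover elongated words.
    """
    boxes = []
    for r in aspect_ratios:
        w = scale * math.sqrt(r)
        h = scale / math.sqrt(r)
        boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes
```

Keeping the area fixed while varying the ratio means a wide anchor does not also grow in scale, so it still matches words of the same size as its square sibling.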
1708.05133
2749330229
A growing demand for natural-scene text detection has been witnessed by the computer vision community since text information plays a significant role in scene understanding and image indexing. Deep neural networks are being used due to their strong capabilities of pixel-wise classification or word localization, as they are in common vision problems. In this paper, we present a novel two-task network that integrates bottom-up and top-down cues. The first task aims to predict a pixel-by-pixel labeling, from which word proposals are generated with a canonical connected component analysis. The second task aims to output a bundle of character candidates used later to verify the word proposals. The two sub-networks share base convolutional features and, moreover, we present a new loss to strengthen the interaction between them. We evaluate the proposed network on public benchmark datasets and show it can detect arbitrary-orientation scene text with a finer output boundary. On the ICDAR 2013 text localization task, we achieve the state-of-the-art performance with an F-score of 0.919 and a much better recall of 0.915.
Another group of methods is based on image segmentation networks. Zhang et al. @cite_23 use the Fully Convolutional Network (FCN) @cite_19 to obtain salient maps whose foreground regions serve as candidates of text lines. The trouble is that the candidates may stick to each other, and their boundaries are often blurry. To make the final predictions of quadrilateral shape, the authors have to impose some hard constraints regarding intensity and geometry.
{ "cite_N": [ "@cite_19", "@cite_23" ], "mid": [ "2952632681", "2952365771" ], "abstract": [ "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.", "In this paper, we propose a novel approach for text detection in natural images. Both local and global cues are taken into account for localizing text lines in a coarse-to-fine procedure. First, a Fully Convolutional Network (FCN) model is trained to predict the salient map of text regions in a holistic manner. Then, text line hypotheses are estimated by combining the salient map and character components. Finally, another FCN classifier is used to predict the centroid of each character, in order to remove the false hypotheses. The framework is general for handling text in multiple orientations, languages and fonts. 
The proposed method consistently achieves the state-of-the-art performance on three text detection benchmarks: MSRA-TD500, ICDAR2015 and ICDAR2013." ] }
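Both the "canonical connected component analysis" in the abstract above and the candidate-grouping step in Zhang et al. reduce to labeling foreground regions of a thresholded map. A minimal 4-connectivity sketch follows; BFS labeling is our choice of algorithm, as the papers do not specify one:

```python
from collections import deque

def connected_components(mask):
    """Label 4-connected foreground regions of a binary 2-D mask.

    Returns a list of components, each a list of (row, col) pixels --
    the grouping step that turns a thresholded salient map into
    text-line (or word) candidates.
    """
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                comp, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:  # breadth-first flood fill of one region
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                comps.append(comp)
    return comps
```

The "candidates may stick to each other" problem noted above is visible here: two nearby words whose foreground pixels touch by even one 4-neighbor collapse into a single component.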
1708.05234
2964325361
Although tremendous strides have been made in face detection, one of the remaining open challenges is to achieve real-time speed on the CPU as well as maintain high performance, since effective models for face detection tend to be computationally prohibitive. To address this challenge, we propose a novel face detector, named FaceBoxes, with superior performance on both speed and accuracy. Specifically, our method has a lightweight yet powerful network structure that consists of the Rapidly Digested Convolutional Layers (RDCL) and the Multiple Scale Convolutional Layers (MSCL). The RDCL is designed to enable FaceBoxes to achieve real-time speed on the CPU. The MSCL aims at enriching the receptive fields and discretizing anchors over different layers to handle faces of various scales. Besides, we propose a new anchor densification strategy to make different types of anchors have the same density on the image, which significantly improves the recall rate of small faces. As a consequence, the proposed detector runs at 20 FPS on a single CPU core and 125 FPS using a GPU for VGA-resolution images. Moreover, the speed of FaceBoxes is invariant to the number of faces. We comprehensively evaluate this method and present state-of-the-art detection performance on several face detection benchmark datasets, including the AFW, PASCAL face, and FDDB.
Previous face detection systems are mostly based on hand-crafted features. Since the seminal Viola-Jones face detector @cite_23 , which combines Haar features, AdaBoost learning and cascade inference, many subsequent works have been proposed for real-time face detection, such as new local features @cite_33 @cite_40 , new boosting algorithms @cite_16 @cite_28 and new cascade structures @cite_27 @cite_46 .
{ "cite_N": [ "@cite_33", "@cite_28", "@cite_40", "@cite_27", "@cite_23", "@cite_46", "@cite_16" ], "mid": [ "2247274765", "2118373237", "2041497292", "2170110077", "2137401668", "1555563476", "1984525543" ], "abstract": [ "We propose a method to address challenges in unconstrained face detection, such as arbitrary pose variations and occlusions. First, a new image feature called Normalized Pixel Difference (NPD) is proposed. NPD feature is computed as the difference to sum ratio between two pixel values, inspired by the Weber Fraction in experimental psychology. The new feature is scale invariant, bounded, and is able to reconstruct the original image. Second, we propose a deep quadratic tree to learn the optimal subset of NPD features and their combinations, so that complex face manifolds can be partitioned by the learned rules. This way, only a single soft-cascade classifier is needed to handle unconstrained face detection. Furthermore, we show that the NPD features can be efficiently obtained from a look up table, and the detection template can be easily scaled, making the proposed face detector very fast. Experimental results on three public face datasets (FDDB, GENKI, and CMU-MIT) show that the proposed method achieves state-of-the-art performance in detecting unconstrained faces with arbitrary pose variations and occlusions in cluttered scenes.", "Training a cascade-based face detector using boosting and Haar features is computationally expensive, often requiring weeks on single CPU machines. The bottleneck is at training and selecting Haar features for a single weak classifier, currently in minutes. Traditional techniques for training a weak classifier usually run in O(NT log N), with N examples (approximately 10,000), and T features (approximately 40,000). 
We present a method to train a weak classifier in time O(Nd^2 + T), where d is the number of pixels of the probed image sub-window (usually from 350 to 500), by using only the statistics of the weighted input data. Experimental results revealed a significantly reduced training time of a weak classifier to the order of seconds. In particular, this method suffers only a minimal increase in training time with very large increases in the number of Haar features, enjoying a significant gain in accuracy, even with reduced training time.", "Face detection has drawn much attention in recent decades since the seminal work by Viola and Jones. While many subsequent works have improved it with more powerful learning algorithms, the feature representation used for face detection still can’t meet the demand for effectively and efficiently handling faces with large appearance variance in the wild. To solve this bottleneck, we borrow the concept of channel features to the face detection domain, which extends the image channel to diverse types like gradient magnitude and oriented gradient histograms and therefore encodes rich information in a simple form. We adopt a novel variant called aggregate channel features, make a full exploration of feature design, and discover a multiscale version of features with better performance. To deal with poses of faces in the wild, we propose a multi-view detection approach featuring score re-ranking and detection adjustment. Following the learning pipelines in the Viola-Jones framework, the multi-view face detector using aggregate channel features surpasses current state-of-the-art detectors on AFW and FDDB testsets, while running at 42 FPS", "We describe a method for training object detectors using a generalization of the cascade architecture, which results in a detection rate and speed comparable to that of the best published detectors while allowing for easier training and a detector with fewer features. 
In addition, the method allows for quickly calibrating the detector for a target detection rate, false positive rate or speed. One important advantage of our method is that it enables systematic exploration of the ROC surface, which characterizes the trade-off between accuracy and speed for a given classifier.", "This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; , 1998; Schneiderman and Kanade, 2000; , 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second.", "A new boosting algorithm, called FloatBoost, is proposed to overcome the monotonicity problem of the sequential AdaBoost learning. AdaBoost [1, 2] is a sequential forward search procedure using the greedy selection strategy. The premise offered by the sequential procedure can be broken down when the monotonicity assumption, i.e. that when adding a new feature to the current set, the value of the performance criterion does not decrease, is violated. 
FloatBoost incorporates the idea of Floating Search [3] into AdaBoost to solve the non-monotonicity problem encountered in the sequential search of AdaBoost. We then present a system which learns to detect multi-view faces using FloatBoost. The system uses a coarse-to-fine, simple-to-complex architecture called detector-pyramid. FloatBoost learns the component detectors in the pyramid and yields similar or higher classification accuracy than AdaBoost with a smaller number of weak classifiers. This work leads to the first real-time multi-view face detection system in the world. It runs at 200 ms per image of size 320x240 pixels on a Pentium-III CPU of 700 MHz. A live demo will be shown at the conference.", "Cascades of boosted ensembles have become popular in the object detection community following their highly successful introduction in the face detector of Viola and Jones. Since then, researchers have sought to improve upon the original approach by incorporating new methods along a variety of axes (e.g. alternative boosting methods, feature sets, etc.). Nevertheless, key decisions about how many hypotheses to include in an ensemble and the appropriate balance of detection and false positive rates in the individual stages are often made by user intervention or by an automatic method that produces unnecessarily slow detectors. We propose a novel method for making these decisions, which exploits the shape of the stage ROC curves in ways that have been previously ignored. The result is a detector that is significantly faster than the one produced by the standard automatic method. When this algorithm is combined with a recycling method for reusing the outputs of early stages in later ones and with a retracing method that inserts new early rejection points in the cascade, the detection speed matches that of the best hand-crafted detector. 
We also exploit joint distributions over several features in weak learning to improve overall detector accuracy, and explore ways to improve training time by aggressively filtering features." ] }
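The NPD feature quoted in the first abstract above is a one-line computation; a minimal sketch showing its boundedness and scale invariance (the function name is ours, and the zero-input convention follows the abstract's claim that the feature is bounded):

```python
def npd(a: float, b: float) -> float:
    """Normalized Pixel Difference: (a - b) / (a + b).

    Bounded in [-1, 1] for non-negative pixel values and scale
    invariant: npd(k*a, k*b) == npd(a, b) for any k > 0.
    Defined as 0 when both pixels are 0.
    """
    if a == 0 and b == 0:
        return 0.0
    return (a - b) / (a + b)

# Scale invariance: doubling both intensities leaves the feature unchanged.
assert npd(100, 50) == npd(200, 100)
```

Because the ratio depends only on the two pixel values, the abstract's look-up-table trick amounts to precomputing `npd` for all 256 x 256 intensity pairs.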
1708.05234
2964325361
Although tremendous strides have been made in face detection, one of the remaining open challenges is to achieve real-time speed on the CPU as well as maintain high performance, since effective models for face detection tend to be computationally prohibitive. To address this challenge, we propose a novel face detector, named FaceBoxes, with superior performance on both speed and accuracy. Specifically, our method has a lightweight yet powerful network structure that consists of the Rapidly Digested Convolutional Layers (RDCL) and the Multiple Scale Convolutional Layers (MSCL). The RDCL is designed to enable FaceBoxes to achieve real-time speed on the CPU. The MSCL aims at enriching the receptive fields and discretizing anchors over different layers to handle faces of various scales. Besides, we propose a new anchor densification strategy to make different types of anchors have the same density on the image, which significantly improves the recall rate of small faces. As a consequence, the proposed detector runs at 20 FPS on a single CPU core and 125 FPS using a GPU for VGA-resolution images. Moreover, the speed of FaceBoxes is invariant to the number of faces. We comprehensively evaluate this method and present state-of-the-art detection performance on several face detection benchmark datasets, including the AFW, PASCAL face, and FDDB.
Besides the cascade framework, methods based on structural models have progressively achieved better performance and become increasingly efficient. Several works @cite_10 @cite_26 @cite_47 introduce the deformable part model (DPM) into face detection tasks. These works use supervised parts, more pose partitions, better training or more efficient inference to achieve remarkable detection performance.
{ "cite_N": [ "@cite_47", "@cite_26", "@cite_10" ], "mid": [ "2047508432", "2034025266", "2056025798" ], "abstract": [ "We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com).", "Despite the successes in the last two decades, the state-of-the-art face detectors still have problems in dealing with images in the wild due to large appearance variations. Instead of leaving appearance variations directly to statistical learning algorithms, we propose a hierarchical part based structural model to explicitly capture them. The model enables part subtype option to handle local appearance variations such as closed and open month, and part deformation to capture the global appearance variations such as pose and expression. In detection, candidate window is fitted to the structural model to infer the part location and part subtype, and detection score is then computed based on the fitted configuration. In this way, the influence of appearance variation is reduced. Besides the face model, we exploit the co-occurrence between face and body, which helps to handle large variations, such as heavy occlusions, to further boost the face detection performance. 
We present a phrase based representation for body detection, and propose a structural context model to jointly encode the outputs of face detector and body detector. Benefit from the rich structural face and body information, as well as the discriminative structural learning algorithm, our method achieves state-of-the-art performance on FDDB, AFW and a self-annotated dataset, under wide comparisons with commercial and academic methods. (C) 2013 Elsevier B.V. All rights reserved.", "This paper solves the speed bottleneck of deformable part model (DPM), while maintaining the accuracy in detection on challenging datasets. Three prohibitive steps in cascade version of DPM are accelerated, including 2D correlation between root filter and feature map, cascade part pruning and HOG feature extraction. For 2D correlation, the root filter is constrained to be low rank, so that 2D correlation can be calculated by more efficient linear combination of 1D correlations. A proximal gradient algorithm is adopted to progressively learn the low rank filter in a discriminative manner. For cascade part pruning, neighborhood aware cascade is proposed to capture the dependence in neighborhood regions for aggressive pruning. Instead of explicit computation of part scores, hypotheses can be pruned by scores of neighborhoods under the first order approximation. For HOG feature extraction, look-up tables are constructed to replace expensive calculations of orientation partition and magnitude with simpler matrix index operations. Extensive experiments show that (a) the proposed method is 4 times faster than the current fastest DPM method with similar accuracy on Pascal VOC, (b) the proposed method achieves state-of-the-art accuracy on pedestrian and face detection task with frame-rate speed." ] }
1708.05234
2964325361
Although tremendous strides have been made in face detection, one of the remaining open challenges is to achieve real-time speed on the CPU as well as maintain high performance, since effective models for face detection tend to be computationally prohibitive. To address this challenge, we propose a novel face detector, named FaceBoxes, with superior performance on both speed and accuracy. Specifically, our method has a lightweight yet powerful network structure that consists of the Rapidly Digested Convolutional Layers (RDCL) and the Multiple Scale Convolutional Layers (MSCL). The RDCL is designed to enable FaceBoxes to achieve real-time speed on the CPU. The MSCL aims at enriching the receptive fields and discretizing anchors over different layers to handle faces of various scales. Besides, we propose a new anchor densification strategy to make different types of anchors have the same density on the image, which significantly improves the recall rate of small faces. As a consequence, the proposed detector runs at 20 FPS on a single CPU core and 125 FPS using a GPU for VGA-resolution images. Moreover, the speed of FaceBoxes is invariant to the number of faces. We comprehensively evaluate this method and present state-of-the-art detection performance on several face detection benchmark datasets, including the AFW, PASCAL face, and FDDB.
The first use of CNNs for face detection can be traced back to 1994, when @cite_19 used a trained CNN in a sliding-window manner to detect faces. @cite_7 @cite_48 introduce a retinally connected neural network for upright frontal face detection, and a "router" network designed to estimate the orientation for rotation-invariant face detection. @cite_3 develop a neural network to detect semi-frontal faces. @cite_31 train a CNN for simultaneous face detection and pose estimation. These earlier methods achieve relatively good performance only on easy datasets.
{ "cite_N": [ "@cite_7", "@cite_48", "@cite_3", "@cite_19", "@cite_31" ], "mid": [ "", "2154376992", "2125653371", "2056695679", "2616465717" ], "abstract": [ "", "In this paper, we present a neural network-based face detection system. Unlike similar systems which are limited to detecting upright, frontal faces, this system detects faces at any degree of rotation in the image plane. The system employs multiple networks; a \"router\" network first processes each input window to determine its orientation and then uses this information to prepare the window for one or more \"detector\" networks. We present the training methods for both types of networks. We also perform sensitivity analysis on the networks, and present empirical results on a large test set. Finally, we present preliminary results for detecting faces rotated out of the image plane, such as profiles and semi-profiles.", "In this paper, we present a connectionist approach for detecting and precisely localizing semi-frontal human faces in complex images, making no assumption about the content or the lighting conditions of the scene, or about the size or the appearance of the faces. We propose a convolutional neural network architecture designed to recognize strongly variable face patterns directly from pixel images with no preprocessing, by automatically synthesizing its own set of feature extractors from a large training set of faces. We present in details the optimized design of our architecture, our learning strategy and the resulting process of face detection. We also provide experimental results to demonstrate the robustness of our approach and its capability to precisely detect extremely variable faces in uncontrolled environments.", "An original approach is presented for the localisation of objects in an image which approach is neuronal and has two steps. 
In the first step, a rough localisation is performed by presenting each pixel with its neighbourhood to a neural net which is able to indicate whether this pixel and its neighbourhood are the image of the search object. This first filter does not discriminate for position. From its result, areas which might contain an image of the object can be selected. In the second step, these areas are presented to another neural net which can determine the exact position of the object in each area. This algorithm is applied to the problem of localising faces in images.", "We describe a novel method for simultaneously detecting faces and estimating their pose in real time. The method employs a convolutional network to map images of faces to points on a low-dimensional manifold parametrized by pose, and images of non-faces to points far away from that manifold. Given an image, detecting a face and estimating its pose is viewed as minimizing an energy function with respect to the face non-face binary variable and the continuous pose parameters. The system is trained to minimize a loss function that drives correct combinations of labels and pose to be associated with lower energy values than incorrect ones. The system is designed to handle very large range of poses without retraining. The performance of the system was tested on three standard data sets---for frontal views, rotated faces, and profiles---is comparable to previous systems that are designed to handle a single one of these data sets. We show that a system trained simuiltaneously for detection and pose estimation is more accurate on both tasks than similar systems trained for each task separately." ] }
1708.05234
2964325361
Although tremendous strides have been made in face detection, one of the remaining open challenges is to achieve real-time speed on the CPU as well as maintain high performance, since effective models for face detection tend to be computationally prohibitive. To address this challenge, we propose a novel face detector, named FaceBoxes, with superior performance on both speed and accuracy. Specifically, our method has a lightweight yet powerful network structure that consists of the Rapidly Digested Convolutional Layers (RDCL) and the Multiple Scale Convolutional Layers (MSCL). The RDCL is designed to enable FaceBoxes to achieve real-time speed on the CPU. The MSCL aims at enriching the receptive fields and discretizing anchors over different layers to handle faces of various scales. Besides, we propose a new anchor densification strategy to make different types of anchors have the same density on the image, which significantly improves the recall rate of small faces. As a consequence, the proposed detector runs at 20 FPS on a single CPU core and 125 FPS using a GPU for VGA-resolution images. Moreover, the speed of FaceBoxes is invariant to the number of faces. We comprehensively evaluate this method and present state-of-the-art detection performance on several face detection benchmark datasets, including the AFW, PASCAL face, and FDDB.
Recent years have witnessed the advance of CNN-based face detectors. CCF @cite_6 uses boosting on top of CNN features for face detection. @cite_38 fine-tunes a CNN model trained on the 1k-class ImageNet classification task for the face versus non-face classification task. Faceness @cite_14 trains a series of CNNs for facial attribute recognition to detect partially occluded faces. CascadeCNN @cite_37 develops a cascade architecture built on CNNs with powerful discriminative capability and high performance. @cite_0 proposes to jointly train CascadeCNN to realize end-to-end optimization. Similar to @cite_45 , MTCNN @cite_20 proposes a multi-task cascaded CNN framework for joint face detection and alignment. UnitBox @cite_32 introduces a new intersection-over-union loss function. CMS-RCNN @cite_24 applies Faster R-CNN to face detection with body contextual information. Convnet @cite_25 integrates a CNN with a 3D face model in an end-to-end multi-task learning framework. STN @cite_9 proposes a new supervised transformer network and an RoI convolution for face detection.
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_14", "@cite_9", "@cite_32", "@cite_6", "@cite_0", "@cite_24", "@cite_45", "@cite_25", "@cite_20" ], "mid": [ "1970456555", "1934410531", "2209882149", "2495387757", "2504335775", "345900524", "2473640056", "2432917172", "204612701", "2417750831", "2341528187" ], "abstract": [ "In this paper we consider the problem of multi-view face detection. While there has been significant research on this problem, current state-of-the-art approaches for this task require annotation of facial landmarks, e.g. TSM [25], or annotation of face poses [28, 22]. They also require training dozens of models to fully capture faces in all orientations, e.g. 22 models in HeadHunter method [22]. In this paper we propose Deep Dense Face Detector (DDFD), a method that does not require pose landmark annotation and is able to detect faces in a wide range of orientations using a single model based on deep convolutional neural networks. The proposed method has minimal complexity; unlike other recent deep learning object detection methods [9], it does not require additional components such as segmentation, bounding-box regression, or SVM classifiers. Furthermore, we analyzed scores of the proposed face detector for faces in different orientations and found that 1) the proposed method is able to detect faces from different angles and can handle occlusion to some extent, 2) there seems to be a correlation between distribution of positive examples in the training set and scores of the proposed face detector. The latter suggests that the proposed method's performance can be further improved by using better sampling strategies and more sophisticated data augmentation techniques. 
Evaluations on popular face detection benchmark datasets show that our single-model face detector algorithm has similar or better performance compared to the previous methods, which are more complex and require annotations of either different poses or facial landmarks.", "In real-world face detection, large visual variations, such as those due to pose, expression, and lighting, demand an advanced discriminative model to accurately differentiate faces from the backgrounds. Consequently, effective models for the problem tend to be computationally prohibitive. To address these two conflicting challenges, we propose a cascade architecture built on convolutional neural networks (CNNs) with very powerful discriminative capability, while maintaining high performance. The proposed CNN cascade operates at multiple resolutions, quickly rejects the background regions in the fast low resolution stages, and carefully evaluates a small number of challenging candidates in the last high resolution stage. To improve localization effectiveness, and reduce the number of candidates at later stages, we introduce a CNN-based calibration stage after each of the detection stages in the cascade. The output of each calibration stage is used to adjust the detection window position for input to the subsequent stage. The proposed method runs at 14 FPS on a single CPU core for VGA-resolution images and 100 FPS using a GPU, and achieves state-of-the-art detection performance on two public face detection benchmarks.", "In this paper, we propose a novel deep convolutional network (DCN) that achieves outstanding performance on FDDB, PASCAL Face, and AFW. Specifically, our method achieves a high recall rate of 90.99 on the challenging FDDB benchmark, outperforming the state-of-the-art method [23] by a large margin of 2.91 . Importantly, we consider finding faces from a new perspective through scoring facial parts responses by their spatial structure and arrangement. 
The scoring mechanism is carefully formulated considering challenging cases where faces are only partially visible. This consideration allows our network to detect faces under severe occlusion and unconstrained pose variation, which are the main difficulty and bottleneck of most existing face detection approaches. We show that despite the use of DCN, our network can achieve practical runtime speed.", "Large pose variations remain to be a challenge that confronts real-word face detection. We propose a new cascaded Convolutional Neural Network, dubbed the name Supervised Transformer Network, to address this challenge. The first stage is a multi-task Region Proposal Network (RPN), which simultaneously predicts candidate face regions along with associated facial landmarks. The candidate regions are then warped by mapping the detected facial landmarks to their canonical positions to better normalize the face patterns. The second stage, which is a RCNN, then verifies if the warped candidate regions are valid faces or not. We conduct end-to-end learning of the cascaded network, including optimizing the canonical positions of the facial landmarks. This supervised learning of the transformations automatically selects the best scale to differentiate face non-face patterns. By combining feature maps from both stages of the network, we achieve state-of-the-art detection accuracies on several public benchmarks. For real-time performance, we run the cascaded network only on regions of interests produced from a boosting cascade face detector. Our detector runs at 30 FPS on a single CPU core for a VGA-resolution image.", "In present object detection systems, the deep convolutional neural networks (CNNs) are utilized to predict bounding boxes of object candidates, and have gained performance advantages over the traditional region proposal methods. 
However, existing deep CNN methods assume the object bounds to be four independent variables, which could be regressed by the l2 loss separately. Such an oversimplified assumption is contrary to the well-received observation, that those variables are correlated, resulting to less accurate localization. To address the issue, we firstly introduce a novel Intersection over Union (IoU) loss function for bounding box prediction, which regresses the four bounds of a predicted box as a whole unit. By taking the advantages of IoU loss and deep fully convolutional networks, the UnitBox is introduced, which performs accurate and efficient localization, shows robust to objects of varied shapes and scales, and converges fast. We apply UnitBox on face detection task and achieve the best performance among all published methods on the FDDB benchmark.", "Deep learning methods are powerful tools but often suffer from expensive computation and limited flexibility. An alternative is to combine light-weight models with deep representations. As successful cases exist in several visual problems, a unified framework is absent. In this paper, we revisit two widely used approaches in computer vision, namely filtered channel features and Convolutional Neural Networks (CNN), and absorb merits from both by proposing an integrated method called Convolutional Channel Features (CCF). CCF transfers low-level features from pre-trained CNN models to feed the boosting forest model. With the combination of CNN features and boosting forest, CCF benefits from the richer capacity in feature representation compared with channel features, as well as lower cost in computation and storage compared with end-to-end CNN methods. 
We show that CCF serves as a good way of tailoring pre-trained CNN models to diverse tasks without fine-tuning the whole network to each task by achieving state-of-the-art performances in pedestrian detection, face detection, edge detection and object proposal generation.", "Cascade has been widely used in face detection, where classifier with low computation cost can be firstly used to shrink most of the background while keeping the recall. The cascade in detection is popularized by seminal Viola-Jones framework and then widely used in other pipelines, such as DPM and CNN. However, to our best knowledge, most of the previous detection methods use cascade in a greedy manner, where previous stages in cascade are fixed when training a new stage. So optimizations of different CNNs are isolated. In this paper, we propose joint training to achieve end-to-end optimization for CNN cascade. We show that the back propagation algorithm used in training CNN can be naturally used in training CNN cascade. We present how jointly training can be conducted on naive CNN cascade and more sophisticated region proposal network (RPN) and fast R-CNN. Experiments on face detection benchmarks verify the advantages of the joint training.", "Robust face detection in the wild is one of the ultimate components to support various facial related problems, i.e., unconstrained face recognition, facial periocular recognition, facial landmarking and pose estimation, facial expression recognition, 3D facial model construction, etc. Although the face detection problem has been intensely studied for decades with various commercial applications, it still meets problems in some real-world scenarios due to numerous challenges, e.g., heavy facial occlusions, extremely low resolutions, strong illumination, exceptional pose variations, image or video compression artifacts, etc. 
In this paper, we present a face detection approach named Contextual Multi-Scale Region-based Convolution Neural Network (CMS-RCNN) to robustly solve the problems mentioned above. Similar to the region-based CNNs, our proposed network consists of the region proposal component and the region-of-interest (RoI) detection component. However, far apart of that network, there are two main contributions in our proposed network that play a significant role to achieve the state-of-the-art performance in face detection. First, the multi-scale information is grouped both in region proposal and RoI detection to deal with tiny face regions. Second, our proposed network allows explicit body contextual reasoning in the network inspired from the intuition of human vision system. The proposed approach is benchmarked on two recent challenging face detection databases, i.e., the WIDER FACE Dataset which contains high degree of variability, as well as the Face Detection Dataset and Benchmark (FDDB). The experimental results show that our proposed approach trained on WIDER FACE Dataset outperforms strong baselines on WIDER FACE Dataset by a large margin, and consistently achieves competitive results on FDDB against the recent state-of-the-art face detection methods.", "We present a new state-of-the-art approach for face detection. The key idea is to combine face alignment with detection, observing that aligned face shapes provide better features for face classification. To make this combination more effective, our approach learns the two tasks jointly in the same cascade framework, by exploiting recent advances in face alignment. Such joint learning greatly enhances the capability of cascade detection and still retains its realtime performance. 
Extensive experiments show that our approach achieves the best accuracy on challenging datasets, where all existing solutions are either inaccurate or too slow.", "This paper presents a method for face detection in the wild, which integrates a ConvNet and a 3D mean face model in an end-to-end multi-task discriminative learning framework. The 3D mean face model is predefined and fixed (e.g., we used the one provided in the AFLW dataset). The ConvNet consists of two components: (i) The face proposal component computes face bounding box proposals via estimating facial key-points and the 3D transformation (rotation and translation) parameters for each predicted key-point w.r.t. the 3D mean face model. (ii) The face verification component computes detection results by pruning and refining proposals based on facial key-points based configuration pooling. The proposed method addresses two issues in adapting state-of-the-art generic object detection ConvNets (e.g., faster R-CNN) for face detection: (i) One is to eliminate the heuristic design of predefined anchor boxes in the region proposals network (RPN) by exploiting a 3D mean face model. (ii) The other is to replace the generic RoI (Region-of-Interest) pooling layer with a configuration pooling layer to respect underlying object structures. The multi-task loss consists of three terms: the classification Softmax loss and the location smooth (l_1 )-losses of both the facial key-points and the face bounding boxes. In experiments, our ConvNet is trained on the AFLW dataset only and tested on the FDDB benchmark with fine-tuning and on the AFW benchmark without fine-tuning. The proposed method obtains very competitive state-of-the-art performance in the two benchmarks.", "Face detection and alignment in unconstrained environment are challenging due to various poses, illuminations, and occlusions. Recent studies show that deep learning approaches can achieve impressive performance on these two tasks. 
In this letter, we propose a deep cascaded multitask framework that exploits the inherent correlation between detection and alignment to boost up their performance. In particular, our framework leverages a cascaded architecture with three stages of carefully designed deep convolutional networks to predict face and landmark location in a coarse-to-fine manner. In addition, we propose a new online hard sample mining strategy that further improves the performance in practice. Our method achieves superior accuracy over the state-of-the-art techniques on the challenging face detection dataset and benchmark and WIDER FACE benchmarks for face detection, and annotated facial landmarks in the wild benchmark for face alignment, while keeps real-time performance." ] }
1708.05271
2743573407
Image captioning often requires a large set of training image-sentence pairs. In practice, however, acquiring sufficient training pairs is always expensive, making the recent captioning models limited in their ability to describe objects outside of training corpora (i.e., novel objects). In this paper, we present Long Short-Term Memory with Copying Mechanism (LSTM-C) --- a new architecture that incorporates copying into the Convolutional Neural Networks (CNN) plus Recurrent Neural Networks (RNN) image captioning framework, for describing novel objects in captions. Specifically, freely available object recognition datasets are leveraged to develop classifiers for novel objects. Our LSTM-C then nicely integrates the standard word-by-word sentence generation by a decoder RNN with copying mechanism which may instead select words from novel objects at proper places in the output sentence. Extensive experiments are conducted on both MSCOCO image captioning and ImageNet datasets, demonstrating the ability of our proposed LSTM-C architecture to describe novel objects. Furthermore, superior results are reported when compared to state-of-the-art deep models.
The research on image captioning has proceeded along three different dimensions: template-based methods @cite_28 @cite_26 @cite_16 , search-based approaches @cite_24 @cite_19 @cite_3 , and language-based models @cite_10 @cite_6 @cite_14 @cite_9 @cite_0 @cite_13 @cite_12 .
{ "cite_N": [ "@cite_13", "@cite_26", "@cite_14", "@cite_28", "@cite_9", "@cite_3", "@cite_6", "@cite_24", "@cite_19", "@cite_0", "@cite_16", "@cite_10", "@cite_12" ], "mid": [ "2950477994", "8316075", "2951912364", "1969616664", "2404394533", "2109586012", "2171361956", "2952782394", "1897761818", "2950178297", "1858383477", "2951183276", "2953022248" ], "abstract": [ "Automatically describing an image with a natural language has been an emerging challenge in both fields of computer vision and natural language processing. In this paper, we present Long Short-Term Memory with Attributes (LSTM-A) - a novel architecture that integrates attributes into the successful Convolutional Neural Networks (CNNs) plus Recurrent Neural Networks (RNNs) image captioning framework, by training them in an end-to-end manner. To incorporate attributes, we construct variants of architectures by feeding image representations and attributes into RNNs in different ways to explore the mutual but also fuzzy relationship between them. Extensive experiments are conducted on COCO image captioning dataset and our framework achieves superior results when compared to state-of-the-art deep models. Most remarkably, we obtain METEOR CIDEr-D of 25.2 98.6 on testing data of widely used and publicly available splits in (Karpathy & Fei-Fei, 2015) when extracting image representations by GoogleNet and achieve to date top-1 performance on COCO captioning Leaderboard.", "This paper introduces a novel generation system that composes humanlike descriptions of images from computer vision detections. By leveraging syntactically informed word co-occurrence statistics, the generator filters and constrains the noisy detections output from a vision system to generate syntactic trees that detail what the computer vision system sees. 
Results show that the generation system outperforms state-of-the-art systems, automatically generating some of the most natural image descriptions to date.", "Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.", "We present a system to automatically generate natural language descriptions from images. This system consists of two parts. The first part, content planning, smooths the output of computer vision-based detection and recognition algorithms with statistics mined from large pools of visually descriptive text to determine the best content words to use to describe an image. The second step, surface realization, chooses words to construct natural language sentences based on the predicted content and general statistics from natural language. We present multiple approaches for the surface realization step and evaluate each using automatic measures of similarity to human generated reference descriptions. 
We also collect forced choice human evaluations between descriptions from the proposed generation system and descriptions from competing approaches. The proposed system is very effective at producing relevant sentences for images. It also generates descriptions that are notably more true to the specific image content than previous work.", "Much of the recent progress in Vision-to-Language (V2L) problems has been achieved through a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This approach does not explicitly represent high-level semantic concepts, but rather seeks to progress directly from image features to text. We propose here a method of incorporating high-level concepts into the very successful CNN-RNN approach, and show that it achieves a significant improvement on the state-of-the-art performance in both image captioning and visual question answering. We also show that the same mechanism can be used to introduce external semantic information and that doing so further improves performance. In doing so we provide an analysis of the value of high level semantic information in V2L problems.", "We develop and demonstrate automatic image description methods using a large captioned photo collection. One contribution is our technique for the automatic collection of this new dataset – performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually relevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results. We also develop methods incorporating many state of the art, but fairly noisy, estimates of image content to produce even more pleasing results. 
Finally we introduce a new objective performance measure for image captioning.", "We introduce two multimodal neural language models: models of natural language that can be conditioned on other modalities. An image-text multimodal neural language model can be used to retrieve images given complex sentence queries, retrieve phrase descriptions given image queries, as well as generate text conditioned on images. We show that in the case of image-text modelling we can jointly learn word representations and image features by training our models together with a convolutional network. Unlike many of the existing methods, our approach can generate sentence descriptions for images without the use of templates, structured prediction, and or syntactic trees. While we focus on imagetext modelling, our algorithms can be easily applied to other modalities such as audio.", "Two recent approaches have achieved state-of-the-art results in image captioning. The first uses a pipelined process where a set of candidate words is generated by a convolutional neural network (CNN) trained on images, and then a maximum entropy (ME) language model is used to arrange these words into a coherent sentence. The second uses the penultimate activation layer of the CNN as input to a recurrent neural network (RNN) that then generates the caption sequence. In this paper, we compare the merits of these different language modeling approaches for the first time by using the same state-of-the-art CNN as input. We examine issues in the different approaches, including linguistic irregularities, caption repetition, and data set overlap. By combining key aspects of the ME and RNN methods, we achieve a new record performance over previously published results on the benchmark COCO dataset. However, the gains we see in BLEU do not translate to human judgments.", "Humans can prepare concise descriptions of pictures, focusing on what they find important. We demonstrate that automatic methods can do so too. 
We describe a system that can compute a score linking an image to a sentence. This score can be used to attach a descriptive sentence to a given image, or to obtain images that illustrate a given sentence. The score is obtained by comparing an estimate of meaning obtained from the image to one obtained from the sentence. Each estimate of meaning comes from a discriminative procedure that is learned using data. We evaluate on a novel dataset consisting of human-annotated images. While our underlying estimate of meaning is impoverished, it is sufficient to produce very good quantitative results, evaluated with a novel score that can account for synecdoche.", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "We propose a sentence generation strategy that describes images by predicting the most likely nouns, verbs, scenes and prepositions that make up the core sentence structure. The input are initial noisy estimates of the objects and scenes detected in the image using state of the art trained detectors. As predicting actions from still images directly is unreliable, we use a language model trained from the English Gigaword corpus to obtain their estimates; together with probabilities of co-located nouns, scenes and prepositions. 
We use these estimates as parameters on a HMM that models the sentence generation process, with hidden nodes as sentence components and image detections as the emissions. Experimental results show that our strategy of combining vision and language produces readable and descriptive sentences compared to naive strategies that use vision alone.", "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. 
Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.", "Automatically generating a natural language description of an image has attracted interests recently both because of its importance in practical applications and because it connects two major artificial intelligence fields: computer vision and natural language processing. Existing approaches are either top-down, which start from a gist of an image and convert it into words, or bottom-up, which come up with words describing various aspects of an image and then combine them. In this paper, we propose a new algorithm that combines both approaches through a model of semantic attention. Our algorithm learns to selectively attend to semantic concept proposals and fuse them into hidden states and outputs of recurrent neural networks. The selection and fusion form a feedback connecting the top-down and bottom-up computation. We evaluate our algorithm on two public benchmarks: Microsoft COCO and Flickr30K. Experimental results show that our algorithm significantly outperforms the state-of-the-art approaches consistently across different evaluation metrics." ] }
1708.05271
2743573407
Image captioning often requires a large set of training image-sentence pairs. In practice, however, acquiring sufficient training pairs is always expensive, making the recent captioning models limited in their ability to describe objects outside of training corpora (i.e., novel objects). In this paper, we present Long Short-Term Memory with Copying Mechanism (LSTM-C) --- a new architecture that incorporates copying into the Convolutional Neural Networks (CNN) plus Recurrent Neural Networks (RNN) image captioning framework, for describing novel objects in captions. Specifically, freely available object recognition datasets are leveraged to develop classifiers for novel objects. Our LSTM-C then nicely integrates the standard word-by-word sentence generation by a decoder RNN with copying mechanism which may instead select words from novel objects at proper places in the output sentence. Extensive experiments are conducted on both MSCOCO image captioning and ImageNet datasets, demonstrating the ability of our proposed LSTM-C architecture to describe novel objects. Furthermore, superior results are reported when compared to state-of-the-art deep models.
Template-based methods predefine the template for sentence generation and split the sentence into several parts (e.g., subject, verb, and object). With such sentence fragments, many works align each part with visual content (e.g., CRF in @cite_28 and HMM in @cite_16 ) and then generate the sentence for the image. Most of them, however, depend heavily on the sentence templates and always generate sentences with rigid syntactical structure. Search-based approaches @cite_24 @cite_19 @cite_3 "generate" a sentence for an image by selecting the most semantically similar sentences from a sentence pool. This direction can indeed achieve human-level descriptions, as all the output sentences are existing human-generated ones. The need to collect human-generated sentences, however, makes the sentence pool hard to scale up.
{ "cite_N": [ "@cite_28", "@cite_3", "@cite_24", "@cite_19", "@cite_16" ], "mid": [ "1969616664", "2109586012", "2952782394", "1897761818", "1858383477" ], "abstract": [ "We present a system to automatically generate natural language descriptions from images. This system consists of two parts. The first part, content planning, smooths the output of computer vision-based detection and recognition algorithms with statistics mined from large pools of visually descriptive text to determine the best content words to use to describe an image. The second step, surface realization, chooses words to construct natural language sentences based on the predicted content and general statistics from natural language. We present multiple approaches for the surface realization step and evaluate each using automatic measures of similarity to human generated reference descriptions. We also collect forced choice human evaluations between descriptions from the proposed generation system and descriptions from competing approaches. The proposed system is very effective at producing relevant sentences for images. It also generates descriptions that are notably more true to the specific image content than previous work.", "We develop and demonstrate automatic image description methods using a large captioned photo collection. One contribution is our technique for the automatic collection of this new dataset – performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually relevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results. We also develop methods incorporating many state of the art, but fairly noisy, estimates of image content to produce even more pleasing results. 
Finally we introduce a new objective performance measure for image captioning.", "Two recent approaches have achieved state-of-the-art results in image captioning. The first uses a pipelined process where a set of candidate words is generated by a convolutional neural network (CNN) trained on images, and then a maximum entropy (ME) language model is used to arrange these words into a coherent sentence. The second uses the penultimate activation layer of the CNN as input to a recurrent neural network (RNN) that then generates the caption sequence. In this paper, we compare the merits of these different language modeling approaches for the first time by using the same state-of-the-art CNN as input. We examine issues in the different approaches, including linguistic irregularities, caption repetition, and data set overlap. By combining key aspects of the ME and RNN methods, we achieve a new record performance over previously published results on the benchmark COCO dataset. However, the gains we see in BLEU do not translate to human judgments.", "Humans can prepare concise descriptions of pictures, focusing on what they find important. We demonstrate that automatic methods can do so too. We describe a system that can compute a score linking an image to a sentence. This score can be used to attach a descriptive sentence to a given image, or to obtain images that illustrate a given sentence. The score is obtained by comparing an estimate of meaning obtained from the image to one obtained from the sentence. Each estimate of meaning comes from a discriminative procedure that is learned us-ingdata. We evaluate on a novel dataset consisting of human-annotated images. 
While our underlying estimate of meaning is impoverished, it is sufficient to produce very good quantitative results, evaluated with a novel score that can account for synecdoche.", "We propose a sentence generation strategy that describes images by predicting the most likely nouns, verbs, scenes and prepositions that make up the core sentence structure. The input are initial noisy estimates of the objects and scenes detected in the image using state of the art trained detectors. As predicting actions from still images directly is unreliable, we use a language model trained from the English Gigaword corpus to obtain their estimates; together with probabilities of co-located nouns, scenes and prepositions. We use these estimates as parameters on a HMM that models the sentence generation process, with hidden nodes as sentence components and image detections as the emissions. Experimental results show that our strategy of combining vision and language produces readable and descriptive sentences compared to naive strategies that use vision alone." ] }
1708.05271
2743573407
Image captioning often requires a large set of training image-sentence pairs. In practice, however, acquiring sufficient training pairs is always expensive, making the recent captioning models limited in their ability to describe objects outside of training corpora (i.e., novel objects). In this paper, we present Long Short-Term Memory with Copying Mechanism (LSTM-C) --- a new architecture that incorporates copying into the Convolutional Neural Networks (CNN) plus Recurrent Neural Networks (RNN) image captioning framework, for describing novel objects in captions. Specifically, freely available object recognition datasets are leveraged to develop classifiers for novel objects. Our LSTM-C then nicely integrates the standard word-by-word sentence generation by a decoder RNN with copying mechanism which may instead select words from novel objects at proper places in the output sentence. Extensive experiments are conducted on both MSCOCO image captioning and ImageNet datasets, demonstrating the ability of our proposed LSTM-C architecture to describe novel objects. Furthermore, superior results are reported when compared to state-of-the-art deep models.
Different from template-based and search-based models, language-based models aim to learn the probability distribution over the common space of visual content and textual sentences, generating novel sentences with more flexible syntactical structures. In this direction, recent works explore such probability distributions mainly using neural networks and have achieved promising results on the image captioning task. Kiros et al. @cite_6 employ neural networks to generate a sentence for an image by proposing a multimodal log-bilinear neural language model. In @cite_14 , Vinyals et al. propose an end-to-end neural network architecture utilizing an LSTM to generate a sentence for an image, which is further incorporated with an attention mechanism in @cite_0 to automatically focus on salient objects when generating the corresponding words. More recently, in @cite_9 , high-level concepts/attributes are shown to yield clear improvements on the image captioning task when injected into an existing state-of-the-art RNN-based model. Such high-level attributes are further utilized as semantic attention in @cite_12 and as representations complementary to visual features in @cite_27 @cite_13 to enhance image/video captioning.
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_6", "@cite_0", "@cite_27", "@cite_13", "@cite_12" ], "mid": [ "2951912364", "2404394533", "2171361956", "2950178297", "2951159095", "2950477994", "2953022248" ], "abstract": [ "Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.", "Much of the recent progress in Vision-to-Language (V2L) problems has been achieved through a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This approach does not explicitly represent high-level semantic concepts, but rather seeks to progress directly from image features to text. We propose here a method of incorporating high-level concepts into the very successful CNN-RNN approach, and show that it achieves a significant improvement on the state-of-the-art performance in both image captioning and visual question answering. 
We also show that the same mechanism can be used to introduce external semantic information and that doing so further improves performance. In doing so we provide an analysis of the value of high level semantic information in V2L problems.", "We introduce two multimodal neural language models: models of natural language that can be conditioned on other modalities. An image-text multimodal neural language model can be used to retrieve images given complex sentence queries, retrieve phrase descriptions given image queries, as well as generate text conditioned on images. We show that in the case of image-text modelling we can jointly learn word representations and image features by training our models together with a convolutional network. Unlike many of the existing methods, our approach can generate sentence descriptions for images without the use of templates, structured prediction, and or syntactic trees. While we focus on imagetext modelling, our algorithms can be easily applied to other modalities such as audio.", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "Automatically generating natural language descriptions of videos plays a fundamental challenge for computer vision community. 
Most recent progress in this problem has been achieved through employing 2-D and or 3-D Convolutional Neural Networks (CNN) to encode video content and Recurrent Neural Networks (RNN) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA)---a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes play a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. Our proposed LSTM-TSA achieves to-date the best published performance in sentence generation on MSVD: 52.8 and 74.0 in terms of BLEU@4 and CIDEr-D. Superior results when compared to state-of-the-art methods are also reported on M-VAD and MPII-MD.", "Automatically describing an image with a natural language has been an emerging challenge in both fields of computer vision and natural language processing. In this paper, we present Long Short-Term Memory with Attributes (LSTM-A) - a novel architecture that integrates attributes into the successful Convolutional Neural Networks (CNNs) plus Recurrent Neural Networks (RNNs) image captioning framework, by training them in an end-to-end manner. To incorporate attributes, we construct variants of architectures by feeding image representations and attributes into RNNs in different ways to explore the mutual but also fuzzy relationship between them. 
Extensive experiments are conducted on COCO image captioning dataset and our framework achieves superior results when compared to state-of-the-art deep models. Most remarkably, we obtain METEOR CIDEr-D of 25.2 98.6 on testing data of widely used and publicly available splits in (Karpathy & Fei-Fei, 2015) when extracting image representations by GoogleNet and achieve to date top-1 performance on COCO captioning Leaderboard.", "Automatically generating a natural language description of an image has attracted interests recently both because of its importance in practical applications and because it connects two major artificial intelligence fields: computer vision and natural language processing. Existing approaches are either top-down, which start from a gist of an image and convert it into words, or bottom-up, which come up with words describing various aspects of an image and then combine them. In this paper, we propose a new algorithm that combines both approaches through a model of semantic attention. Our algorithm learns to selectively attend to semantic concept proposals and fuse them into hidden states and outputs of recurrent neural networks. The selection and fusion form a feedback connecting the top-down and bottom-up computation. We evaluate our algorithm on two public benchmarks: Microsoft COCO and Flickr30K. Experimental results show that our algorithm significantly outperforms the state-of-the-art approaches consistently across different evaluation metrics." ] }
1708.05271
2743573407
Image captioning often requires a large set of training image-sentence pairs. In practice, however, acquiring sufficient training pairs is always expensive, making the recent captioning models limited in their ability to describe objects outside of training corpora (i.e., novel objects). In this paper, we present Long Short-Term Memory with Copying Mechanism (LSTM-C) --- a new architecture that incorporates copying into the Convolutional Neural Networks (CNN) plus Recurrent Neural Networks (RNN) image captioning framework, for describing novel objects in captions. Specifically, freely available object recognition datasets are leveraged to develop classifiers for novel objects. Our LSTM-C then nicely integrates the standard word-by-word sentence generation by a decoder RNN with copying mechanism which may instead select words from novel objects at proper places in the output sentence. Extensive experiments are conducted on both MSCOCO image captioning and ImageNet datasets, demonstrating the ability of our proposed LSTM-C architecture to describe novel objects. Furthermore, superior results are reported when compared to state-of-the-art deep models.
Novel object captioning is a new problem that has received increasing attention recently; it leverages additional paired image-sentence data @cite_4 or unpaired image/text data @cite_8 @cite_15 to describe novel objects within existing RNN-based image captioning frameworks. @cite_4 is one of the early works that enlarges the original limited word dictionary to describe novel objects using only a few paired image-sentence examples. In particular, a transposed weight sharing scheme is proposed to avoid extensive retraining. In contrast, with largely available unpaired image/text data (e.g., ImageNet and Wikipedia), Hendricks et al. @cite_8 explicitly transfer the knowledge of semantically related objects to compose descriptions of novel objects in the proposed Deep Compositional Captioner (DCC). The DCC model is further extended to an end-to-end system in @cite_15 by simultaneously optimizing the visual recognition network, the LSTM-based language model, and the image captioning network on different sources of data.
{ "cite_N": [ "@cite_15", "@cite_4", "@cite_8" ], "mid": [ "2463508871", "2953158660", "2952155606" ], "abstract": [ "Recent captioning models are limited in their ability to scale and describe concepts unseen in paired image-text corpora. We propose the Novel Object Captioner (NOC), a deep visual semantic captioning model that can describe a large number of object categories not present in existing image-caption datasets. Our model takes advantage of external sources -- labeled images from object recognition datasets, and semantic knowledge extracted from unannotated text. We propose minimizing a joint objective which can learn from these diverse data sources and leverage distributional semantic embeddings, enabling the model to generalize and describe novel objects outside of image-caption datasets. We demonstrate that our model exploits semantic information to generate captions for hundreds of object categories in the ImageNet object recognition dataset that are not observed in MSCOCO image-caption training data, as well as many categories that are observed very rarely. Both automatic evaluations and human judgements show that our model considerably outperforms prior work in being able to describe many more categories of objects.", "In this paper, we address the task of learning novel visual concepts, and their interactions with other concepts, from a few images with sentence descriptions. Using linguistic context and visual features, our method is able to efficiently hypothesize the semantic meaning of new words and add them to its word dictionary so that they can be used to describe images which contain these novel concepts. Our method has an image captioning module based on m-RNN with several improvements. In particular, we propose a transposed weight sharing scheme, which not only improves performance on image captioning, but also makes the model more suitable for the novel concept learning task. We propose methods to prevent overfitting the new concepts. 
In addition, three novel concept datasets are constructed for this new task. In the experiments, we show that our method effectively learns novel visual concepts from a few examples without disturbing the previously learned concepts. The project page is this http URL", "While recent deep neural network models have achieved promising results on the image captioning task, they rely largely on the availability of corpora with paired image and sentence captions to describe objects in context. In this work, we propose the Deep Compositional Captioner (DCC) to address the task of generating descriptions of novel objects which are not present in paired image-sentence datasets. Our method achieves this by leveraging large object recognition datasets and external text corpora and by transferring knowledge between semantically similar concepts. Current deep caption models can only describe objects contained in paired image-sentence corpora, despite the fact that they are pre-trained with large object recognition datasets, namely ImageNet. In contrast, our model can compose sentences that describe novel objects and their interactions with other objects. We demonstrate our model's ability to describe novel concepts by empirically evaluating its performance on MSCOCO and show qualitative results on ImageNet images of objects for which no paired image-caption data exist. Further, we extend our approach to generate descriptions of objects in video clips. Our results show that DCC has distinct advantages over existing image and video captioning approaches for generating descriptions of new objects in context." ] }
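The copying mechanisms discussed in the record above share one core idea: at each decoding step, blend the decoder's generation distribution with a distribution over words copied from detected novel objects. The sketch below is a deliberately simplified, hypothetical illustration of that blend, not the actual LSTM-C architecture; the scalar `gate`, the score vectors, and the toy vocabularies are all invented for exposition.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def mix_generate_copy(gen_logits, vocab, copy_scores, copy_words, gate):
    """Blend a generation distribution over `vocab` with a copy distribution
    over detected novel-object words, weighted by a gate in [0, 1].

    The result is a single probability distribution over the union of both
    word sets, mirroring (in spirit) how a copy mechanism lets out-of-vocabulary
    object names appear in the output sentence.
    """
    p_gen = (1.0 - gate) * softmax(gen_logits)      # mass kept for generation
    p_copy = gate * softmax(copy_scores)            # mass routed to copying
    dist = {w: p_gen[i] for i, w in enumerate(vocab)}
    for w, pc in zip(copy_words, p_copy):
        dist[w] = dist.get(w, 0.0) + pc             # copied word may also be in vocab
    return dist
```

Because both components are proper distributions scaled by `gate` and `1 - gate`, the mixture always sums to one, and a novel word such as a detected object label can receive probability even though the generation vocabulary has never seen it.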
1708.05340
2748080090
Commercial off the shelf (COTS) 3D scanners are capable of generating point clouds covering visible portions of a face with sub-millimeter accuracy at close range, but lack the coverage and specialized anatomic registration provided by more expensive 3D facial scanners. We demonstrate an effective pipeline for joint alignment of multiple unstructured 3D point clouds and registration to a parameterized 3D model which represents shape variation of the human head. Most algorithms separate the problems of pose estimation and mesh warping, however we propose a new iterative method where these steps are interwoven. Error decreases with each iteration, showing the proposed approach is effective in improving geometry and alignment. The approach described is used to align the NDOff-2007 dataset, which contains 7,358 individual scans at various poses of 396 subjects. The dataset has a number of full profile scans which are correctly aligned and contribute directly to the associated mesh geometry. The dataset in its raw form contains a significant number of mislabeled scans, which are identified and corrected based on alignment error using the proposed algorithm. The average point to surface distance between the aligned scans and the produced geometries is one half millimeter.
The proposed alignment method begins with sparse localized landmarks, but since dense 3D information is available and real-time performance is not required, additional steps refine the initial alignment by registering each scan to the subject-specific mesh geometry. Mesh geometry is computed by finding a set of 3D offsets that express the local difference between the scans and the base mesh. These offsets are first used to warp a 3DMM using precomputed PCA components. Direct (unconstrained) mesh warping without the PCA model is performed as a final step by a method similar to the one described by Amberg et al. @cite_6 . The major geometry variations are described by the 3DMM warping, while the direct approach accounts for smaller, finer details not represented by the 3DMM. The significant warping already achieved with the 3DMM PCA components removes the need for a decreasing stiffness parameter when estimating the direct warping.
{ "cite_N": [ "@cite_6" ], "mid": [ "2168722300" ], "abstract": [ "We show how to extend the ICP framework to nonrigid registration, while retaining the convergence properties of the original algorithm. The resulting optimal step nonrigid ICP framework allows the use of different regularisations, as long as they have an adjustable stiffness parameter. The registration loops over a series of decreasing stiffness weights, and incrementally deforms the template towards the target, recovering the whole range of global and local deformations. To find the optimal deformation for a given stiffness, optimal iterative closest point steps are used. Preliminary correspondences are estimated by a nearest-point search. Then the optimal deformation of the template for these fixed correspondences and the active stiffness is calculated. Afterwards the process continues with new correspondences found by searching from the displaced template vertices. We present an algorithm using a locally affine regularisation which assigns an affine transformation to each vertex and minimises the difference in the transformation of neighbouring vertices. It is shown that for this regularisation the optimal deformation for fixed correspondences and fixed stiffness can be determined exactly and efficiently. The method succeeds for a wide range of initial conditions, and handles missing data robustly. It is compared qualitatively and quantitatively to other algorithms using synthetic examples and real world data." ] }
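The 3DMM warping step in the record above amounts to solving for PCA shape coefficients that best explain the observed vertex offsets. A minimal sketch of that least-squares fit is below; the function name, the ridge regularizer `reg`, and the array shapes are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def fit_pca_coefficients(base_mesh, target_points, components, reg=1e-3):
    """Ridge-regularized least-squares fit of 3DMM PCA coefficients.

    base_mesh:     (V, 3) mean-shape vertices
    target_points: (V, 3) scan points matched to each base vertex
    components:    (K, V, 3) precomputed PCA shape components
    reg:           Tikhonov weight keeping coefficients near the mean shape

    Returns the warped mesh and the fitted coefficient vector.
    """
    offsets = (target_points - base_mesh).reshape(-1)    # (3V,) observed offsets
    A = components.reshape(components.shape[0], -1).T    # (3V, K) linear basis
    k = A.shape[1]
    # Normal equations with ridge term: (A^T A + reg I) c = A^T offsets
    coeffs = np.linalg.solve(A.T @ A + reg * np.eye(k), A.T @ offsets)
    return base_mesh + (A @ coeffs).reshape(-1, 3), coeffs
```

The regularizer plays a role loosely analogous to the stiffness parameter in nonrigid ICP: it biases the solution toward the statistical mean shape, which is why the subsequent unconstrained warping only has to recover fine residual detail.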
1708.05286
2749129571
Stance classification determines the attitude, or stance, in a (typically short) text. The task has powerful applications, such as the detection of fake news or the automatic extraction of attitudes toward entities or events in the media. This paper describes a surprisingly simple and efficient classification approach to open stance classification in Twitter, for rumour and veracity classification. The approach profits from a novel set of automatically identifiable problem-specific features, which significantly boost classifier accuracy and achieve above state-of-the-art results on recent benchmark datasets. This calls into question the value of using complex sophisticated models for stance classification without first doing informed feature extraction.
The first study that tackles automatic stance classification is that of . With a dataset containing 10K tweets and using a Bayesian classifier and three types of features categorised as "content", "network" and "Twitter specific memes", the authors achieved an accuracy of 93.5%. use a rule-based method and show that it outperforms the approach reported by . enrich the feature sets investigated by earlier studies with features derived from the Linguistic Inquiry and Word Count (LIWC) dictionaries @cite_7 . investigate Gaussian Processes as a rumour stance classifier. For the first time, the authors also use Brown Clusters to extract the features for each tweet. Unlike the researchers above, evaluate on the rumour data released by , where they report an accuracy of 67.7%. Subsequent work has also tackled stance classification for new, unseen rumours. @cite_23 moved away from the classification of tweets in isolation, focusing instead on Twitter 'conversations' @cite_25 initiated by rumours, as part of the Pheme project @cite_13 . They looked at tree-structured conversations initiated by a rumour and followed by tweets responding to it by supporting, denying, querying or commenting on the rumour.
{ "cite_N": [ "@cite_13", "@cite_23", "@cite_25", "@cite_7" ], "mid": [ "2408389158", "", "2253306907", "2140910804" ], "abstract": [ "PHEME attempts to identify four kinds of false claim in social media and on the web, in real time: rumours, disinformation, misinformation and speculation. This brings challenges in modelling the behaviour of individual users, networks of users and information diffusion. This presentation proposal discusses the issues addressed by the project and the challenges it faces, in this emerging and rapidly-developing domain.", "", "Inspired by a European project, PHEME, that requires the close analysis of Twitter-based conversations in order to look at the spread of rumors via social media, this paper has two objectives. The first of these is to take the analysis of microblogs back to first principles and lay out what microblog analysis should look like as a foundational programme of work. The other is to describe how this is of fundamental relevance to Human-Computer Interaction's interest in grasping the constitution of people's interactions with technology within the social order. Our critical finding is that, despite some surface similarities, Twitter-based conversations are a wholly distinct social phenomenon requiring an independent analysis that treats them as unique phenomena in their own right, rather than as another species of conversation that can be handled within the framework of existing Conversation Analysis. This motivates the argument that Microblog Analysis be established as a foundationally independent programme, examining the organizational characteristics of microblogging from the ground up. We articulate how aspects of this approach have already begun to shape our design activities within the PHEME project.", "We are in the midst of a technological revolution whereby, for the first time, researchers can link daily word use to a broad array of real-world behaviors. 
This article reviews several computerized text analysis methods and describes how Linguistic Inquiry and Word Count (LIWC) was created and validated. LIWC is a transparent text analysis program that counts words in psychologically meaningful categories. Empirical results using LIWC demonstrate its ability to detect meaning in a wide variety of experimental settings, including to show attentional focus, emotionality, social relationships, thinking styles, and individual differences." ] }
1708.05237
2750317406
This paper presents a real-time face detector, named Single Shot Scale-invariant Face Detector (S @math FD), which performs superiorly on various scales of faces with a single deep neural network, especially for small faces. Specifically, we try to solve the common problem that anchor-based detectors deteriorate dramatically as the objects become smaller. We make contributions in the following three aspects: 1) proposing a scale-equitable face detection framework to handle different scales of faces well. We tile anchors on a wide range of layers to ensure that all scales of faces have enough features for detection. Besides, we design anchor scales based on the effective receptive field and a proposed equal proportion interval principle; 2) improving the recall rate of small faces by a scale compensation anchor matching strategy; 3) reducing the false positive rate of small faces via a max-out background label. As a consequence, our method achieves state-of-the-art detection performance on all the common face detection benchmarks, including the AFW, PASCAL face, FDDB and WIDER FACE datasets, and can run at 36 FPS on a Nvidia Titan X (Pascal) for VGA-resolution images.
Face detection has attracted extensive research attention in past decades. The milestone work of Viola-Jones @cite_29 uses Haar features and AdaBoost to train a cascade of face/non-face classifiers, achieving good accuracy with real-time efficiency. After that, many works focused on improving performance with more sophisticated hand-crafted features @cite_42 @cite_17 @cite_53 @cite_60 and more powerful classifiers @cite_22 @cite_37 . Besides the cascade structure, @cite_55 @cite_10 @cite_62 introduce deformable part models (DPM) into the face detection task and achieve remarkable performance. However, these methods depend heavily on the robustness of hand-crafted features and optimize each component separately, making the face detection pipeline sub-optimal.
{ "cite_N": [ "@cite_37", "@cite_62", "@cite_22", "@cite_60", "@cite_53", "@cite_29", "@cite_42", "@cite_55", "@cite_10", "@cite_17" ], "mid": [ "2118373237", "2047508432", "1984525543", "2099355420", "2041497292", "2137401668", "2247274765", "", "2056025798", "" ], "abstract": [ "Training a cascade-based face detector using boosting and Haar features is computationally expensive, often requiring weeks on single CPU machines. The bottleneck is at training and selecting Haar features for a single weak classifier, currently in minutes. Traditional techniques for training a weak classifier usually run in 0(NT log N), with N examples (approximately 10,000), and T features (approximately 40,000). We present a method to train a weak classifier in time 0(Nd2 + T), where d is the number of pixels of the probed image sub-window (usually from 350 to 500), by using only the statistics of the weighted input data. Experimental results revealed a significantly reduced training time of a weak classifier to the order of seconds. In particular, this method suffers very minimal immerse in training time with very large increases in members of Haar features, enjoying a significant gain in accuracy, even with reduced training time.", "We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. 
Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com).", "Cascades of boosted ensembles have become popular in the object detection community following their highly successful introduction in the face detector of Viola and Jones. Since then, researchers have sought to improve upon the original approach by incorporating new methods along a variety of axes (e.g. alternative boosting methods, feature sets, etc.). Nevertheless, key decisions about how many hypotheses to include in an ensemble and the appropriate balance of detection and false positive rates in the individual stages are often made by user intervention or by an automatic method that produces unnecessarily slow detectors. We propose a novel method for making these decisions, which exploits the shape of the stage ROC curves in ways that have been previously ignored. The result is a detector that is significantly faster than the one produced by the standard automatic method. When this algorithm is combined with a recycling method for reusing the outputs of early stages in later ones and with a retracing method that inserts new early rejection points in the cascade, the detection speed matches that of the best hand-crafted detector. We also exploit joint distributions over several features in weak learning to improve overall detector accuracy, and explore ways to improve training time by aggressively filtering features.", "We integrate the cascade-of-rejectors approach with the Histograms of Oriented Gradients (HoG) features to achieve a fast and accurate human detection system. The features used in our system are HoGs of variable-size blocks that capture salient features of humans automatically. Using AdaBoost for feature selection, we identify the appropriate set of blocks, from a large set of possible blocks. 
In our system, we use the integral image representation and a rejection cascade which significantly speed up the computation. For a 320 × 280 image, the system can process 5 to 30 frames per second depending on the density in which we scan the image, while maintaining an accuracy level similar to existing methods.", "Face detection has drawn much attention in recent decades since the seminal work by Viola and Jones. While many subsequences have improved the work with more powerful learning algorithms, the feature representation used for face detection still can’t meet the demand for effectively and efficiently handling faces with large appearance variance in the wild. To solve this bottleneck, we borrow the concept of channel features to the face detection domain, which extends the image channel to diverse types like gradient magnitude and oriented gradient histograms and therefore encodes rich information in a simple form. We adopt a novel variant called aggregate channel features, make a full exploration of feature design, and discover a multiscale version of features with better performance. To deal with poses of faces in the wild, we propose a multi-view detection approach featuring score re-ranking and detection adjustment. Following the learning pipelines in ViolaJones framework, the multi-view face detector using aggregate channel features surpasses current state-of-the-art detectors on AFW and FDDB testsets, while runs at 42 FPS", "This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. 
The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; , 1998; Schneiderman and Kanade, 2000; , 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second.", "We propose a method to address challenges in unconstrained face detection, such as arbitrary pose variations and occlusions. First, a new image feature called Normalized Pixel Difference (NPD) is proposed. NPD feature is computed as the difference to sum ratio between two pixel values, inspired by the Weber Fraction in experimental psychology. The new feature is scale invariant, bounded, and is able to reconstruct the original image. Second, we propose a deep quadratic tree to learn the optimal subset of NPD features and their combinations, so that complex face manifolds can be partitioned by the learned rules. This way, only a single soft-cascade classifier is needed to handle unconstrained face detection. Furthermore, we show that the NPD features can be efficiently obtained from a look up table, and the detection template can be easily scaled, making the proposed face detector very fast. 
Experimental results on three public face datasets (FDDB, GENKI, and CMU-MIT) show that the proposed method achieves state-of-the-art performance in detecting unconstrained faces with arbitrary pose variations and occlusions in cluttered scenes.", "", "This paper solves the speed bottleneck of deformable part model (DPM), while maintaining the accuracy in detection on challenging datasets. Three prohibitive steps in cascade version of DPM are accelerated, including 2D correlation between root filter and feature map, cascade part pruning and HOG feature extraction. For 2D correlation, the root filter is constrained to be low rank, so that 2D correlation can be calculated by more efficient linear combination of 1D correlations. A proximal gradient algorithm is adopted to progressively learn the low rank filter in a discriminative manner. For cascade part pruning, neighborhood aware cascade is proposed to capture the dependence in neighborhood regions for aggressive pruning. Instead of explicit computation of part scores, hypotheses can be pruned by scores of neighborhoods under the first order approximation. For HOG feature extraction, look-up tables are constructed to replace expensive calculations of orientation partition and magnitude with simpler matrix index operations. Extensive experiments show that (a) the proposed method is 4 times faster than the current fastest DPM method with similar accuracy on Pascal VOC, (b) the proposed method achieves state-of-the-art accuracy on pedestrian and face detection task with frame-rate speed.", "" ] }
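The Normalized Pixel Difference feature cited above is simple enough to sketch directly. A hypothetical pure-Python version, including the 256×256 look-up table the abstract mentions for 8-bit images (names are illustrative, not from the paper's code):

```python
def npd(a, b):
    """Normalized Pixel Difference: (a - b) / (a + b), defined as 0 at (0, 0).
    Bounded in [-1, 1] and scale invariant: npd(k*a, k*b) == npd(a, b)."""
    if a == 0 and b == 0:
        return 0.0
    return (a - b) / (a + b)

# For 8-bit images the feature can be precomputed once into a 256x256
# look-up table, so evaluating it at detection time is a single lookup.
TABLE = [[npd(a, b) for b in range(256)] for a in range(256)]
```

The scale invariance is what makes the feature robust to global illumination changes, and the boundedness keeps the deep quadratic tree's split thresholds well-behaved.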
1708.05237
2750317406
This paper presents a real-time face detector, named Single Shot Scale-invariant Face Detector (S @math FD), which performs superiorly on various scales of faces with a single deep neural network, especially for small faces. Specifically, we try to solve the common problem that anchor-based detectors deteriorate dramatically as the objects become smaller. We make contributions in the following three aspects: 1) proposing a scale-equitable face detection framework to handle different scales of faces well. We tile anchors on a wide range of layers to ensure that all scales of faces have enough features for detection. Besides, we design anchor scales based on the effective receptive field and a proposed equal proportion interval principle; 2) improving the recall rate of small faces by a scale compensation anchor matching strategy; 3) reducing the false positive rate of small faces via a max-out background label. As a consequence, our method achieves state-of-the-art detection performance on all the common face detection benchmarks, including the AFW, PASCAL face, FDDB and WIDER FACE datasets, and can run at 36 FPS on a Nvidia Titan X (Pascal) for VGA-resolution images.
Recent years have witnessed the advance of CNN-based face detectors. CascadeCNN @cite_49 develops a cascade architecture built on CNNs with powerful discriminative capability and high performance. @cite_0 proposes to jointly train CascadeCNN to realize end-to-end optimization. Faceness @cite_18 trains a series of CNNs for facial attribute recognition to detect partially occluded faces. MTCNN @cite_26 proposes to jointly solve face detection and alignment using several multi-task CNNs. UnitBox @cite_40 introduces a new intersection-over-union loss function.
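UnitBox's intersection-over-union loss regresses the four bounds of a box as one unit rather than four independent variables. A toy sketch of the idea for axis-aligned boxes, using the paper's -ln(IoU) form (function names are ours):

```python
import math

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def iou_loss(pred, target):
    """UnitBox-style loss: -ln(IoU). The epsilon guards the log when the
    boxes do not overlap at all."""
    return -math.log(max(iou(pred, target), 1e-9))
```

Because all four coordinates enter the loss through a single IoU term, their gradients are coupled, which is the correlation the UnitBox abstract argues an independent l2 loss ignores.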
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_0", "@cite_40", "@cite_49" ], "mid": [ "2950557924", "2341528187", "2473640056", "2504335775", "1934410531" ], "abstract": [ "In this paper, we propose a novel deep convolutional network (DCN) that achieves outstanding performance on FDDB, PASCAL Face, and AFW. Specifically, our method achieves a high recall rate of 90.99 on the challenging FDDB benchmark, outperforming the state-of-the-art method by a large margin of 2.91 . Importantly, we consider finding faces from a new perspective through scoring facial parts responses by their spatial structure and arrangement. The scoring mechanism is carefully formulated considering challenging cases where faces are only partially visible. This consideration allows our network to detect faces under severe occlusion and unconstrained pose variation, which are the main difficulty and bottleneck of most existing face detection approaches. We show that despite the use of DCN, our network can achieve practical runtime speed.", "Face detection and alignment in unconstrained environment are challenging due to various poses, illuminations, and occlusions. Recent studies show that deep learning approaches can achieve impressive performance on these two tasks. In this letter, we propose a deep cascaded multitask framework that exploits the inherent correlation between detection and alignment to boost up their performance. In particular, our framework leverages a cascaded architecture with three stages of carefully designed deep convolutional networks to predict face and landmark location in a coarse-to-fine manner. In addition, we propose a new online hard sample mining strategy that further improves the performance in practice. 
Our method achieves superior accuracy over the state-of-the-art techniques on the challenging face detection dataset and benchmark and WIDER FACE benchmarks for face detection, and annotated facial landmarks in the wild benchmark for face alignment, while keeps real-time performance.", "Cascade has been widely used in face detection, where classifier with low computation cost can be firstly used to shrink most of the background while keeping the recall. The cascade in detection is popularized by seminal Viola-Jones framework and then widely used in other pipelines, such as DPM and CNN. However, to our best knowledge, most of the previous detection methods use cascade in a greedy manner, where previous stages in cascade are fixed when training a new stage. So optimizations of different CNNs are isolated. In this paper, we propose joint training to achieve end-to-end optimization for CNN cascade. We show that the back propagation algorithm used in training CNN can be naturally used in training CNN cascade. We present how jointly training can be conducted on naive CNN cascade and more sophisticated region proposal network (RPN) and fast R-CNN. Experiments on face detection benchmarks verify the advantages of the joint training.", "In present object detection systems, the deep convolutional neural networks (CNNs) are utilized to predict bounding boxes of object candidates, and have gained performance advantages over the traditional region proposal methods. However, existing deep CNN methods assume the object bounds to be four independent variables, which could be regressed by the l2 loss separately. Such an oversimplified assumption is contrary to the well-received observation, that those variables are correlated, resulting to less accurate localization. To address the issue, we firstly introduce a novel Intersection over Union (IoU) loss function for bounding box prediction, which regresses the four bounds of a predicted box as a whole unit. 
By taking the advantages of IoU loss and deep fully convolutional networks, the UnitBox is introduced, which performs accurate and efficient localization, shows robust to objects of varied shapes and scales, and converges fast. We apply UnitBox on face detection task and achieve the best performance among all published methods on the FDDB benchmark.", "In real-world face detection, large visual variations, such as those due to pose, expression, and lighting, demand an advanced discriminative model to accurately differentiate faces from the backgrounds. Consequently, effective models for the problem tend to be computationally prohibitive. To address these two conflicting challenges, we propose a cascade architecture built on convolutional neural networks (CNNs) with very powerful discriminative capability, while maintaining high performance. The proposed CNN cascade operates at multiple resolutions, quickly rejects the background regions in the fast low resolution stages, and carefully evaluates a small number of challenging candidates in the last high resolution stage. To improve localization effectiveness, and reduce the number of candidates at later stages, we introduce a CNN-based calibration stage after each of the detection stages in the cascade. The output of each calibration stage is used to adjust the detection window position for input to the subsequent stage. The proposed method runs at 14 FPS on a single CPU core for VGA-resolution images and 100 FPS using a GPU, and achieves state-of-the-art detection performance on two public face detection benchmarks." ] }
1708.05237
2750317406
This paper presents a real-time face detector, named Single Shot Scale-invariant Face Detector (S @math FD), which performs superiorly on various scales of faces with a single deep neural network, especially for small faces. Specifically, we try to solve the common problem that anchor-based detectors deteriorate dramatically as the objects become smaller. We make contributions in the following three aspects: 1) proposing a scale-equitable face detection framework to handle different scales of faces well. We tile anchors on a wide range of layers to ensure that all scales of faces have enough features for detection. Besides, we design anchor scales based on the effective receptive field and a proposed equal proportion interval principle; 2) improving the recall rate of small faces by a scale compensation anchor matching strategy; 3) reducing the false positive rate of small faces via a max-out background label. As a consequence, our method achieves state-of-the-art detection performance on all the common face detection benchmarks, including the AFW, PASCAL face, FDDB and WIDER FACE datasets, and can run at 36 FPS on a Nvidia Titan X (Pascal) for VGA-resolution images.
Additionally, face detection has inherited some achievements from generic object detection tasks. @cite_34 applies Faster R-CNN in face detection and achieves promising results. CMS-RCNN @cite_30 uses Faster R-CNN in face detection with body contextual information. Convnet @cite_31 integrates a CNN with a 3D face model in an end-to-end multi-task learning framework. @cite_41 combines Faster R-CNN with hard negative mining and achieves significant boosts in face detection performance. STN @cite_9 proposes a new supervised transformer network and a ROI convolution with RPN for face detection. @cite_27 presents several effective strategies to improve Faster R-CNN for face detection tasks. In this paper, inspired by the RPN in Faster R-CNN @cite_4 and the multi-scale mechanism in SSD @cite_45 , we develop a state-of-the-art face detector with real-time speed.
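The anchor tiling shared by RPN, SSD, and the S@math FD abstract above can be sketched concretely. S@math FD sets each layer's anchor scale to 4× its stride, so anchors of every size tile the image with the same density (the equal-proportion interval principle); the sketch below assumes one square anchor per feature-map cell, with names of our choosing:

```python
def tile_anchors(fmap_w, fmap_h, stride, scale):
    """Center one square anchor of side `scale` on each feature-map cell.
    With scale = 4 * stride (as in S3FD), every detection layer covers the
    image with the same anchor density regardless of its resolution."""
    anchors = []
    for j in range(fmap_h):
        for i in range(fmap_w):
            cx, cy = (i + 0.5) * stride, (j + 0.5) * stride
            anchors.append((cx - scale / 2, cy - scale / 2,
                            cx + scale / 2, cy + scale / 2))
    return anchors
```

For instance, a stride-4 layer on a 640-pixel-wide input yields a 160-cell-wide grid of 16-pixel anchors, while a stride-8 layer yields an 80-cell grid of 32-pixel anchors: half as many anchors, each covering four times the area.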
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_41", "@cite_9", "@cite_27", "@cite_45", "@cite_31", "@cite_34" ], "mid": [ "2432917172", "2613718673", "2477332545", "", "2585123518", "2193145675", "2417750831", "2438869444" ], "abstract": [ "Robust face detection in the wild is one of the ultimate components to support various facial related problems, i.e., unconstrained face recognition, facial periocular recognition, facial landmarking and pose estimation, facial expression recognition, 3D facial model construction, etc. Although the face detection problem has been intensely studied for decades with various commercial applications, it still meets problems in some real-world scenarios due to numerous challenges, e.g., heavy facial occlusions, extremely low resolutions, strong illumination, exceptional pose variations, image or video compression artifacts, etc. In this paper, we present a face detection approach named Contextual Multi-Scale Region-based Convolution Neural Network (CMS-RCNN) to robustly solve the problems mentioned above. Similar to the region-based CNNs, our proposed network consists of the region proposal component and the region-of-interest (RoI) detection component. However, far apart of that network, there are two main contributions in our proposed network that play a significant role to achieve the state-of-the-art performance in face detection. First, the multi-scale information is grouped both in region proposal and RoI detection to deal with tiny face regions. Second, our proposed network allows explicit body contextual reasoning in the network inspired from the intuition of human vision system. The proposed approach is benchmarked on two recent challenging face detection databases, i.e., the WIDER FACE Dataset which contains high degree of variability, as well as the Face Detection Dataset and Benchmark (FDDB). 
The experimental results show that our proposed approach trained on WIDER FACE Dataset outperforms strong baselines on WIDER FACE Dataset by a large margin, and consistently achieves competitive results on FDDB against the recent state-of-the-art face detection methods.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "Recently significant performance improvement in face detection was made possible by deeply trained convolutional networks. In this report, a novel approach for training state-of-the-art face detector is described. The key is to exploit the idea of hard negative mining and iteratively update the Faster R-CNN based face detector with the hard negatives harvested from a large set of background examples. 
We demonstrate that our face detector outperforms state-of-the-art detectors on the FDDB dataset, which is the de facto standard for evaluating face detection algorithms.", "", "Abstract In this paper, we present a new face detection scheme using deep learning and achieve the state-of-the-art detection performance on the well-known FDDB face detection benchmark evaluation. In particular, we improve the state-of-the-art Faster RCNN framework by combining a number of strategies, including feature concatenation, hard negative mining, multi-scale training, model pre-training, and proper calibration of key parameters. As a consequence, the proposed scheme obtained the state-of-the-art face detection performance and was ranked as one of the best models in terms of ROC curves of the published methods on the FDDB benchmark. 1", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. 
Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.", "This paper presents a method for face detection in the wild, which integrates a ConvNet and a 3D mean face model in an end-to-end multi-task discriminative learning framework. The 3D mean face model is predefined and fixed (e.g., we used the one provided in the AFLW dataset). The ConvNet consists of two components: (i) The face proposal component computes face bounding box proposals via estimating facial key-points and the 3D transformation (rotation and translation) parameters for each predicted key-point w.r.t. the 3D mean face model. (ii) The face verification component computes detection results by pruning and refining proposals based on facial key-points based configuration pooling. The proposed method addresses two issues in adapting state-of-the-art generic object detection ConvNets (e.g., faster R-CNN) for face detection: (i) One is to eliminate the heuristic design of predefined anchor boxes in the region proposals network (RPN) by exploiting a 3D mean face model. (ii) The other is to replace the generic RoI (Region-of-Interest) pooling layer with a configuration pooling layer to respect underlying object structures. The multi-task loss consists of three terms: the classification Softmax loss and the location smooth (l_1 )-losses of both the facial key-points and the face bounding boxes. 
In experiments, our ConvNet is trained on the AFLW dataset only and tested on the FDDB benchmark with fine-tuning and on the AFW benchmark without fine-tuning. The proposed method obtains very competitive state-of-the-art performance in the two benchmarks.", "The Faster R-CNN has recently demonstrated impressive results on various object detection benchmarks. By training a Faster R-CNN model on the large scale WIDER face dataset, we report state-of-the-art results on two widely used face detection benchmarks, FDDB and the recently released IJB-A." ] }
1906.00742
2947283491
Word embeddings learnt from massive text collections have demonstrated significant levels of discriminative biases such as gender, racial or ethnic biases, which in turn bias the down-stream NLP applications that use those word embeddings. Taking gender-bias as a working example, we propose a debiasing method that preserves non-discriminative gender-related information, while removing stereotypical discriminative gender biases from pre-trained word embeddings. Specifically, we consider four types of information: feminine, masculine, gender-neutral and stereotypical, which represent the relationship between gender vs. bias, and propose a debiasing method that (a) preserves the gender-related information in feminine and masculine words, (b) preserves the neutrality in gender-neutral words, and (c) removes the biases from stereotypical words. Experimental results on several previously proposed benchmark datasets show that our proposed method can debias pre-trained word embeddings better than existing SoTA methods proposed for debiasing word embeddings while preserving gender-related but non-discriminative information.
Gender-Neutral Global Vectors (GN-GloVe) were proposed by adding a constraint to the Global Vectors (GloVe) @cite_12 objective such that the gender-related information is confined to a sub-vector. During optimisation, the squared @math distance between gender-related sub-vectors is maximised, while simultaneously minimising the GloVe objective. GN-GloVe learns gender-debiased word embeddings from scratch from a given corpus, and cannot be used to debias pre-trained word embeddings. Moreover, similar to the hard and soft debiasing methods described above, GN-GloVe uses pre-defined lists of feminine, masculine and gender-neutral words and debiases only the words in these lists.
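The sub-vector constraint can be illustrated with a toy regulariser. This is a pure-Python sketch of the idea only (the real GN-GloVe optimises such a term jointly with the full GloVe objective over the corpus; all names here are ours):

```python
def gn_glove_reg(w, dim_g, masc_ids, fem_ids):
    """Toy version of the GN-GloVe constraint: gender information is
    confined to the last `dim_g` coordinates of each word vector, and
    masculine vs. feminine sub-vectors are pushed apart via squared L2
    distance. Returned negated, so minimising it maximises the distance."""
    def sub(i):  # gender sub-vector of word i
        return w[i][-dim_g:]
    dist = 0.0
    for m in masc_ids:
        for f in fem_ids:
            dist += sum((a - b) ** 2 for a, b in zip(sub(m), sub(f)))
    return -dist
```

Downstream tasks that should be gender-blind can then simply drop the last `dim_g` coordinates, which is the practical payoff of confining gender to a known sub-vector.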
{ "cite_N": [ "@cite_12" ], "mid": [ "2250539671" ], "abstract": [ "Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition." ] }
1906.00742
2947283491
Word embeddings learnt from massive text collections have demonstrated significant levels of discriminative biases such as gender, racial or ethnic biases, which in turn bias the down-stream NLP applications that use those word embeddings. Taking gender-bias as a working example, we propose a debiasing method that preserves non-discriminative gender-related information, while removing stereotypical discriminative gender biases from pre-trained word embeddings. Specifically, we consider four types of information: , , and , which represent the relationship between gender vs. bias, and propose a debiasing method that (a) preserves the gender-related information in feminine and masculine words, (b) preserves the neutrality in gender-neutral words, and (c) removes the biases from stereotypical words. Experimental results on several previously proposed benchmark datasets show that our proposed method can debias pre-trained word embeddings better than existing SoTA methods proposed for debiasing word embeddings while preserving gender-related but non-discriminative information.
Debiasing can be seen as a problem of removing information related to a protected attribute such as gender, for which adversarial learning methods @cite_17 @cite_5 @cite_30 have been proposed in the fairness-aware machine learning community @cite_33 . In these approaches, inputs are first encoded, and then two classifiers are trained: one that uses the encoded input to predict the target NLP task, and another that uses the encoded input to predict the protected attribute. The two classifiers and the encoder are learnt jointly such that the accuracy of the target-task predictor is maximised, while minimising the accuracy of the protected-attribute predictor. However, it has been shown that although it is possible to obtain chance-level development-set accuracy for the protected attribute during training, a post-hoc classifier trained on the encoded inputs can still reach substantially high accuracies for the protected attributes, leading to the conclusion that adversarial learning alone does not guarantee invariant representations for the protected attributes.
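The minimax structure described above can be made concrete with a one-parameter toy. The encoder's objective combines the task loss with the adversary's loss under a flipped sign, which is exactly the effect a gradient-reversal layer has in the real, neural-network setting; everything below is an assumed illustration, not any paper's implementation:

```python
def encoder(x, w):          # toy 1-parameter "encoder"
    return w * x

def task_loss(z, y):        # target task: regress y from the encoding
    return (z - y) ** 2

def adv_loss(z, a):         # adversary: recover protected attribute a
    return (z - a) ** 2

def encoder_grad(x, y, a, w, lam=0.5, eps=1e-6):
    """Finite-difference gradient of the encoder's minimax objective
    task_loss - lam * adv_loss w.r.t. the encoder weight. The minus sign
    plays the role of a gradient-reversal layer: the encoder is pushed to
    *increase* the adversary's loss while decreasing its own."""
    def obj(w_):
        z = encoder(x, w_)
        return task_loss(z, y) - lam * adv_loss(z, a)
    return (obj(w + eps) - obj(w - eps)) / (2 * eps)
```

In the full setting the adversary is simultaneously trained to minimise its own loss on the current encodings, which is what makes the game adversarial rather than a fixed penalty.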
{ "cite_N": [ "@cite_30", "@cite_5", "@cite_33", "@cite_17" ], "mid": [ "2963879260", "2893425640", "2157928966", "2963446520" ], "abstract": [ "", "Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. Here, we show that applying machine learning to ordinary human language results in human-like semantic biases. We replicated a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web. Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names. Our methods hold promise for identifying and addressing sources of bias in culture, including technology.", "Classification models usually make predictions on the basis of training data. If the training data is biased towards certain groups or classes of objects, e.g., there is racial discrimination towards black people, the learned model will also show discriminatory behavior towards that particular community. This partial attitude of the learned model may lead to biased outcomes when labeling future unlabeled data objects. Often, however, impartial classification results are desired or even required by law for future data objects in spite of having biased training data. In this paper, we tackle this problem by introducing a new classification scheme for learning unbiased models on biased training data. Our method is based on massaging the dataset by making the least intrusive modifications which lead to an unbiased dataset. On this modified dataset we then learn a non-discriminating classifier. 
The proposed method has been implemented and experimental results on a credit approval dataset show promising results: in all experiments our method is able to reduce the prejudicial behavior for future classification significantly without losing too much predictive accuracy.", "Learning meaningful representations that maintain the content necessary for a particular task while filtering away detrimental variations is a problem of great interest in machine learning. In this paper, we tackle the problem of learning representations invariant to a specific factor or trait of data. The representation learning process is formulated as an adversarial minimax game. We analyze the optimal equilibrium of such a game and find that it amounts to maximizing the uncertainty of inferring the detrimental factor given the representation while maximizing the certainty of making task-specific predictions. On three benchmark tasks, namely fair and bias-free classification, language-independent generation, and lighting-independent image classification, we show that the proposed framework induces an invariant representation, and leads to better generalization evidenced by the improved performance." ] }
1906.00939
2947576064
Prediction of user traffic in cellular networks has attracted profound attention for improving resource utilization. In this paper, we study the problem of network traffic prediction and classification by employing standard machine learning and statistical learning time series prediction methods, including long short-term memory (LSTM) and autoregressive integrated moving average (ARIMA), respectively. We present an extensive experimental evaluation of the designed tools over a real network traffic dataset. Within this analysis, we explore the impact of different parameters on the effectiveness of the predictions. We further extend our analysis to the problem of network traffic classification and prediction of traffic bursts. The results, on the one hand, demonstrate superior performance of LSTM over ARIMA in general, especially when the length of the training time series is high enough and it is augmented by a wisely selected set of features. On the other hand, the results shed light on the circumstances in which ARIMA performs close to the optimal with lower complexity.
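The ARIMA side of the comparison in the abstract above rests on autoregression: predicting the next traffic sample as a linear combination of the last p samples. The sketch below is a minimal toy illustration of that AR component only (no differencing or moving-average terms), not the paper's implementation; the window length and the synthetic traffic series are assumptions for the example.

```python
# Toy sketch of the autoregressive (AR) component underlying ARIMA-style
# traffic prediction. Not the paper's code; p and the series are made up.
import numpy as np

def fit_ar(series, p):
    """Fit AR(p) coefficients by least squares over lagged windows."""
    X = np.array([series[i:i + p] for i in range(len(series) - p)])
    y = np.array(series[p:])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_next(series, coef):
    """One-step-ahead forecast from the last p observations."""
    p = len(coef)
    return float(np.dot(series[-p:], coef))

# Synthetic "hourly traffic" with a linear trend; AR(2) captures it exactly.
traffic = [10.0 + 0.5 * t for t in range(48)]
coef = fit_ar(traffic, p=2)
forecast = predict_next(traffic, coef)  # next value on the trend line
```

In practice one would use a fitted ARIMA model (e.g. from a statistics library) rather than this hand-rolled least-squares fit; the point is only that the predictor is a short linear function of recent history, which is why ARIMA is cheap compared with an LSTM.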
Traffic classification has been a hot topic in computer communication networks for more than two decades due to its vastly diverse applications in resource provisioning, billing and service prioritization, and security and anomaly detection @cite_13 @cite_15 . While different statistical and machine learning tools have been used to date for traffic classification, e.g. refer to @cite_21 and references therein, most of these works depend upon features which are either not available in encrypted traffic or cannot be extracted in real time, e.g. port number and payload data @cite_21 @cite_13 . In @cite_7 , classification of encrypted traffic using a convolutional neural network with 1400 packet-based features as well as network flow features has been investigated, which is too complex for a cellular network to apply to each user. Reviewing the state of the art reveals that there is a need for investigation of low-complexity, scalable cellular traffic classification schemes (i) without looking into the packets, due to encryption and latency, (ii) without analyzing the inter-packet arrivals for all packets, due to latency and complexity, and (iii) with as few features as possible. This research gap is addressed in this work.
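The payload-free, low-complexity features argued for above can be illustrated with a small sketch: summarizing a flow using only per-packet sizes and inter-arrival times, which remain observable even for encrypted traffic. The function and feature names here are hypothetical examples, not from any of the cited works.

```python
# Sketch of payload-independent flow features: only packet sizes and
# inter-arrival times are used, so nothing requires decrypting payloads.
# Feature names are illustrative, not from the cited papers.
import statistics

def flow_features(timestamps, sizes):
    """Summarize one flow from per-packet (timestamp, size) observations."""
    iats = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        "pkt_count": len(sizes),
        "mean_size": statistics.mean(sizes),
        "std_size": statistics.pstdev(sizes),
        "mean_iat": statistics.mean(iats) if iats else 0.0,
        "duration": timestamps[-1] - timestamps[0],
    }

# A tiny 4-packet flow: large downlink packets alternating with small ACKs.
ts = [0.00, 0.01, 0.05, 0.06]
sz = [1500, 40, 1500, 40]
feats = flow_features(ts, sz)
```

A vector like this, computed per flow, is the kind of compact input a lightweight classifier could consume without port numbers or payload inspection.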
{ "cite_N": [ "@cite_15", "@cite_21", "@cite_13", "@cite_7" ], "mid": [ "2073089243", "2750674396", "2952806211", "2891570868" ], "abstract": [ "The identification of network applications through observation of associated packet traffic flows is vital to the areas of network management and surveillance. Currently popular methods such as port number and payload-based identification exhibit a number of shortfalls. An alternative is to use machine learning (ML) techniques and identify network applications based on per-flow statistics, derived from payload-independent features such as packet length and inter-arrival time distributions. The performance impact of feature set reduction, using Consistency-based and Correlation-based feature selection, is demonstrated on Naive Bayes, C4.5, Bayesian Network and Naive Bayes Tree algorithms. We then show that it is useful to differentiate algorithms based on computational performance rather than classification accuracy alone, as although classification accuracy between the algorithms is similar, computational performance can differ significantly.", "A network traffic classifier (NTC) is an important part of current network monitoring systems, being its task to infer the network service that is currently used by a communication flow (e.g., HTTP and SIP). The detection is based on a number of features associated with the communication flow, for example, source and destination ports and bytes transmitted per packet. NTC is important, because much information about a current network flow can be learned and anticipated just by knowing its network service (required latency, traffic volume, and possible duration). This is of particular interest for the management and monitoring of Internet of Things (IoT) networks, where NTC will help to segregate traffic and behavior of heterogeneous devices and services. In this paper, we present a new technique for NTC based on a combination of deep learning models that can be used for IoT traffic. We show that a recurrent neural network (RNN) combined with a convolutional neural network (CNN) provides best detection results. The natural domain for a CNN, which is image processing, has been extended to NTC in an easy and natural way. We show that the proposed method provides better detection results than alternative algorithms without requiring any feature engineering, which is usual when applying other models. A complete study is presented on several architectures that integrate a CNN and an RNN, including the impact of the features chosen and the length of the network flows used for training.", "Traffic classification has been studied for two decades and applied to a wide range of applications from QoS provisioning and billing in ISPs to security-related applications in firewalls and intrusion detection systems. Port-based, data packet inspection, and classical machine learning methods have been used extensively in the past, but their accuracy has declined due to the dramatic changes in the Internet traffic, particularly the increase in encrypted traffic. With the proliferation of deep learning methods, researchers have recently investigated these methods for traffic classification tasks and reported high accuracy. In this article, we introduce a general framework for deep-learning-based traffic classification. We present commonly used deep learning methods and their application in traffic classification tasks. Then, we discuss open problems and their challenges, as well as opportunities for traffic classification.", "Nowadays, network traffic classification plays an important role in many fields including network management, intrusion detection systems, malware detection systems, etc. Most of the previous research works concentrate on features extracted from non-encrypted network traffic. However, these features are not compatible with all kinds of traffic characterization. Google's QUIC protocol (Quick UDP Internet Connection protocol) is implemented in many services of Google. Nevertheless, the emergence of this protocol imposes many obstacles for traffic classification due to the reduction of visibility for operators into network traffic, so the port and payload-based traditional methods cannot be applied to identify QUIC-based services. To address this issue, we proposed a novel technique for traffic classification based on a convolutional neural network which combines the feature extraction and classification phases into one system. The proposed method uses flow and packet-based features to improve the performance. In comparison with current methods, the proposed method can detect some kinds of QUIC-based services such as Google Hangout Chat, Google Hangout Voice Call, YouTube, file transfer and Google Play Music. Besides, the proposed method can achieve a micro-averaging F1-score of 99.24 percent." ] }
1906.00850
2947982151
Interest in smart cities is rapidly rising due to the global rise in urbanization and the wide-scale instrumentation of modern cities. Due to the considerable infrastructural cost of setting up smart cities and smart communities, researchers are exploring the use of existing vehicles on the roads as "message ferries" for transporting data for smart community applications, avoiding the cost of installing new communication infrastructure. In this paper, we propose an opportunistic data ferry selection algorithm that strives to select vehicles that can minimize the overall delay for data delivery from a source to a given destination. Our proposed opportunistic algorithm utilizes an ensemble of online hiring algorithms, which are run together in passive mode, to select the online hiring algorithm that has performed the best in recent history. The proposed ensemble-based algorithm is evaluated empirically using real-world traces from taxis plying routes in Shanghai, China, and its performance is compared against a baseline of four state-of-the-art online hiring algorithms. A number of experiments are conducted and our results indicate that the proposed algorithm can reduce the overall delay compared to the baseline by an impressive 13% to 258%.
The literature is rich with research that deals with network issues in smart communities. @cite_3 presented the networking requirements for different smart city applications and additionally presented network architectures for different smart city systems. In @cite_8 , the authors discussed the networking and communications challenges encountered in smart cities. The authors of @cite_13 state that deploying wireless sensor networks along with the aggregation network in different locations in the smart city is very costly and consequently propose an infrastructure-less approach in which vehicles equipped with sensors are used to collect data.
{ "cite_N": [ "@cite_13", "@cite_3", "@cite_8" ], "mid": [ "2030100713", "2810743368", "2744079410" ], "abstract": [ "Smart cities are a hot topic nowadays for several reasons. For city managers and citizens a smart city is a concept that should allow providing better services and/or more efficiency. For technicians a smart city represents one of the large-scale deployments of equipment for data capturing, storage and processing. For Internet researchers a smart city can become the first massive installment of the Internet of Things concept. Smart cities are commonly implemented in several parts: the data acquisition and actuation, the aggregation networks that connect sensors/actuators with the general purpose network, the platform where data is stored and processed, and the applications that use the information for informing the city managers and citizens. The most costly parts of a smart city are the sensor/actuator and the aggregation network. This cost is mainly due to the deployment and maintenance of the equipment on the city street. This paper describes the different alternatives for building the aggregation networks and provides a novel approach to facilitate the deployment of a smart city. This new solution has been tested in a real scenario achieving satisfactory results.", "Significant advancements in various technologies such as Cyber-Physical Systems (CPS), Internet of Things (IoT), Wireless Sensor Networks (WSNs), Cloud Computing, and Unmanned Aerial Vehicles (UAVs) have taken place lately. These important advancements have led to their adoption in the smart city model, which is used by many organizations for large cities around the world to significantly enhance and improve the quality of life of the inhabitants, improve the utilization of city resources, and reduce operational costs. However, in order to reach these important objectives, efficient networking and communication protocols are needed in order to provide the necessary coordination and control of the various system components. In this paper, we identify the networking characteristics and requirements of smart city applications and identify the networking protocols that can be used to support the various data traffic flows that are needed between the different components in such applications. In addition, we provide an illustration of networking architectures of selected smart city systems, which include pipeline monitoring and control, smart grid, and smart water systems.", "Integrating the various embedded devices and systems in our environment enables an Internet of Things (IoT) for a smart city. The IoT will generate tremendous amounts of data that can be leveraged for safety, efficiency, and infotainment applications and services for city residents. The management of this voluminous data through its lifecycle is fundamental to the realization of smart cities. Therefore, in contrast to existing surveys on smart cities we provide a data-centric perspective, describing the fundamental data management techniques employed to ensure consistency, interoperability, granularity, and reusability of the data generated by the underlying IoT for smart cities. Essentially, the data lifecycle in a smart city is dependent on tightly coupled data management with cross-cutting layers of data security and privacy, and supporting infrastructure. Therefore, we further identify techniques employed for data security and privacy, and discuss the networking and computing technologies that enable smart cities. We highlight the achievements in realizing various aspects of smart cities, present the lessons learned, and identify limitations and research challenges." ] }
1906.00850
2947982151
Interest in smart cities is rapidly rising due to the global rise in urbanization and the wide-scale instrumentation of modern cities. Due to the considerable infrastructural cost of setting up smart cities and smart communities, researchers are exploring the use of existing vehicles on the roads as "message ferries" for transporting data for smart community applications, avoiding the cost of installing new communication infrastructure. In this paper, we propose an opportunistic data ferry selection algorithm that strives to select vehicles that can minimize the overall delay for data delivery from a source to a given destination. Our proposed opportunistic algorithm utilizes an ensemble of online hiring algorithms, which are run together in passive mode, to select the online hiring algorithm that has performed the best in recent history. The proposed ensemble-based algorithm is evaluated empirically using real-world traces from taxis plying routes in Shanghai, China, and its performance is compared against a baseline of four state-of-the-art online hiring algorithms. A number of experiments are conducted and our results indicate that the proposed algorithm can reduce the overall delay compared to the baseline by an impressive 13% to 258%.
@cite_7 present a system where public and semi-public vehicles are used for transporting data between stations distributed around the city and the main server. @cite_4 introduce the concept of Smart Vehicle as a Service (SVaaS). They predict the future location of a vehicle in order to guarantee continuous vehicle service in smart cities. In another work @cite_12 , the authors indicate that cars will be the building blocks of future smart cities due to their mobility, communications, and processing capabilities. They propose Car4ICT, an architecture that uses cars as the main ICT resource in a smart city. The authors in @cite_10 propose an algorithm for collecting and forwarding data through vehicles in a multi-hop fashion in smart cities. They propose a ranking system in which vehicles are ranked based on the connection time between the OBU and the RSU. The authors claim that their ranking system results in a better delivery ratio and decreases the number of replicated messages.
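The "ensemble of online hiring algorithms" idea described in the abstract above, running several candidate policies in passive mode and letting the one with the best recent history decide, can be sketched as a sliding-window selector. This is a hedged illustration of the general mechanism only; the policy names, window length, and delay values below are invented for the example, not taken from the paper.

```python
# Sketch of a sliding-window ensemble: every candidate policy is evaluated
# passively, and the one with the lowest mean delay in the recent window is
# selected to act. Policy names and delays are hypothetical stand-ins.
from collections import deque

class SlidingWindowEnsemble:
    def __init__(self, names, window=3):
        # One bounded history of observed delays per candidate policy.
        self.history = {n: deque(maxlen=window) for n in names}

    def record(self, name, delay):
        """Log the delay a passively-run policy would have achieved."""
        self.history[name].append(delay)

    def best(self):
        """Return the policy with the lowest mean delay in the window."""
        def mean_delay(n):
            h = self.history[n]
            return sum(h) / len(h) if h else float("inf")
        return min(self.history, key=mean_delay)

ens = SlidingWindowEnsemble(["greedy", "threshold"], window=3)
for d in (5.0, 7.0, 6.0):
    ens.record("greedy", d)
for d in (4.0, 4.5, 5.0):
    ens.record("threshold", d)
chosen = ens.best()
```

The bounded deque is what makes the selection reflect only "recent history": older delays fall out of the window, so the ensemble can switch policies as traffic conditions change.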
{ "cite_N": [ "@cite_10", "@cite_4", "@cite_12", "@cite_7" ], "mid": [ "2787210773", "2787288190", "2512054878", "2568227438" ], "abstract": [ "Efficient and cost effective data collection from smart city sensors through vehicular networks is crucial for many applications, such as travel comfort, safety and urban sensing. Static and mobile sensor data can be gathered through vehicles that will be used as data mules and, while moving, they will be able to access road side units (RSUs) and then send the data to a server in the cloud. Therefore, it is important to research how to use opportunistic vehicular networks to forward data packets through each other in a multi-hop fashion until they reach the destination. This paper proposes a novel data forwarding algorithm for urban vehicular networks taking into consideration the rank of each vehicle, which is based on the probability of reaching a road side unit. The proposed forwarding algorithm is evaluated in the mOVERS emulator considering different forwarding decisions, such as no restriction on broadcasting packets to neighboring On-Board Units (OBUs), restriction on broadcasting by the average rank of neighboring OBUs, and the number of hops between source and destination. Results show that, by restricting the broadcast messages in the proposed algorithm, we are able to reduce the network's overhead, thereby increasing the packet delivery ratio between the sensors and the server.", "The Smart City vision is to improve quality of life and efficiency of urban operations and services while meeting economic, social, and environmental needs of its dwellers. Realizing this vision requires cities to make significant investments in all kinds of smart objects. Recently, the concept of the smart vehicle has also emerged as a viable solution for various pressing problems such as traffic management, drivers' comfort, road safety and on-demand provisioning services. With the availability of onboard vehicular services, these vehicles will be a constructive key enabler of smart cities. Smart vehicles are capable of sharing and storing digital content, sensing and monitoring their surroundings, and mobilizing on-demand services. However, the provisioning of these services is challenging due to different ownerships, costs, demand levels, and rewards. In this paper, we present the concept of Smart Vehicle as a Service (SVaaS) to provide continuous vehicular services in smart cities. The solution relies on a location prediction mechanism to determine a vehicle's future location. Once a vehicle's predicted location is determined, a Quality of Experience (QoE) based service selection mechanism is used to select services that are needed before the vehicle's arrival. We provide simulation results to show that our approach can adequately establish vehicular services in a timely and efficient manner. It also shows that the number of utilized services has doubled when prediction and service discovery are applied.", "", "Vehicular Network implementation has become a necessity in Smart Cities due to the opportunistic network connectivity it offers within the city, where researchers have begun to focus more on Vehicular Networks to improve the control over traffic and road safety inside the city; other research on Smart Cities is more concerned with the transmission of the data collected by sensors, nodes and devices installed around the city to the back-end servers where they can be processed and analyzed. In this paper, we propose a possible alternative network for Smart Cities, to forward informative data stored in stations implemented around the city without the need for pre-installed infrastructures, by relying on public and semi-public transport vehicles for network connectivity between the stations and the back-end server." ] }