Dataset columns (string columns list min-max lengths; float columns list min-max values):
Query Text: string, 10 - 40.4k
Ranking 1: string, 12 - 40.4k
Ranking 2: string, 12 - 36.2k
Ranking 3: string, 10 - 36.2k
Ranking 4: string, 13 - 40.4k
Ranking 5: string, 12 - 36.2k
Ranking 6: string, 13 - 36.2k
Ranking 7: string, 10 - 40.4k
Ranking 8: string, 12 - 36.2k
Ranking 9: string, 12 - 36.2k
Ranking 10: string, 12 - 36.2k
Ranking 11: string, 20 - 6.21k
Ranking 12: string, 14 - 8.24k
Ranking 13: string, 28 - 4.03k
score_0: float64, 1 - 1.25
score_1 - score_6: float64, 0 - 0.25
score_7: float64, 0 - 0.24
score_8: float64, 0 - 0.2
score_9: float64, 0 - 0.03
score_10 - score_13: float64, 0 - 0
RNN for Solving Perturbed Time-Varying Underdetermined Linear System with Double Bound Limits on Residual Errors and State Variables Neural networks have been widely regarded as important tools for handling various online computing problems in recent decades, with many applications in science and electronics. This paper proposes a novel recurrent neural network (RNN) to handle the perturbed time-varying underdetermined linear system with double bound limits on residual errors and state variables. In addition, the bound-limited underdetermined linear system is converted into a time-varying system consisting of linear and nonlinear formulas by constructing a nonnegative time-varying variable. Theoretical analyses are then conducted to verify the superior convergence performance of the proposed RNN model. Furthermore, numerical experiments and computer simulations demonstrate the superiority and effectiveness of the proposed RNN model for handling the time-varying underdetermined linear system with double bound limits. Finally, the proposed RNN model is applied to the physically limited PUMA560 robot to show its satisfactory applicability.
Li-function activated ZNN with finite-time convergence applied to redundant-manipulator kinematic control via time-varying Jacobian matrix pseudoinversion. This paper presents and investigates the application of Zhang neural network (ZNN) activated by Li function to kinematic control of redundant robot manipulators via time-varying Jacobian matrix pseudoinversion. That is, by using Li activation function and by computing the time-varying pseudoinverse of the Jacobian matrix (of the robot manipulator), the resultant ZNN model is applied to redundant-manipulator kinematic control. Note that there are nine novelties and differences of ZNN from the conventional gradient neural network in the research methodology. More importantly, such a Li-function activated ZNN (LFAZNN) model has the property of finite-time convergence (showing its feasibility to redundant-manipulator kinematic control). Simulation results based on a four-link planar robot manipulator and a PA10 robot manipulator further demonstrate the effectiveness of the presented LFAZNN model, as well as show the LFAZNN application prospect.
Kinematic model to control the end-effector of a continuum robot for multi-axis processing This paper presents a novel kinematic approach for controlling the end-effector of a continuum robot for in-situ repair/inspection in restricted and hazardous environments. Forward and inverse kinematic (IK) models have been developed to control the last segment of the continuum robot for performing multi-axis processing tasks using the last six Degrees of Freedom (DoF). The forward kinematics (FK) is proposed using a combination of Euler angle representation and homogeneous matrices. Due to the redundancy of the system, different constraints are proposed to solve the IK for different cases; therefore, the IK model is solved for bending and direction angles between -pi/2 and +pi/2 radians. In addition, a novel method to calculate the Jacobian matrix is proposed for this type of hyper-redundant kinematics. The error between the results calculated using the proposed Jacobian algorithm and using the partial derivative equations of the FK map (with respect to linear and angular velocity) is evaluated. The error between the two models is found to be insignificant; thus, the Jacobian is validated as a method of calculating the IK for six DoF.
Optimization-Based Inverse Model of Soft Robots With Contact Handling. This letter presents a physically based algorithm to interactively simulate and control the motion of soft robots interacting with their environment. We use the finite-element method to simulate the nonlinear deformation of the soft structure, its actuators, and surroundings and propose a control method relying on a quadratic optimization to find the inverse of the model. The novelty of this work ...
A Finite-Time Convergent and Noise-Rejection Recurrent Neural Network and Its Discretization for Dynamic Nonlinear Equations Solving. The so-called zeroing neural network (ZNN) is an effective recurrent neural network for solving dynamic problems including the dynamic nonlinear equations. There exist numerous unperturbed ZNN models that can converge to the theoretical solution of solvable nonlinear equations in infinitely long or finite time. However, when these ZNN models are perturbed by external disturbances, the convergence pe...
A New Inequality-Based Obstacle-Avoidance MVN Scheme and Its Application to Redundant Robot Manipulators This paper proposes a new inequality-based criterion/constraint with its algorithmic and computational details for obstacle avoidance of redundant robot manipulators. By incorporating such a dynamically updated inequality constraint and the joint physical constraints (such as joint-angle limits and joint-velocity limits), a novel minimum-velocity-norm (MVN) scheme is presented and investigated for robotic redundancy resolution. The resultant obstacle-avoidance MVN scheme resolved at the joint-velocity level is further reformulated as a general quadratic program (QP). Two QP solvers, i.e., a simplified primal–dual neural network based on linear variational inequalities (LVI) and an LVI-based numerical algorithm, are developed and applied for online solution of the QP problem as well as the inequality-based obstacle-avoidance MVN scheme. Simulation results based on a PA10 robot manipulator and a six-link planar robot manipulator in the presence of window-shaped and point obstacles demonstrate the efficacy and superiority of the proposed obstacle-avoidance MVN scheme. Moreover, experimental results of the proposed MVN scheme implemented on the practical six-link planar robot manipulator substantiate the physical realizability and effectiveness of such a scheme for obstacle avoidance of redundant robot manipulators.
Event-Triggered Finite-Time Control for Networked Switched Linear Systems With Asynchronous Switching. This paper is concerned with the event-triggered finite-time control problem for networked switched linear systems by using an asynchronous switching scheme. Not only the problem of finite-time boundedness, but also the problem of input-output finite-time stability is considered in this paper. Compared with the existing event-triggered results of the switched systems, a new type of event-triggered...
Tabu Search - Part I
Joint Optimization of Radio and Computational Resources for Multicell Mobile-Edge Computing Migrating computational intensive tasks from mobile devices to more resourceful cloud servers is a promising technique to increase the computational capacity of mobile devices while saving their battery energy. In this paper, we consider a MIMO multicell system where multiple mobile users (MUs) ask for computation offloading to a common cloud server. We formulate the offloading problem as the joint optimization of the radio resources (the transmit precoding matrices of the MUs) and the computational resources (the CPU cycles/second assigned by the cloud to each MU) in order to minimize the overall users' energy consumption, while meeting latency constraints. The resulting optimization problem is nonconvex (in the objective function and constraints). Nevertheless, in the single-user case, we are able to compute the global optimal solution in closed form. In the more challenging multiuser scenario, we propose an iterative algorithm, based on a novel successive convex approximation technique, converging to a local optimal solution of the original nonconvex problem. We then show that the proposed algorithmic framework naturally leads to a distributed and parallel implementation across the radio access points, requiring only a limited coordination/signaling with the cloud. Numerical results show that the proposed schemes outperform disjoint optimization algorithms.
Symbolic model checking for real-time systems We describe finite-state programs over real-numbered time in a guarded-command language with real-valued clocks or, equivalently, as finite automata with real-valued clocks. Model checking answers the question which states of a real-time program satisfy a branching-time specification (given in an extension of CTL with clock variables). We develop an algorithm that computes this set of states symbolically as a fixpoint of a functional on state predicates, without constructing the state space. For this purpose, we introduce a μ-calculus on computation trees over real-numbered time. Unfortunately, many standard program properties, such as response for all nonzeno execution sequences (during which time diverges), cannot be characterized by fixpoints: we show that the expressiveness of the timed μ-calculus is incomparable to the expressiveness of timed CTL. Fortunately, this result does not impair the symbolic verification of "implementable" real-time programs-those whose safety constraints are machine-closed with respect to diverging time and whose fairness constraints are restricted to finite upper bounds on clock values. All timed CTL properties of such programs are shown to be computable as finitely approximable fixpoints in a simple decidable theory.
The industrial indoor channel: large-scale and temporal fading at 900, 2400, and 5200 MHz In this paper, large-scale fading and temporal fading characteristics of the industrial radio channel at 900, 2400, and 5200 MHz are determined. In contrast to measurements performed in houses and in office buildings, few attempts have been made until now to model propagation in industrial environments. In this paper, the industrial environment is categorized into different topographies. Industrial topographies are defined separately for large-scale and temporal fading, and their definition is based upon the specific physical characteristics of the local surroundings affecting both types of fading. Large-scale fading is well expressed by a one-slope path-loss model and excellent agreement with a lognormal distribution is obtained. Temporal fading is found to be Ricean and Ricean K-factors have been determined. Ricean K-factors are found to follow a lognormal distribution.
Cost-Effective Authentic and Anonymous Data Sharing with Forward Security Data sharing has never been easier with the advances of cloud computing, and an accurate analysis on the shared data provides an array of benefits to both the society and individuals. Data sharing with a large number of participants must take into account several issues, including efficiency, data integrity and privacy of data owner. Ring signature is a promising candidate to construct an anonymous and authentic data sharing system. It allows a data owner to anonymously authenticate his data which can be put into the cloud for storage or analysis purpose. Yet the costly certificate verification in the traditional public key infrastructure (PKI) setting becomes a bottleneck for this solution to be scalable. Identity-based (ID-based) ring signature, which eliminates the process of certificate verification, can be used instead. In this paper, we further enhance the security of ID-based ring signature by providing forward security: If a secret key of any user has been compromised, all previous generated signatures that include this user still remain valid. This property is especially important to any large scale data sharing system, as it is impossible to ask all data owners to reauthenticate their data even if a secret key of one single user has been compromised. We provide a concrete and efficient instantiation of our scheme, prove its security and provide an implementation to show its practicality.
Adaptive fuzzy tracking control for switched uncertain strict-feedback nonlinear systems. • Adaptive tracking control for switched strict-feedback nonlinear systems is proposed. • The generalized fuzzy hyperbolic model is used to approximate nonlinear functions. • The designed controller has fewer design parameters compared with existing methods.
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies have focused on the structure design and assistance force optimization of soft LLEs, little work has been conducted on the hardware circuits design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and introduce its hardware circuits design. A soft LLE for hip flexion assistance and a hardware circuits system with scalability were proposed. To assess the efficacy of the soft LLE, experimental tests evaluating sensor data acquisition, force tracking performance, lower limb muscle activity, and metabolic cost were conducted. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
Scores for the row above (score_0 - score_13): 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0.018182, 0, 0, 0, 0, 0, 0, 0
An Attention-Based Digraph Convolution Network Enabled Framework for Congestion Recognition in Three-Dimensional Road Networks Congestion recognition is necessary for vehicle routing, traffic control, and many other applications in intelligent transportation systems. Moreover, traffic facilities in the three-dimensional road network, which contain the fundamental spatiotemporal features for congestion recognition, provide multi-source traffic information. To exploit these traffic big data, in this paper, we propose an attention mechanism-based digraph convolution network (ADGCN) enabled framework to tackle the congestion recognition problem. It can be divided into two parts: spatial relevance modeling and temporal relevance modeling. First, the representation incorporates spatiotemporal traffic information with the three-dimensional urban network and partially decouples the global network topology into single-knot digraphs. Then a digraph-based convolution network is used to capture high-order spatial features. Finally, to process time-series features, a multi-modal attention mechanism is introduced to capture the long-range temporal dependence, and the congestion classifier is defined accordingly. This distinguishes the proposed model from conventional congestion recognition methods. Comprehensive experiments are conducted based on real traffic data. The results demonstrate the advantages of the proposed framework over existing spatiotemporal analysis methods.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that cannot be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
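The BLEU abstract above rests on two computable ingredients: clipped (modified) n-gram precision and a brevity penalty. The following is a minimal single-sentence, single-reference sketch for illustration only; the paper's actual metric is corpus-level, and the function names here are my own:

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU against one reference (illustrative sketch).

    Modified precision: each candidate n-gram is credited at most as many
    times as it occurs in the reference (clipping), which penalizes
    degenerate repetitions like "the the the".
    """
    cand, ref = candidate.split(), reference.split()
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        if clipped == 0:
            return 0.0  # any zero precision drives the geometric mean to 0
        log_prec_sum += math.log(clipped / total)
    # Brevity penalty: discourage candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_prec_sum / max_n)
```

Real implementations add smoothing for short sentences and aggregate clipped counts over a whole test corpus before taking the geometric mean.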
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in the positive and negative time directions. The structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported.
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended to Bob, and non-repudiation of receipt evidences destined to Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with trusted third party (TTP). For the later ones we discuss the evolution of the TTP's involvement and, between others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb for designing the GA operators and selecting GA parameters; instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probabilistically. Because there is no crossover rate or mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional one and other methods.
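The conditional-operator idea described in the abstract above can be sketched as follows. This is an illustration under my own assumed conditions (crossover fires only when the parents differ; mutation fires only when the child duplicates a parent), not the paper's exact rules, applied to a tiny set-covering instance:

```python
import random

def fitness(chromosome, subsets, universe):
    """Penalized set-cover cost: number of chosen subsets, plus a heavy
    penalty for each uncovered element (lower is better)."""
    covered = set()
    for bit, s in zip(chromosome, subsets):
        if bit:
            covered |= s
    return sum(chromosome) + 10 * len(universe - covered)

def conditional_ga(subsets, universe, pop_size=30, generations=60, seed=0):
    """GA with conditional operators: no crossover/mutation rates to tune."""
    rng = random.Random(seed)
    n = len(subsets)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, subsets, universe))
        next_pop = pop[:2]  # elitism: keep the two best unchanged
        while len(next_pop) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)
            # Conditional crossover: recombine only when the parents differ;
            # crossing identical parents cannot produce a new point.
            if p1 != p2:
                cut = rng.randrange(1, n)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # Conditional mutation: mutate only when the child duplicates a
            # parent, so mutation acts as diversity repair, not a fixed-rate event.
            if child == p1 or child == p2:
                i = rng.randrange(n)
                child[i] ^= 1
            next_pop.append(child)
        pop = next_pop
    best = min(pop, key=lambda c: fitness(c, subsets, universe))
    return best, fitness(best, subsets, universe)
```

The appeal, as the abstract argues, is that there are no crossover or mutation probabilities to select; the conditions themselves decide when each operator fires.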
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal to develop interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which rise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in specific, in this paper, we enfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidate it based on our comprehensive empirical experience with research in kindergarten settings, both laboratory and real-world contexts, and (iii) provide a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
Scores for the row above (score_0 - score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Fast, Fair, and Efficient Flows in Networks We study the problem of minimizing the maximum latency of flows in networks with congestion. We show that this problem is NP-hard, even when all arc latency functions are linear and there is a single source and sink. Still, an optimal flow and an equilibrium flow share a desirable property in this situation: All flow-carrying paths have the same length, i.e., these solutions are “fair,” which is in general not true for optimal flows in networks with nonlinear latency functions. In addition, the maximum latency of the Nash equilibrium, which can be computed efficiently, is within a constant factor of that of an optimal solution. That is, the so-called price of anarchy is bounded. In contrast, we present a family of instances with multiple sources and a single sink for which the price of anarchy is unbounded, even in networks with linear latencies. Furthermore, we show that an s-t-flow that is optimal with respect to the average latency objective is near-optimal for the maximum latency objective, and it is close to being fair. Conversely, the average latency of a flow minimizing the maximum latency is also within a constant factor of that of a flow minimizing the average latency.
Mobile Edge Computing Enabled 5G Health Monitoring for Internet of Medical Things: A Decentralized Game Theoretic Approach The prompt evolution of Internet of Medical Things (IoMT) promotes pervasive in-home health monitoring networks. However, excessive requirements of patients result in insufficient spectrum resources and communication overload. Mobile Edge Computing (MEC) enabled 5G health monitoring is conceived as a favorable paradigm to tackle such an obstacle. In this paper, we construct a cost-efficient in-home health monitoring system for IoMT by dividing it into two sub-networks, i.e., intra-Wireless Body Area Networks (WBANs) and beyond-WBANs. Highlighting the characteristics of IoMT, the cost of patients depends on medical criticality, Age of Information (AoI) and energy consumption. For intra-WBANs, a cooperative game is formulated to allocate the wireless channel resources. While for beyond-WBANs, considering the individual rationality and potential selfishness, a decentralized non-cooperative game is proposed to minimize the system-wide cost in IoMT. We prove that the proposed algorithm can reach a Nash equilibrium. In addition, the upper bound of the algorithm time complexity and the number of patients benefiting from MEC is theoretically derived. Performance evaluations demonstrate the effectiveness of our proposed algorithm with respect to the system-wide cost and the number of patients benefiting from MEC.
Cost of not splitting in routing: characterization and estimation This paper studies the performance difference of joint routing and congestion control when either single-path routes or multipath routes are used. Our performance metric is the total utility achieved by jointly optimizing transmission rates using congestion control and paths using source routing. In general, this performance difference is strictly positive and hard to determine--in fact an NP-hard problem. To better estimate this performance gap, we develop analytical bounds to this "cost of not splitting" in routing. We prove that the number of paths needed for optimal multipath routing differs from that of optimal single-path routing by no more than the number of links in the network. We provide a general bound on the performance loss, which is independent of the number of source-destination pairs when the latter is larger than the number of links in a network. We also propose a vertex projection method and combine it with a greedy branch-and-bound algorithm to provide progressively tighter bounds on the performance loss. Numerical examples are used to show the effectiveness of our approximation technique and estimation algorithms.
Timely Wireless Flows With General Traffic Patterns: Capacity Region and Scheduling Algorithms. Most existing wireless networking solutions are best-effort and do not provide any delay guarantee required by important applications, such as mobile multimedia conferencing and real-time control of cyber-physical systems. Recently, Hou and Kumar provided a novel framework for analyzing and designing delay-guaranteed wireless networking solutions. While inspiring, their idle-time-based analysis ap...
On the Age of Information in a CSMA Environment In this paper, we investigate a network where $N$ links contend for the channel using the well-known carrier sense multiple access scheme. By leveraging the notion of stochastic hybrid systems, we find: 1) a closed-form expression of the average age when links generate packets at will; and 2) an upper bound on the average age when packets arrive stochastically at each link. This upper bound is shown to be generally tight, and to be equal to the average age in certain scenarios. Armed with these expressions, we formulate the problem of minimizing the average age by calibrating the back-off time of each link. Interestingly, we show that the minimum average age is achieved for the same back-off time in both the sampling and stochastic arrivals scenarios. Then, by analyzing its structure, we convert the formulated optimization problem to an equivalent convex problem for which we find the optimal solution. Insights on the interaction between links and numerical implementations of the optimized Carrier Sense Multiple Access (CSMA) scheme in an IEEE 802.11 environment are presented. Next, to further improve the performance of the optimized CSMA scheme, we propose a modification that gives each link the freedom to transition to SLEEP mode. The proposed approach provides a way to reduce the burden on the channel when possible. This leads, as shown in the paper, to an improvement in the performance of the network. Simulation results are then laid out to highlight the performance gain offered by our approach in comparison to the optimized standard CSMA scheme.
Age-Minimal Transmission for Energy Harvesting Sensors With Finite Batteries: Online Policies An energy-harvesting sensor node that is sending status updates to a destination is considered. The sensor is equipped with a battery of finite size to save its incoming energy, and consumes one unit of energy per status update transmission, which is delivered to the destination instantly over an error-free channel. The setting is online, in which the harvested energy is revealed to the sensor causally over time after it arrives, and the goal is to design status update transmission times (policy) such that the long-term average age of information (AoI) is minimized. The AoI is defined as the time elapsed since the latest update reached the destination. Two energy arrival models are considered: a random battery recharge (RBR) model and an incremental battery recharge (IBR) model. In both models, energy arrives according to a Poisson process with unit rate, with values that completely fill up the battery in the RBR model, and with values that fill up the battery incrementally in a unit-by-unit fashion in the IBR model. The key approach to characterizing the optimal status update policy for both models is showing the optimality of renewal policies, in which the inter-update times follow a renewal process in a certain manner that depends on the energy arrival model and the battery size. It is then shown that the optimal renewal policy has an energy-dependent threshold structure, in which the sensor sends a status update only if the AoI grows above a certain threshold that depends on the energy available in its battery. For both the random and the incremental battery recharge models, the optimal energy-dependent thresholds are characterized explicitly, i.e., in closed form, in terms of the optimal long-term average AoI. It is also shown that the optimal thresholds are monotonically decreasing in the energy available in the battery, and that the smallest threshold, which comes into effect when the battery is full, is equal to the optimal long-term average AoI.
Distinctive Image Features from Scale-Invariant Keypoints This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
ImageNet Large Scale Visual Recognition Challenge. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
A Comprehensive Survey on Internet of Things (IoT) Toward 5G Wireless Systems Recently, wireless technologies have been growing actively all around the world. In the context of wireless technology, fifth-generation (5G) technology has become a most challenging and interesting topic in wireless research. This article provides an overview of the Internet of Things (IoT) in 5G wireless systems. IoT in the 5G system will be a game changer in the future generation. It will open a door for new wireless architecture and smart services. The recent cellular network LTE (4G) will not be sufficient and efficient to meet the demands of multiple device connectivity, high data rate, more bandwidth, low-latency quality of service (QoS), and low interference. To address these challenges, we consider 5G as the most promising technology. We provide a detailed overview of challenges and the vision of various communication industries in 5G IoT systems. The different layers in 5G IoT systems are discussed in detail. This article provides a comprehensive review of emerging and enabling technologies related to the 5G system that enables IoT. We consider the technology drivers for 5G wireless technology, such as 5G new radio (NR), multiple-input multiple-output antennas with beamforming technology, mm-wave communication technology, heterogeneous networks (HetNets), and the role of augmented reality (AR) in IoT, which are discussed in detail. We also provide a review of low-power wide-area networks (LPWANs), security challenges, and their control measures in the 5G IoT scenario. This article introduces the role of AR in the 5G IoT scenario. This article also discusses the research gaps and future directions. The focus is also on application areas of IoT in 5G systems. We, therefore, outline some of the important research directions in 5G IoT.
A communication robot in a shopping mall This paper reports our development of a communication robot for use in a shopping mall to provide shopping information, offer route guidance, and build rapport. In the development, the major difficulties included sensing human behaviors, conversation in a noisy daily environment, and the needs of unexpected miscellaneous knowledge in the conversation. We chose a network-robot system approach, where a single robot's poor sensing capability and knowledge are supplemented by ubiquitous sensors and a human operator. The developed robot system detects a person with floor sensors to initiate interaction, identifies individuals with radio-frequency identification (RFID) tags, gives shopping information while chatting, and provides route guidance with deictic gestures. The robot was partially teleoperated to avoid the difficulty of speech recognition as well as to furnish a new kind of knowledge that only humans can flexibly provide. The information supplied by a human operator was later used to increase the robot's autonomy. For 25 days in a shopping mall, we conducted a field trial and gathered 2642 interactions. A total of 235 participants signed up to use RFID tags and, later, provided questionnaire responses. The questionnaire results are promising in terms of the visitors' perceived acceptability as well as the encouragement of their shopping activities. The results of the teleoperation analysis revealed that the amount of teleoperation gradually decreased, which is also promising.
Minimum acceleration criterion with constraints implies bang-bang control as an underlying principle for optimal trajectories of arm reaching movements. Rapid arm-reaching movements serve as an excellent test bed for any theory about trajectory formation. How are these movements planned? A minimum acceleration criterion has been examined in the past, and the solution obtained, based on the Euler-Poisson equation, failed to predict that the hand would begin and end the movement at rest (i.e., with zero acceleration). Therefore, this criterion was rejected in favor of the minimum jerk, which was proved to be successful in describing many features of human movements. This letter follows an alternative approach and solves the minimum acceleration problem with constraints using Pontryagin's minimum principle. We use the minimum principle to obtain minimum acceleration trajectories and use the jerk as a control signal. In order to find a solution that does not include nonphysiological impulse functions, constraints on the maximum and minimum jerk values are assumed. The analytical solution provides a three-phase piecewise constant jerk signal (bang-bang control) where the magnitude of the jerk and the two switching times depend on the magnitude of the maximum and minimum available jerk values. This result fits the observed trajectories of reaching movements and takes into account both the extrinsic coordinates and the muscle limitations in a single framework. The minimum acceleration with constraints principle is discussed as a unifying approach for many observations about the neural control of movements.
Completely Pinpointing the Missing RFID Tags in a Time-Efficient Way Radio Frequency Identification (RFID) technology has been widely used in inventory management in many scenarios, e.g., warehouses, retail stores, hospitals, etc. This paper investigates a challenging problem of complete identification of missing tags in large-scale RFID systems. Although this problem has attracted extensive attention from academy and industry, the existing work can hardly satisfy the stringent real-time requirements. In this paper, a Slot Filter-based Missing Tag Identification (SFMTI) protocol is proposed to reconcile some expected collision slots into singleton slots and filter out the expected empty slots as well as the unreconcilable collision slots, thereby achieving the improved time-efficiency. The theoretical analysis is conducted to minimize the execution time of the proposed SFMTI. We then propose a cost-effective method to extend SFMTI to the multi-reader scenarios. The extensive simulation experiments and performance results demonstrate that the proposed SFMTI protocol outperforms the most promising Iterative ID-free Protocol (IIP) by reducing nearly 45% of the required execution time, and is just within a factor of 1.18 from the lower bound of the minimum execution time.
A blind medical image watermarking: DWT-SVD based robust and secure approach for telemedicine applications. In this paper, a blind image watermarking scheme based on discrete wavelet transform (DWT) and singular value decomposition (SVD) is proposed. In this scheme, DWT is applied on ROI (region of interest) of the medical image to get different frequency subbands of its wavelet decomposition. On the low frequency subband LL of the ROI, block-SVD is applied to get different singular matrices. A pair of elements with similar values is identified from the left singular value matrix of these selected blocks. The values of these pairs are modified using certain threshold to embed a bit of watermark content. Appropriate threshold is chosen to achieve the imperceptibility and robustness of medical image and watermark contents respectively. For authentication and identification of original medical image, one watermark image (logo) and other text watermark have been used. The watermark image provides authentication whereas the text data represents electronic patient record (EPR) for identification. At receiving end, blind recovery of both watermark contents is performed by a similar comparison scheme used during the embedding process. The proposed algorithm is applied on various groups of medical images like X-ray, CT scan and mammography. This scheme offers better visibility of watermarked image and recovery of watermark content due to DWT-SVD combination. Moreover, use of Hamming error correcting code (ECC) on EPR text bits reduces the BER and thus provides better recovery of EPR. The performance of proposed algorithm with EPR data coding by Hamming code is compared with the BCH error correcting code and it is found that later one perform better. A result analysis shows that imperceptibility of watermarked image is better as PSNR is above 43 dB and WPSNR is above 52 dB for all set of images. 
In addition, robustness of the scheme is better than the existing scheme for a similar set of medical images in terms of normalized correlation coefficient (NCC) and bit-error-rate (BER). An analysis is also carried out to verify the performance of the proposed scheme for different sizes of watermark contents (image and EPR data). It is observed from the analysis that the proposed scheme is also appropriate for watermarking of color images. Using the proposed scheme, watermark contents are extracted successfully under various noise attacks like JPEG compression, filtering, Gaussian noise, salt-and-pepper noise, cropping, and rotation. Performance comparison of the proposed scheme with existing schemes shows that the proposed scheme has better robustness against different types of attacks. Moreover, the proposed scheme is also robust under a set of benchmark attacks known as checkmark attacks.
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies focused on the structure design and assistance force optimization of soft LLEs, little work has been conducted on the hardware circuits design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and introduce its hardware circuits design. A soft LLE for hip flexion assistance and a hardware circuits system with scalability are proposed. To assess the efficacy of the soft LLE, experimental tests that evaluate the sensor data acquisition, force tracking performance, lower limb muscle activity, and metabolic cost were conducted. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
score_0–score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0
Exploring a Surface Using RFID Grid and Group of Mobile Robots. The paper deals with discovering a surface covered with a grid of RFID transponders using a group of robots and a master control unit. The robots move across the surface, read data from the transponders and send it to the master. The master collects the data, analyze it to create a map and sends commands to the robots. This way optimization of robot movements is possible to speed up the discovery. Two types of RFID grid have been considered: square- and triangle-based. A laboratory prototype has been created with class 2.0 robots and the master unit running CPDev SFC program under Windows IoT.
Constrained Kalman filtering for indoor localization of transport vehicles using floor-installed HF RFID transponders Localization of transport vehicles is an important issue for many intralogistics applications. The paper presents an inexpensive solution for indoor localization of vehicles. Global localization is realized by detection of RFID transponders, which are integrated in the floor. The paper presents a novel algorithm for fusing RFID readings with odometry using Constraint Kalman filtering. The paper presents experimental results with a Mecanum based omnidirectional vehicle on a NaviFloor® installation, which includes passive HF RFID transponders. The experiments show that the proposed Constraint Kalman filter provides a similar localization accuracy compared to a Particle filter but with much lower computational expense.
Problem of dynamic change of tags location in anticollision RFID systems Presently, the necessity of building anticollision RFID systems with dynamic location change of tags appears more and more often. Such solutions are used in the identification of moving cars and trains (automatic identification of vehicles – AVI processes) as well as moving parts and elements in industry, commerce, science, and medicine (internet of things). In the paper, the operation stages in the RFID anticollision system necessary to communicate with groups of tags entering and leaving the read/write device interrogation zone, and the communication phases under conditions of dynamic location change of tags, are presented. The mentioned aspects influence RFID system reliability, which is characterized by the efficiency coefficient and the identification probability of objects in a specific interrogation zone. The communication conditions of correct operation of a multiple RFID system are crucial for efficient exchange of data with all tags during their dynamic location changes. The presented problem will be the base to specify new application tag parameters (such as maximum speed of tag motion) and to synthesize the interrogation zone required for concrete anticollision RFID applications with dynamic location change of tags.
Robot Localization via Passive UHF-RFID Technology: State-of-the-Art and Challenges This paper presents a state-of-the-art analysis on the current methods for robot localization based on the passive UHF-RFID technology. The state-of-the-art analysis describes the main features and challenges of several localization methods. Then, a first experimental analysis related to a novel phase-based robot localization method is presented. The robot on-board reader collects phase data from a set of passive reference tags during its motion, so resembling to a synthetic array. Then, the phase data are combined with information acquired by low-cost kinematic sensors, through a Sensor Fusion approach. The experimental results show that centimetre order localization errors can be achieved in a typical office indoor scenario by employing a few reference tags.
A standalone RFID Indoor Positioning System Using Passive Tags Indoor positioning systems (IPSs) locate objects in closed structures such as office buildings, hospitals, stores, factories, and warehouses, where Global Positioning System devices generally do not work. Most available systems apply wireless concepts, optical tracking, and/or ultrasound. This paper presents a standalone IPS using radio frequency identification (RFID) technology. The concept is ba...
Distinctive Image Features from Scale-Invariant Keypoints This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
On the ratio of optimal integral and fractional covers It is shown that the ratio of optimal integral and fractional covers of a hypergraph does not exceed 1 + log d , where d is the maximum degree. This theorem may replace probabilistic methods in certain circumstances. Several applications are shown.
Dynamic Computation Offloading for Mobile-Edge Computing with Energy Harvesting Devices. Mobile-edge computing (MEC) is an emerging paradigm to meet the ever-increasing computation demands from mobile applications. By offloading the computationally intensive workloads to the MEC server, the quality of computation experience, e.g., the execution latency, could be greatly improved. Nevertheless, as the on-device battery capacities are limited, computation would be interrupted when the battery energy runs out. To provide satisfactory computation performance as well as achieving green computing, it is of significant importance to seek renewable energy sources to power mobile devices via energy harvesting (EH) technologies. In this paper, we will investigate a green MEC system with EH devices and develop an effective computation offloading strategy. The execution cost, which addresses both the execution latency and task failure, is adopted as the performance metric. A low-complexity online algorithm is proposed, namely, the Lyapunov optimization-based dynamic computation offloading algorithm, which jointly decides the offloading decision, the CPU-cycle frequencies for mobile execution, and the transmit power for computation offloading. A unique advantage of this algorithm is that the decisions depend only on the current system state without requiring distribution information of the computation task request, wireless channel, and EH processes. The implementation of the algorithm only requires to solve a deterministic problem in each time slot, for which the optimal solution can be obtained either in closed form or by bisection search. Moreover, the proposed algorithm is shown to be asymptotically optimal via rigorous analysis. Sample simulation results shall be presented to corroborate the theoretical analysis as well as validate the effectiveness of the proposed algorithm.
Parameter tuning for configuring and analyzing evolutionary algorithms In this paper we present a conceptual framework for parameter tuning, provide a survey of tuning methods, and discuss related methodological issues. The framework is based on a three-tier hierarchy of a problem, an evolutionary algorithm (EA), and a tuner. Furthermore, we distinguish problem instances, parameters, and EA performance measures as major factors, and discuss how tuning can be directed to algorithm performance and/or robustness. For the survey part we establish different taxonomies to categorize tuning methods and review existing work. Finally, we elaborate on how tuning can improve methodology by facilitating well-funded experimental comparisons and algorithm analysis.
Cyber warfare: steganography vs. steganalysis For every clever method and tool being developed to hide information in multimedia data, an equal number of clever methods and tools are being developed to detect and reveal its secrets.
Efficient and reliable low-power backscatter networks There is a long-standing vision of embedding backscatter nodes like RFIDs into everyday objects to build ultra-low power ubiquitous networks. A major problem that has challenged this vision is that backscatter communication is neither reliable nor efficient. Backscatter nodes cannot sense each other, and hence tend to suffer from colliding transmissions. Further, they are ineffective at adapting the bit rate to channel conditions, and thus miss opportunities to increase throughput, or transmit above capacity causing errors. This paper introduces a new approach to backscatter communication. The key idea is to treat all nodes as if they were a single virtual sender. One can then view collisions as a code across the bits transmitted by the nodes. By ensuring only a few nodes collide at any time, we make collisions act as a sparse code and decode them using a new customized compressive sensing algorithm. Further, we can make these collisions act as a rateless code to automatically adapt the bit rate to channel quality --i.e., nodes can keep colliding until the base station has collected enough collisions to decode. Results from a network of backscatter nodes communicating with a USRP backscatter base station demonstrate that the new design produces a 3.5× throughput gain, and due to its rateless code, reduces message loss rate in challenging scenarios from 50% to zero.
Internet of Things for Smart Cities The Internet of Things (IoT) shall be able to incorporate transparently and seamlessly a large number of different and heterogeneous end systems, while providing open access to selected subsets of data for the development of a plethora of digital services. Building a general architecture for the IoT is hence a very complex task, mainly because of the extremely large variety of devices, link layer technologies, and services that may be involved in such a system. In this paper, we focus specifically to an urban IoT system that, while still being quite a broad category, are characterized by their specific application domain. Urban IoTs, in fact, are designed to support the Smart City vision, which aims at exploiting the most advanced communication technologies to support added-value services for the administration of the city and for the citizens. This paper hence provides a comprehensive survey of the enabling technologies, protocols, and architecture for an urban IoT. Furthermore, the paper will present and discuss the technical solutions and best-practice guidelines adopted in the Padova Smart City project, a proof-of-concept deployment of an IoT island in the city of Padova, Italy, performed in collaboration with the city municipality.
Robust Sparse Linear Discriminant Analysis Linear discriminant analysis (LDA) is a very popular supervised feature extraction method and has been extended to different variants. However, classical LDA has the following problems: 1) the obtained discriminant projection does not have good interpretability for features; 2) LDA is sensitive to noise; 3) LDA is sensitive to the selection of the number of projection directions. In this paper, a novel feature extraction method called robust sparse linear discriminant analysis (RSLDA) is proposed to solve the above problems. Specifically, RSLDA adaptively selects the most discriminative features for discriminant analysis by introducing the $\ell_{2,1}$ norm. An orthogonal matrix and a sparse matrix are also simultaneously introduced to guarantee that the extracted features can hold the main energy of the original data and enhance the robustness to noise, and thus RSLDA has the potential to perform better than other discriminant methods. Extensive experiments on six databases demonstrate that the proposed method achieves competitive performance compared with other state-of-the-art feature extraction methods. Moreover, the proposed method is robust to noisy data.
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies focused on the structure design and assistance force optimization of soft LLEs, little work has been conducted on the hardware circuits design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and introduce its hardware circuits design. A soft LLE for hip flexion assistance and a hardware circuits system with scalability are proposed. To assess the efficacy of the soft LLE, experimental tests that evaluate the sensor data acquisition, force tracking performance, lower limb muscle activity, and metabolic cost were conducted. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
score_0–score_13: 1.2, 0.2, 0.2, 0.2, 0.066667, 0, 0, 0, 0, 0, 0, 0, 0, 0
A Novel Blind Watermarking Approach For Medical Image Authentication Using MinEigen Value Features Developing new watermarking approaches that consider special features of medical images becomes increasingly necessary. This paper proposes a new watermarking approach to ensure medical image authenticity, using MinEigen value features, a chaotic sequence, and Quantization Index Modulation (QIM) in the spatial domain. The idea is to choose the 3×3 non-overlapping blocks around MinEigen value points, then embed the watermark bits in these blocks using a novel blind way based on the chaotic sequence and QIM. The proposed technique is purely blind and fast in terms of execution time. Experimental results demonstrate that the proposed approach is robust against all DICOM JPEG compression attacks while keeping high imperceptibility.
Geometric attacks on image watermarking systems Synchronization errors can lead to significant performance loss in image watermarking methods, as the geometric attacks in the Stirmark benchmark software show. The authors describe the most common types of geometric attacks and survey proposed solutions.
Genetic Optimization Of Radial Basis Probabilistic Neural Networks This paper discusses using genetic algorithms (GA) to optimize the structure of radial basis probabilistic neural networks (RBPNN), including how to select hidden centers of the first hidden layer and to determine the controlling parameter of Gaussian kernel functions. In the process of constructing the genetic algorithm, a novel encoding method is proposed for optimizing the RBPNN structure. This encoding method can not only make the selected hidden centers sufficiently reflect the key distribution characteristic in the space of the training samples set and reduce the hidden centers number as far as possible, but also simultaneously determine the optimum controlling parameters of the Gaussian kernel functions matching the selected hidden centers. Additionally, we also constructively propose a new fitness function so as to make the designed RBPNN as simple as possible in the network structure without losing network performance. Finally, we take two benchmark problems, discriminating the two-spiral problem and classifying the iris data, as examples to test and evaluate this designed GA. The experimental results illustrate that our designed GA can significantly reduce the required hidden centers number, compared with the recursive orthogonal least square algorithm (ROLSA) and the modified K-means algorithm (MKA). In particular, by means of statistical experiments it was proved that the RBPNN optimized by our designed GA still has a better generalization performance with respect to the ones by the ROLSA and the MKA, in spite of the network scale having been greatly reduced. Additionally, our experimental results also demonstrate that our designed GA is also suitable for optimizing radial basis function neural networks (RBFNN).
Current status and key issues in image steganography: A survey. Steganography and steganalysis are the prominent research fields in information hiding paradigm. Steganography is the science of invisible communication while steganalysis is the detection of steganography. Steganography means “covered writing” that hides the existence of the message itself. Digital steganography provides potential for private and secure communication that has become the necessity of most of the applications in today’s world. Various multimedia carriers such as audio, text, video, image can act as cover media to carry secret information. In this paper, we have focused only on image steganography. This article provides a review of fundamental concepts, evaluation measures and security aspects of steganography system, various spatial and transform domain embedding schemes. In addition, image quality metrics that can be used for evaluation of stego images and cover selection measures that provide additional security to embedding scheme are also highlighted. Current research trends and directions to improve on existing methods are suggested.
Hybrid local and global descriptor enhanced with colour information. Feature extraction is one of the most important steps in computer vision tasks such as object recognition, image retrieval and image classification. It describes an image by a set of descriptors where the best one gives a high quality description and a low computation. In this study, the authors propose a novel descriptor called histogram of local and global features using speeded up robust featur...
Secure visual cryptography for medical image using modified cuckoo search. Optimal secure visual cryptography for brain MRI medical image is proposed in this paper. Initially, the brain MRI images are selected and then discrete wavelet transform is applied to the brain MRI image for partitioning the image into blocks. Then Gaussian based cuckoo search algorithm is utilized to select the optimal position for every block. Next the proposed technique creates the dual shares from the secret image. Then the secret shares are embedded in the corresponding positions of the blocks. After embedding, the extraction operation is carried out. Here visual cryptographic design is used for the purpose of image authentication and verification. The extracted secret image has dual shares, based on that the receiver views the input image. The authentication and verification of medical image are assisted with the help of target database. All the secret images are registered previously in the target database. The performance of the proposed method is estimated by Peak Signal to Noise Ratio (PSNR), Mean square error (MSE) and normalized correlation. The implementation is done by MATLAB platform.
Digital watermarking techniques for image security: a review Multimedia technology usage is increasing day by day, and protecting secret information from unauthorized use while providing data only to authorized users is a difficult and complex process. By using a watermarking technique, only authorized users can use the data. Digital watermarking is a widely used technology for the protection of digital data; it deals with the embedding of secret data into actual information. Digital watermarking techniques are classified into three major categories, based on domain, type of document (text, image, music or video) and human perception. Performance of the watermarked images is analysed using peak signal-to-noise ratio, mean square error and bit error rate. Watermarking of images has been researched profoundly for its technical and commercial feasibility in media applications such as copyright protection, medical reports (MRI scan and X-ray), annotation and privacy control. This paper reviews watermarking techniques and their merits and demerits.
A New Efficient Medical Image Cipher Based on Hybrid Chaotic Map and DNA Code In this paper, we propose a novel medical image encryption algorithm based on a hybrid model of deoxyribonucleic acid (DNA) masking, a Secure Hash Algorithm SHA-2 and a new hybrid chaotic map. Our study uses DNA sequences and operations and the chaotic hybrid map to strengthen the cryptosystem. The significant advantages of this approach consist in improving the information entropy which is the most important feature of randomness, resisting against various typical attacks and getting good experimental results. The theoretical analysis and experimental results show that the algorithm improves the encoding efficiency, enhances the security of the ciphertext, has a large key space and a high key sensitivity, and is able to resist against the statistical and exhaustive attacks.
On Multi-Access Edge Computing: A Survey of the Emerging 5G Network Edge Cloud Architecture and Orchestration. Multi-access edge computing (MEC) is an emerging ecosystem, which aims at converging telecommunication and IT services, providing a cloud computing platform at the edge of the radio access network. MEC offers storage and computational resources at the edge, reducing latency for mobile end users and utilizing more efficiently the mobile backhaul and core networks. This paper introduces a survey on ...
An effective implementation of the Lin–Kernighan traveling salesman heuristic This paper describes an implementation of the Lin–Kernighan heuristic, one of the most successful methods for generating optimal or near-optimal solutions for the symmetric traveling salesman problem (TSP). Computational tests show that the implementation is highly effective. It has found optimal solutions for all solved problem instances we have been able to obtain, including a 13,509-city problem (the largest non-trivial problem instance solved to optimality today).
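The Lin–Kernighan heuristic above generalizes the basic 2-opt move (reverse a tour segment whenever that shortens the tour). The following is a minimal 2-opt sketch for illustration only, not the paper's implementation; the function names are my own.

```python
import math

def tour_length(tour, pts):
    """Total length of a closed tour over 2-D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Repeatedly reverse segments while that shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour
```

Lin–Kernighan improves on this by chaining variable-depth sequences of such edge exchanges instead of single reversals.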
Exoskeletons for human power augmentation The first load-bearing and energetically autonomous exoskeleton, called the Berkeley Lower Extremity Exoskeleton (BLEEX) walks at the average speed of two miles per hour while carrying 75 pounds of load. The project, funded in 2000 by the Defense Advanced Research Project Agency (DARPA) tackled four fundamental technologies: the exoskeleton architectural design, a control algorithm, a body LAN to host the control algorithm, and an on-board power unit to power the actuators, sensors and the computers. This article gives an overview of the BLEEX project.
Assist-As-Needed Training Paradigms For Robotic Rehabilitation Of Spinal Cord Injuries This paper introduces a new "assist-as-needed" (AAN) training paradigm for rehabilitation of spinal cord injuries via robotic training devices. In the pilot study reported in this paper, nine female adult Swiss-Webster mice were divided into three groups, each experiencing a different robotic training control strategy: a fixed training trajectory (Fixed Group, A), an AAN training method without interlimb coordination (Band Group, B), and an AAN training method with bilateral hindlimb coordination (Window Group, C). Fourteen days after complete transection at the mid-thoracic level, the mice were robotically trained to step in the presence of an acutely administered serotonin agonist, quipazine, for a period of six weeks. The mice that received AAN training (Groups B and C) show higher levels of recovery than Group A mice, as measured by the number, consistency, and periodicity of steps realized during testing sessions. Group C displays a higher incidence of alternating stepping than Group B. These results indicate that this training approach may be more effective than fixed trajectory paradigms in promoting robust post-injury stepping behavior. Furthermore, the constraint of interlimb coordination appears to be an important contribution to successful training.
An ID-Based Linearly Homomorphic Signature Scheme and Its Application in Blockchain. Identity-based cryptosystems mean that public keys can be directly derived from user identifiers, such as telephone numbers, email addresses, and social insurance number, and so on. So they can simplify key management procedures of certificate-based public key infrastructures and can be used to realize authentication in blockchain. Linearly homomorphic signature schemes allow to perform linear computations on authenticated data. And the correctness of the computation can be publicly verified. Although a series of homomorphic signature schemes have been designed recently, there are few homomorphic signature schemes designed in identity-based cryptography. In this paper, we construct a new ID-based linear homomorphic signature scheme, which avoids the shortcomings of the use of public-key certificates. The scheme is proved secure against existential forgery on adaptively chosen message and ID attack under the random oracle model. The ID-based linearly homomorphic signature schemes can be applied in e-business and cloud computing. Finally, we show how to apply it to realize authentication in blockchain.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
scores (score_0–score_13): 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.05, 0, 0, 0, 0, 0, 0
Anomaly-based intrusion detection system using multi-objective grey wolf optimisation algorithm The rapid development of information technology leads to an increasing number of devices connected to the Internet. Besides, the number of network attacks has also increased. Accordingly, there is an urgent demand to design a defence system proficient in discovering new kinds of attacks. One of the most effective protection systems is the intrusion detection system (IDS). The IDS is an intelligent system that monitors and inspects the network packets to identify abnormal behavior. In addition, the network packets comprise many attributes, and many of these attributes are irrelevant and repetitive, which degrades the performance of the IDS and overwhelms the system resources. A feature selection technique helps to reduce the computation time and complexity by selecting the optimum subset of features. In this paper, an enhanced anomaly-based IDS model based on the multi-objective grey wolf optimisation (GWO) algorithm is proposed. The GWO algorithm was employed as a feature selection mechanism to identify the most relevant features from the dataset that contribute to high classification accuracy. Furthermore, a support vector machine was used to estimate the capability of the selected features in predicting the attacks accurately. Moreover, 20% of the NSL-KDD dataset was used to demonstrate the effectiveness of the proposed approach through different attack scenarios. The experimental results revealed that the proposed approach obtains classification accuracies of 93.64%, 91.01%, 57.72% and 53.7% for DoS, Probe, R2L, and U2R attacks respectively. Finally, the proposed approach was compared with other existing approaches and achieves significant results.
On the History of the Minimum Spanning Tree Problem It is standard practice among authors discussing the minimum spanning tree problem to refer to the work of Kruskal(1956) and Prim (1957) as the sources of the problem and its first efficient solutions, despite the citation by both of Boruvka (1926) as a predecessor. In fact, there are several apparently independent sources and algorithmic solutions of the problem. They have appeared in Czechoslovakia, France, and Poland, going back to the beginning of this century. We shall explore and compare these works and their motivations, and relate them to the most recent advances on the minimum spanning tree problem.
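Kruskal's 1956 procedure discussed above can be sketched with a union-find structure; this is a minimal modern illustration, not any of the historical formulations surveyed in the paper.

```python
def kruskal(n, edges):
    """Kruskal's MST: scan edges by increasing weight, keeping those
    that join two distinct components.
    edges: list of (weight, u, v); returns (total_weight, chosen_edges)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total, chosen = 0, []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:           # edge connects two components: keep it
            parent[ru] = rv
            total += w
            chosen.append((u, v))
    return total, chosen
```

Borůvka's earlier 1926 method instead grows all components in parallel, each grabbing its cheapest outgoing edge per round.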
Smart home energy management system using IEEE 802.15.4 and zigbee Wireless personal area networks and wireless sensor networks are rapidly gaining popularity, and the IEEE 802.15 Wireless Personal Area Working Group has defined multiple standards so as to cater to the requirements of different applications. The ubiquitous home network has gained widespread attention due to its seamless integration into everyday life. This innovative system transparently unifies various home appliances, smart sensors and energy technologies. The smart energy market requires two types of ZigBee networks, for device control and energy management. Today, organizations use IEEE 802.15.4 and ZigBee to effectively deliver solutions for a variety of areas including consumer electronic device control, energy management and efficiency, home and commercial building automation, as well as industrial plant management. We present the design of a multi-sensing, heating and air-conditioning system and actuation application for home users: a sensor network-based smart light control system for smart home and energy control production. This paper designs smart home device descriptions and standard practices for demand response and load management "Smart Energy" applications needed in a smart-energy-based residential or light commercial environment. The control application domains included in this initial version are sensing device control, pricing, and demand response and load control applications. This paper introduces smart home interfaces and device definitions to allow interoperability among ZigBee devices produced by various manufacturers of electrical equipment, meters, and smart energy enabling products. We introduce the proposed home energy control system design that provides intelligent services for users and demonstrate its implementation using a real testbed.
Bee life-based multi constraints multicast routing optimization for vehicular ad hoc networks. A vehicular ad hoc network (VANET) is a subclass of mobile ad hoc networks, considered one of the most important approaches of intelligent transportation systems (ITS). It allows inter-vehicle communication in which movement is restricted by a VANET mobility model and supported by some roadside base stations as fixed infrastructure. Multicasting provides different traffic information to a limited number of vehicle drivers by a parallel transmission. However, it represents a very important challenge in the application of vehicular ad hoc networks, especially in the case of network scalability. In the applications of this sensitive field, it is essential to transmit correct data anywhere and at any time. Consequently, the VANET routing protocols should be adapted appropriately and effectively meet the quality of service (QoS) requirements in an optimized multicast routing. In this paper, we propose a novel bee colony optimization algorithm called the bees life algorithm (BLA), applied to solve the quality of service multicast routing problem (QoS-MRP) for vehicular ad hoc networks as an NP-complete problem with multiple constraints. It is considered a swarm-based algorithm which closely imitates the life of the colony. It follows the two important behaviors in the nature of bees, which are reproduction and food foraging. BLA is applied to solve QoS-MRP with four objectives, which are cost, delay, jitter, and bandwidth. It is also subject to three constraints, which are maximum allowed delay, maximum allowed jitter and minimum requested bandwidth. In order to evaluate the performance and the effectiveness of this proposal, implemented in C++ and integrated at the routing protocol level, a simulation study has been performed using the network simulator (NS2) based on a mobility model of VANET. The comparisons of the experimental results show that the proposed algorithm efficiently outperformed genetic algorithm (GA), bees algorithm (BA) and marriage in honey bees optimization (MBO) algorithm as state-of-the-art conventional metaheuristics applied to the QoS-MRP problem with the same simulation parameters.
On the Spatiotemporal Traffic Variation in Vehicle Mobility Modeling Several studies have shown the importance of realistic micromobility and macromobility modeling in vehicular ad hoc networks (VANETs). At the macroscopic level, most researchers focus on a detailed and accurate description of road topology. However, a key factor often overlooked is a spatiotemporal configuration of vehicular traffic. This factor greatly influences network topology and topology variations. Indeed, vehicle distribution has high spatial and temporal diversity that depends on the time of the day and place attraction. This diversity impacts the quality of radio links and, thus, network topology. In this paper, we propose a new mobility model for vehicular networks in urban and suburban environments. To reproduce realistic network topology and topological changes, the model uses real static and dynamic data on the environment. The data concern particularly the topographic and socioeconomic characteristics of infrastructures and the spatiotemporal population distribution. We validate our model by comparing the simulation results with real data derived from individual displacement survey. We also present statistics on network topology, which show the interest of taking into account the spatiotemporal mobility variation.
A bio-inspired clustering in mobile adhoc networks for internet of things based on honey bee and genetic algorithm In mobile ad hoc networks for the internet of things, the size of the routing table can be reduced with the help of a clustering structure. The dynamic nature of MANETs and their complexity make them a type of network with high topology changes. To reduce the topology maintenance overhead, a cluster-based structure may be used. Hence, it is highly desirable to design an algorithm that adapts quickly to topology dynamics and forms balanced and stable clusters. In this article, the formulation of the clustering problem is carried out first. Then, an algorithm based on the honey bee algorithm, genetic algorithm and tabu search (GBTC) for the internet of things is proposed. In this algorithm, an individual (bee) represents a possible clustering structure and its fitness is evaluated on the basis of its stability and load balancing. A method is presented that merges the properties of the honey bee and genetic algorithms to help the population cope with topology dynamics and produce high-quality solutions that are closely related to each other. The simulation results conducted for validation show that the proposed work forms balanced and stable clusters. The simulation results are compared with algorithms that do not consider the dynamic optimization requirements. GBTC outperforms existing algorithms in terms of network lifetime, clustering overhead, etc.
An enhanced QoS CBT multicast routing protocol based on Genetic Algorithm in a hybrid HAP-Satellite system A QoS multicast routing scheme based on Genetic Algorithms (GA) heuristic is presented in this paper. Our proposal, called Constrained Cost–Bandwidth–Delay Genetic Algorithm (CCBD-GA), is applied to a multilayer hybrid platform that includes High Altitude Platforms (HAPs) and a Satellite platform. This GA scheme has been compared with another GA well-known in the literature called Multi-Objective Genetic Algorithm (MOGA) in order to show the proposed algorithm goodness. In order to test the efficiency of GA schemes on a multicast routing protocol, these GA schemes are inserted into an enhanced version of the Core-Based Tree (CBT) protocol with QoS support. CBT and GA schemes are tested in a multilayer hybrid HAP and Satellite architecture and interesting results have been discovered. The joint bandwidth–delay metrics can be very useful in hybrid platforms such as that considered, because it is possible to take advantage of the single characteristics of the Satellite and HAP segments. The HAP segment offers low propagation delay permitting QoS constraints based on maximum end-to-end delay to be met. The Satellite segment, instead, offers high bandwidth capacity with higher propagation delay. The joint bandwidth–delay metric permits the balancing of the traffic load respecting both QoS constraints. Simulation results have been evaluated in terms of HAP and Satellite utilization, bandwidth, end-to-end delay, fitness function and cost of the GA schemes.
On the ratio of optimal integral and fractional covers It is shown that the ratio of optimal integral and fractional covers of a hypergraph does not exceed 1 + log d , where d is the maximum degree. This theorem may replace probabilistic methods in certain circumstances. Several applications are shown.
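The 1 + log d bound above is the one the classic greedy heuristic attains constructively: repeatedly pick the set covering the most uncovered elements, giving a cover within H(d) ≤ 1 + ln d of the fractional optimum. A minimal sketch for illustration (function names are my own, not from the paper):

```python
def greedy_cover(universe, sets):
    """Greedy set cover: repeatedly pick the set covering the most
    still-uncovered elements. The number of sets chosen is within
    H(d) <= 1 + ln d of optimal, where d is the largest set size."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            raise ValueError("instance is not coverable")
        chosen.append(best)
        uncovered -= set(best)
    return chosen
```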
Task Offloading in Vehicular Edge Computing Networks: A Load-Balancing Solution Recently, the rapid advance of vehicular networks has led to the emergence of diverse delay-sensitive vehicular applications such as automatic driving, auto navigation. Note that existing resource-constrained vehicles cannot adequately meet these demands on low / ultra-low latency. By offloading parts of the vehicles’ compute-intensive tasks to the edge servers in proximity, mobile edge computing is envisioned as a promising paradigm, giving rise to the vehicular edge computing networks (VECNs). However, most existing works on task offloading in VECNs did not take the load balancing of the computation resources at the edge servers into account. To address these issues and given the high dynamics of vehicular networks, we introduce fiber-wireless (FiWi) technology to enhance VECNs, due to its advantages on centralized network management and supporting multiple communication techniques. Aiming to minimize the processing delay of the vehicles’ computation tasks, we propose a software-defined networking (SDN) based load-balancing task offloading scheme in FiWi enhanced VECNs, where SDN is introduced to provide supports for the centralized network and vehicle information management. Extensive analysis and numerical results corroborate that our proposed load-balancing scheme can achieve superior performance on processing delay reduction by utilizing the edge servers’ computation resources more efficiently.
A hierarchical type-2 fuzzy logic control architecture for autonomous mobile robots Autonomous mobile robots navigating in changing and dynamic unstructured environments like the outdoor environments need to cope with large amounts of uncertainties that are inherent of natural environments. The traditional type-1 fuzzy logic controller (FLC) using precise type-1 fuzzy sets cannot fully handle such uncertainties. A type-2 FLC using type-2 fuzzy sets can handle such uncertainties to produce a better performance. In this paper, we present a novel reactive control architecture for autonomous mobile robots that is based on type-2 FLC to implement the basic navigation behaviors and the coordination between these behaviors to produce a type-2 hierarchical FLC. In our experiments, we implemented this type-2 architecture in different types of mobile robots navigating in indoor and outdoor unstructured and challenging environments. The type-2-based control system dealt with the uncertainties facing mobile robots in unstructured environments and resulted in a very good performance that outperformed the type-1-based control system while achieving a significant rule reduction compared to the type-1 system.
Multi-stage genetic programming: A new strategy to nonlinear system modeling This paper presents a new multi-stage genetic programming (MSGP) strategy for modeling nonlinear systems. The proposed strategy is based on incorporating the individual effect of predictor variables and the interactions among them to provide more accurate simulations. According to the MSGP strategy, an efficient formulation for a problem comprises different terms. In the first stage of the MSGP-based analysis, the output variable is formulated in terms of an influencing variable. Thereafter, the error between the actual and the predicted value is formulated in terms of a new variable. Finally, the interaction term is derived by formulating the difference between the actual values and the values predicted by the individually developed terms. The capabilities of MSGP are illustrated by applying it to the formulation of different complex engineering problems. The problems analyzed herein include the following: (i) simulation of pH neutralization process, (ii) prediction of surface roughness in end milling, and (iii) classification of soil liquefaction conditions. The validity of the proposed strategy is confirmed by applying the derived models to the parts of the experimental results that were not included in the analyses. Further, the external validation of the models is verified using several statistical criteria recommended by other researchers. The MSGP-based solutions are capable of effectively simulating the nonlinear behavior of the investigated systems. The results of MSGP are found to be more accurate than those of standard GP and artificial neural network-based models.
Placing Virtual Machines to Optimize Cloud Gaming Experience Optimizing cloud gaming experience is no easy task due to the complex tradeoff between gamer quality of experience (QoE) and provider net profit. We tackle the challenge and study an optimization problem to maximize the cloud gaming provider's total profit while achieving just-good-enough QoE. We conduct measurement studies to derive the QoE and performance models. We formulate and optimally solve the problem. The optimization problem has exponential running time, and we develop an efficient heuristic algorithm. We also present an alternative formulation and algorithms for closed cloud gaming services with dedicated infrastructures, where the profit is not a concern and overall gaming QoE needs to be maximized. We present a prototype system and testbed using off-the-shelf virtualization software, to demonstrate the practicality and efficiency of our algorithms. Our experience on realizing the testbed sheds some lights on how cloud gaming providers may build up their own profitable services. Last, we conduct extensive trace-driven simulations to evaluate our proposed algorithms. The simulation results show that the proposed heuristic algorithms: (i) produce close-to-optimal solutions, (ii) scale to large cloud gaming services with 20,000 servers and 40,000 gamers, and (iii) outperform the state-of-the-art placement heuristic, e.g., by up to 3.5 times in terms of net profits.
Adversarial Example Generation with Syntactically Controlled Paraphrase Networks. We propose syntactically controlled paraphrase networks (SCPNs) and use them to generate adversarial examples. Given a sentence and a target syntactic form (e.g., a constituency parse), SCPNs are trained to produce a paraphrase of the sentence with the desired syntax. We show it is possible to create training data for this task by first doing backtranslation at a very large scale, and then using a parser to label the syntactic transformations that naturally occur during this process. Such data allows us to train a neural encoder-decoder model with extra inputs to specify the target syntax. A combination of automated and human evaluations show that SCPNs generate paraphrases that follow their target specifications without decreasing paraphrase quality when compared to baseline (uncontrolled) paraphrase systems. Furthermore, they are more capable of generating syntactically adversarial examples that both (1) fool pretrained models and (2) improve the robustness of these models to syntactic variation when used to augment their training data.
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies focused on the structure design and assistance force optimization of soft LLEs, little work has been conducted on the hardware circuit design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and introduce its hardware circuit design. A soft LLE for hip flexion assistance and a scalable hardware circuit system were proposed. To assess the efficacy of the soft LLE, experimental tests were conducted to evaluate the sensor data acquisition, force tracking performance, lower limb muscle activity and metabolic cost. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
scores (score_0–score_13): 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0
An improved adaptive NSGA-II with multi-population algorithm The NSGA-II algorithm uses a single population and a single crossover operator, which limits the search performance of the algorithm to a certain extent. This paper presents an improved version of the NSGA-II algorithm, named adaptive multi-population NSGA-II (AMP-NSGA-II), that divides the original population into multiple subpopulations and assigns a different crossover operator to each. It introduces an excellent solution set (EXS), which draws the individuals in the EXS set close to the Pareto front and improves the convergence performance of the algorithm. Based on an analysis of the EXS set, the size of each subpopulation can be dynamically adjusted, which improves adaptability to different problems. Finally, computational results on benchmark multi-objective problems show that the proposed AMP-NSGA-II algorithm is effective and is competitive with some state-of-the-art multi-objective evolutionary algorithms in the literature.
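At the heart of NSGA-II's non-dominated sorting sits a simple Pareto-dominance test; a minimal sketch of that test and of extracting the first front (standard NSGA-II machinery, not the AMP-NSGA-II extensions, and the names are my own):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def first_front(pop):
    """Return the non-dominated (rank-0) members of a population of
    objective vectors -- the front non-dominated sorting peels off first."""
    return [p for p in pop
            if not any(dominates(q, p) for q in pop if q != p)]
```

Repeatedly removing the first front and re-running the test yields the full rank assignment used for selection.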
Multi-stage genetic programming: A new strategy to nonlinear system modeling This paper presents a new multi-stage genetic programming (MSGP) strategy for modeling nonlinear systems. The proposed strategy is based on incorporating the individual effect of predictor variables and the interactions among them to provide more accurate simulations. According to the MSGP strategy, an efficient formulation for a problem comprises different terms. In the first stage of the MSGP-based analysis, the output variable is formulated in terms of an influencing variable. Thereafter, the error between the actual and the predicted value is formulated in terms of a new variable. Finally, the interaction term is derived by formulating the difference between the actual values and the values predicted by the individually developed terms. The capabilities of MSGP are illustrated by applying it to the formulation of different complex engineering problems. The problems analyzed herein include the following: (i) simulation of pH neutralization process, (ii) prediction of surface roughness in end milling, and (iii) classification of soil liquefaction conditions. The validity of the proposed strategy is confirmed by applying the derived models to the parts of the experimental results that were not included in the analyses. Further, the external validation of the models is verified using several statistical criteria recommended by other researchers. The MSGP-based solutions are capable of effectively simulating the nonlinear behavior of the investigated systems. The results of MSGP are found to be more accurate than those of standard GP and artificial neural network-based models.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb for designing the GA operators and selecting GA parameters; instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probabilistically. Because no crossover rate or mutation rate needs to be selected, the proposed improved GA can be more easily applied to a problem than conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional GA and other methods.
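The conditional-operator idea can be sketched as follows; the concrete conditions (a parent-difference threshold for crossover, a duplicate test for mutation) are illustrative assumptions, since the paper's exact rules are not reproduced here, and `conditional_crossover` / `conditional_mutation` are hypothetical names:

```python
import random

def conditional_crossover(p1, p2, min_hamming=2):
    """Apply one-point crossover only when the parents differ in at least
    `min_hamming` genes; otherwise return unchanged copies (no crossover rate)."""
    diff = sum(a != b for a, b in zip(p1, p2))
    if diff < min_hamming:
        return p1[:], p2[:]
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def conditional_mutation(child, parent):
    """Flip one random bit only when the child duplicates its parent, so
    mutation repairs lost diversity instead of firing at a fixed rate."""
    if child != parent:
        return child[:]
    out = child[:]
    i = random.randrange(len(out))
    out[i] = 1 - out[i]
    return out
```

Because both operators fire on a condition rather than a probability, no crossover or mutation rate has to be tuned, which mirrors the claimed advantage over the conventional GA.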
Hybrid Whale Optimization Algorithm with simulated annealing for feature selection. •Four hybrid feature selection methods for classification task are proposed.•Our hybrid method combines Whale Optimization Algorithm with simulated annealing.•Eighteen UCI datasets were used in the experiments.•Our approaches result a higher accuracy by using less number of features.
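The simulated-annealing ingredient of such a hybrid typically reduces to the Metropolis acceptance test. Below is a minimal sketch assuming minimization of a feature-subset cost; `sa_accept` and the `rng` hook are hypothetical names, and the exact WOA/SA coupling of the paper is not reproduced:

```python
import math
import random

def sa_accept(curr_cost, cand_cost, temperature, rng=random.random):
    """Metropolis criterion: always accept an improvement; accept a worse
    candidate with probability exp(-(cand - curr) / T)."""
    if cand_cost <= curr_cost:
        return True
    if temperature <= 0:
        return False
    return rng() < math.exp(-(cand_cost - curr_cost) / temperature)
```

At high temperature worse subsets are accepted often (exploration); as the temperature cools, acceptance tightens toward pure improvement, which is how SA sharpens the exploitation phase of the whale optimizer.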
Solving the dynamic weapon target assignment problem by an improved artificial bee colony algorithm with heuristic factor initialization. •Put forward an improved artificial bee colony algorithm based on ranking selection and elite guidance.•Put forward 4 rule-based heuristic factors: Wc, Rc, TRc and TRcL.•The heuristic factors are used in population initialization to improve the quality of the initial solutions in DWTA solving.•The heuristic factor initialization method is combined with the improved ABC algorithm to solve the DWTA problem.
Self-adaptive mutation differential evolution algorithm based on particle swarm optimization Differential evolution (DE) is an effective evolutionary algorithm for global optimization, and widely applied to solve different optimization problems. However, the convergence speed of DE will be slower in the later stage of the evolution and it is more likely to get stuck at a local optimum. Moreover, the performance of DE is sensitive to its mutation strategies and control parameters. Therefore, a self-adaptive mutation differential evolution algorithm based on particle swarm optimization (DEPSO) is proposed to improve the optimization performance of DE. DEPSO can effectively utilize an improved DE/rand/1 mutation strategy with stronger global exploration ability and PSO mutation strategy with higher convergence ability. As a result, the population diversity can be maintained well in the early stage of the evolution, and the faster convergence speed can be obtained in the later stage of the evolution. The performance of the proposed DEPSO is evaluated on 30-dimensional and 100-dimensional functions. The experimental results indicate that DEPSO can significantly improve the global convergence performance of the conventional DE and thus avoid premature convergence, and its average performance is better than those of the conventional DE, PSO and the compared algorithms. Moreover, DEPSO is applied to solve arrival flights scheduling and the optimization results show that it can optimize the sequence and decrease the delay time.
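The baseline DE/rand/1 mutation mentioned above has the standard form v = x_r1 + F * (x_r2 - x_r3). Here is a minimal sketch of that operator alone; the improved variant and the PSO coupling of DEPSO are not reproduced, and `de_rand_1` is an illustrative name:

```python
import random

def de_rand_1(pop, i, F=0.5, rng=random):
    """DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3), where r1, r2, r3
    are distinct random indices, all different from the target index i."""
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.sample(idx, 3)
    return [a + F * (b - c) for a, b, c in zip(pop[r1], pop[r2], pop[r3])]
```

The scaled difference vector F * (x_r2 - x_r3) is what gives DE its global exploration ability; DEPSO's contribution is to switch between this and a PSO-style update as the evolution progresses.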
An improved artificial bee colony algorithm for balancing local and global search behaviors in continuous optimization The artificial bee colony (ABC for short) algorithm is a population-based iterative optimization algorithm proposed for solving optimization problems with a continuously-structured solution space. Although ABC is equipped with a powerful global search capability, this capability can cause poor intensification around found solutions and a slow convergence problem. These issues originate from the search equations proposed for the employed and onlooker bees, which update only one decision variable at each trial. In order to address these drawbacks of the basic ABC algorithm, we introduce six search equations for the algorithm: three of them are used by the employed bees and the rest by the onlooker bees. Moreover, each onlooker agent can modify three dimensions or decision variables of a food source, which represents a possible solution of the optimization problem, at each attempt. The proposed variant of the ABC algorithm is applied to solve basic, CEC2005, CEC2014 and CEC2015 benchmark functions. The obtained results are compared with the results of state-of-the-art variants of the basic ABC algorithm, the artificial algae algorithm, the particle swarm optimization algorithm and its variants, the gravitational search algorithm and its variants, and others. Comparisons are conducted to measure the solution quality, robustness and convergence characteristics of the algorithms. The obtained results and comparisons experimentally validate the proposed ABC variant and its success in solving the continuous optimization problems dealt with in the study.
On the security of public key protocols Recently the use of public key encryption to provide secure network communication has received considerable attention. Such public key systems are usually effective against passive eavesdroppers, who merely tap the lines and try to decipher the message. It has been pointed out, however, that an improperly designed protocol could be vulnerable to an active saboteur, one who may impersonate another user or alter the message being transmitted. Several models are formulated in which the security of protocols can be discussed precisely. Algorithms and characterizations that can be used to determine protocol security in these models are given.
QoE-Driven Edge Caching in Vehicle Networks Based on Deep Reinforcement Learning The Internet of vehicles (IoV) is a large information interaction network that collects information on vehicles, roads and pedestrians. One of the important uses of vehicle networks is to meet the entertainment needs of driving users through communication between vehicles and roadside units (RSUs). Due to the limited storage space of RSUs, determining the content cached in each RSU is a key challenge. With the development of 5G and video editing technology, short video systems have become increasingly popular. Current widely used cache update methods, such as partial file precaching and content popularity- and user interest-based determination, are inefficient for such systems. To solve this problem, this paper proposes a QoE-driven edge caching method for the IoV based on deep reinforcement learning. First, a class-based user interest model is established. Compared with the traditional file popularity- and user interest distribution-based cache update methods, the proposed method is more suitable for systems with a large number of small files. Second, a quality of experience (QoE)-driven RSU cache model is established based on the proposed class-based user interest model. Third, a deep reinforcement learning method is designed to address the QoE-driven RSU cache update issue effectively. The experimental results verify the effectiveness of the proposed algorithm.
Image information and visual quality Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Image QA algorithms generally interpret image quality as fidelity or similarity with a "reference" or "perfect" image in some perceptual space. Such "full-reference" QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by signal fidelity measures. In this paper, we approach the image QA problem as an information fidelity problem. Specifically, we propose to quantify the loss of image information to the distortion process and explore the relationship between image information and visual quality. QA systems are invariably involved with judging the visual quality of "natural" images and videos that are meant for "human consumption." Researchers have developed sophisticated models to capture the statistics of such natural signals. Using these models, we previously presented an information fidelity criterion for image QA that related image quality with the amount of information shared between a reference and a distorted image. In this paper, we propose an image information measure that quantifies the information that is present in the reference image and how much of this reference information can be extracted from the distorted image. Combining these two quantities, we propose a visual information fidelity measure for image QA. We validate the performance of our algorithm with an extensive subjective study involving 779 images and show that our method outperforms recent state-of-the-art image QA algorithms by a sizeable margin in our simulations. The code and the data from the subjective study are available at the LIVE website.
Stabilization of switched continuous-time systems with all modes unstable via dwell time switching Stabilization of switched systems composed fully of unstable subsystems is one of the most challenging problems in the field of switched systems. In this brief paper, a sufficient condition ensuring the asymptotic stability of switched continuous-time systems with all modes unstable is proposed. The main idea is to exploit the stabilization property of switching behaviors to compensate the state divergence made by unstable modes. Then, by using a discretized Lyapunov function approach, a computable sufficient condition for switched linear systems is proposed in the framework of dwell time; it is shown that the time intervals between two successive switching instants are required to be confined by a pair of upper and lower bounds to guarantee the asymptotic stability. Based on derived results, an algorithm is proposed to compute the stability region of admissible dwell time. A numerical example is proposed to illustrate our approach.
Software-Defined Networking: A Comprehensive Survey The Internet has led to the creation of a digital society, where (almost) everything is connected and is accessible from anywhere. However, despite their widespread adoption, traditional IP networks are complex and very hard to manage. It is both difficult to configure the network according to predefined policies, and to reconfigure it to respond to faults, load, and changes. To make matters even more difficult, current networks are also vertically integrated: the control and data planes are bundled together. Software-defined networking (SDN) is an emerging paradigm that promises to change this state of affairs, by breaking vertical integration, separating the network's control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network. The separation of concerns, introduced between the definition of network policies, their implementation in switching hardware, and the forwarding of traffic, is key to the desired flexibility: by breaking the network control problem into tractable pieces, SDN makes it easier to create and introduce new abstractions in networking, simplifying network management and facilitating network evolution. In this paper, we present a comprehensive survey on SDN. We start by introducing the motivation for SDN, explain its main concepts and how it differs from traditional networking, its roots, and the standardization activities regarding this novel paradigm. Next, we present the key building blocks of an SDN infrastructure using a bottom-up, layered approach. We provide an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications. We also look at cross-layer problems such as debugging and troubleshooting. In an effort to anticipate the future evolution of this new paradigm, we discuss the main ongoing research efforts and challenges of SDN. In particular, we address the design of switches and control platforms - with a focus on aspects such as resiliency, scalability, performance, security, and dependability - as well as new opportunities for carrier transport networks and cloud providers. Last but not least, we analyze the position of SDN as a key enabler of a software-defined environment.
An ID-Based Linearly Homomorphic Signature Scheme and Its Application in Blockchain. Identity-based cryptosystems mean that public keys can be directly derived from user identifiers, such as telephone numbers, email addresses, and social insurance numbers. They can therefore simplify the key management procedures of certificate-based public key infrastructures and can be used to realize authentication in blockchain. Linearly homomorphic signature schemes allow linear computations to be performed on authenticated data, and the correctness of the computation can be publicly verified. Although a series of homomorphic signature schemes have been designed recently, few homomorphic signature schemes have been designed in identity-based cryptography. In this paper, we construct a new ID-based linearly homomorphic signature scheme, which avoids the shortcomings of the use of public-key certificates. The scheme is proved secure against existential forgery on adaptively chosen message and ID attacks under the random oracle model. ID-based linearly homomorphic signature schemes can be applied in e-business and cloud computing. Finally, we show how to apply the scheme to realize authentication in blockchain.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
On-the-fly (D)DoS attack mitigation in SDN using Deep Neural Network-based rate limiting Software Defined Networking (SDN) has emerged as a promising paradigm offering an unprecedented programmability, scalability and fine-grained control over forwarding elements (FE). Mainly, SDN decouples the forwarding plane from the control plane which is moved to a central controller that is in charge of taking routing decisions in the network. However, SDN is rife with vulnerabilities so that several network attacks, especially Distributed Denial of Service (DDoS), can be launched from compromised hosts connected to switches. DDoS attacks can easily overload the controller processing capacity and flood switch flow-tables. This paper deals with the security issue in SDN. It proposes a real-time protection against DDoS attacks that is based on a controller-side sliding window rate limiting approach which relies on a weighted abstraction of the underlying network. A weight defines the allowable amount of data that can be transmitted by a node and is dynamically updated according to its contribution to: (1) the queueing capacity of the controller, and (2) the number of flow-rules in the switch. Hence, a new deep learning algorithm, denoted the Parallel Online Deep Learning algorithm (PODL), is defined in order to update weights on the fly according to both aforementioned constraints simultaneously. Furthermore, the behavior of each host and each switch is evaluated through a measure of trustworthiness which is used to penalize misbehaving ones by prohibiting new flow requests or PacketIn messages for a period of time. Host trustworthiness is based on their weights while switch trustworthiness is achieved through a computation of the Average Nearest-Neighbor Degree (ANND). Realistic experiments show that the proposed solution succeeds in minimizing the impact of DDoS attacks on both the controllers and the switches regarding the PacketIn arrival rate at the controller, the rate of accepted requests and the flow-table usage.
Adaptive Clustering with Feature Ranking for DDoS Attacks Detection Distributed Denial of Service (DDoS) attacks pose an increasing threat to the current internet. The detection of such attacks plays an important role in maintaining the security of networks. In this paper, we propose a novel adaptive clustering method combined with feature ranking for DDoS attacks detection. First, based on the analysis of network traffic, preliminary variables are selected. Second, the Modified Global K-means algorithm (MGKM) is used as the basic incremental clustering algorithm to identify the cluster structure of the target data. Third, the linear correlation coefficient is used for feature ranking. Lastly, the feature ranking result is used to inform and recalculate the clusters. This adaptive process can make worthwhile adjustments to the working feature vector according to different patterns of DDoS attacks, and can improve the quality of the clusters and the effectiveness of the clustering algorithm. The experimental results demonstrate that our method is effective and adaptive in detecting the separate phases of DDoS attacks.
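Feature ranking by the linear (Pearson) correlation coefficient, as used in the third step above, can be sketched as follows; `rank_features`, the toy samples, and the target vector in the test are illustrative assumptions rather than the paper's pipeline:

```python
import math

def pearson(x, y):
    """Pearson linear correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def rank_features(samples, target):
    """Order feature indices by |Pearson r| with the target, strongest first."""
    n_feat = len(samples[0])
    scores = [abs(pearson([row[j] for row in samples], target))
              for j in range(n_feat)]
    return sorted(range(n_feat), key=lambda j: scores[j], reverse=True)
```

The ranking can then drive which features stay in the working vector when the clusters are recalculated.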
The role of KL divergence in anomaly detection We study the role of Kullback-Leibler divergence in the framework of anomaly detection, where its abilities as a statistic underlying detection have never been investigated in depth. We give an in-principle analysis of network attack detection, showing explicitly that attacks may be masked at minimal cost through 'camouflage'. We illustrate this on both synthetic distributions and ones taken from real traffic.
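The statistic itself is the Kullback-Leibler divergence D_KL(P || Q) = sum_i p_i * log(p_i / q_i) between a baseline and an observed traffic distribution. A minimal sketch over discrete histograms; the `eps` smoothing for empty bins is an assumption, not part of the paper:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """D_KL(P || Q) over discrete distributions p, q; eps guards empty bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q) if pi > 0)
```

Note the asymmetry: D_KL(P || Q) generally differs from D_KL(Q || P), which matters when choosing which distribution plays the baseline role in the detector.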
An Anomaly Detection Model Based on One-Class SVM to Detect Network Intrusions. Intrusion detection plays a decisive role in solving network security problems. Support Vector Machines (SVMs) are one of the most widely used intrusion detection techniques. However, the commonly used two-class SVM algorithms face difficulties in constructing the training dataset, because in many real application scenarios normal connection records are easy to obtain, but attack records are not. We propose an anomaly detection model based on a one-class SVM to detect network intrusions. The one-class SVM adopts only normal network connection records as the training dataset, but after being trained it is able to distinguish normal records from various attacks. This exactly meets the requirements of anomaly detection. Experimental results on the KDDCUP99 dataset show that, compared to a Probabilistic Neural Network (PNN) and C-SVM, our anomaly detection model based on the one-class SVM achieves higher detection rates and yields better average performance in terms of precision, recall and F-value.
Detection and Mitigation of DoS and DDoS Attacks in IoT-Based Stateful SDN : An Experimental Approach. The expected advent of the Internet of Things (IoT) has triggered a large demand of embedded devices, which envisions the autonomous interaction of sensors and actuators while offering all sort of smart services. However, these IoT devices are limited in computation, storage, and network capacity, which makes them easy to hack and compromise. To achieve secure development of IoT, it is necessary to engineer scalable security solutions optimized for the IoT ecosystem. To this end, Software Defined Networking (SDN) is a promising paradigm that serves as a pillar in the fifth generation of mobile systems (5G) that could help to detect and mitigate Denial of Service (DoS) and Distributed DoS (DDoS) threats. In this work, we propose to experimentally evaluate an entropy-based solution to detect and mitigate DoS and DDoS attacks in IoT scenarios using a stateful SDN data plane. The obtained results demonstrate for the first time the effectiveness of this technique targeting real IoT data traffic.
Machine-Learning-Enabled DDoS Attacks Detection in P4 Programmable Networks Distributed Denial of Service (DDoS) attacks represent a major concern in modern Software Defined Networking (SDN), as SDN controllers are sensitive points of failures in the whole SDN architecture. Recently, research on DDoS attacks detection in SDN has focused on investigation of how to leverage data plane programmability, enabled by P4 language, to detect attacks directly in network switches, with marginal involvement of SDN controllers. In order to effectively address cybersecurity management in SDN architectures, we investigate the potential of Artificial Intelligence and Machine Learning (ML) algorithms to perform automated DDoS Attacks Detection (DAD), specifically focusing on Transmission Control Protocol SYN flood attacks. We compare two different DAD architectures, called Standalone and Correlated DAD, where traffic features collection and attack detection are performed locally at network switches or in a single entity (e.g., in SDN controller), respectively. We combine the capability of ML and P4-enabled data planes to implement real-time DAD. Illustrative numerical results show that, for all tested ML algorithms, accuracy, precision, recall and F1-score are above 98% in most cases, and classification time is on the order of a few hundred μs in the worst case. Considering real-time DAD implementation, significant latency reduction is obtained when features are extracted at the data plane by using P4 language.
Accurate Self-Localization in RFID Tag Information Grids Using FIR Filtering Grid navigation spaces nested with the radio-frequency identification (RFID) tags are promising for industrial and other needs, because each tag can deliver information about a local two-dimensional or three-dimensional surrounding. The approach, however, requires high accuracy in vehicle self-localization. Otherwise, errors may lead to collisions; possibly even fatal. We propose a new extended finite impulse response (EFIR) filtering algorithm and show that it meets this need. The EFIR filter requires an optimal averaging interval, but does not involve the noise statistics which are often not well known to the engineer. It is more accurate than the extended Kalman filter (EKF) under real operation conditions and its iterative algorithm has the Kalman form. Better performance of the proposed EFIR filter is demonstrated based on extensive simulations in a comparison to EKF, which is widely used in RFID tag grids. We also show that errors in noise covariances may provoke divergence in EKF, whereas the EFIR filter remains stable and is thus more robust.
Evolutionary computation: comments on the history and current state Evolutionary computation has started to receive significant attention during the last decade, although the origins can be traced back to the late 1950's. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) (with links to genetic programming (GP) and classifier systems (CS)), evolution strategies (ES), and evolutionary programming (EP) by analysis and comparison of their most important constituents (i.e. representations, variation operators, reproduction, and selection mechanism). Finally, we give a brief overview on the manifold of application domains, although this necessarily must remain incomplete
Supporting social navigation on the World Wide Web This paper discusses a navigation behavior on Internet information services, in particular the World Wide Web, which is characterized by pointing out of information using various communication tools. We call this behavior social navigation as it is based on communication and interaction with other users, be that through email, or any other means of communication. Social navigation phenomena are quite common although most current tools (like Web browsers or email clients) offer very little support for it. We describe why social navigation is useful and how it can be better supported in future systems. We further describe two prototype systems that, although originally not designed explicitly as tools for social navigation, provide features that are typical for social navigation systems. One of these systems, the Juggler system, is a combination of a textual virtual environment and a Web client. The other system is a prototype of a Web hotlist organizer, called Vortex. We use both systems to describe fundamental principles of social navigation systems.
Proofs of Storage from Homomorphic Identification Protocols Proofs of storage (PoS) are interactive protocols allowing a client to verify that a server faithfully stores a file. Previous work has shown that proofs of storage can be constructed from any homomorphic linear authenticator (HLA). The latter, roughly speaking, are signature/message authentication schemes where `tags' on multiple messages can be homomorphically combined to yield a `tag' on any linear combination of these messages. We provide a framework for building public-key HLAs from any identification protocol satisfying certain homomorphic properties. We then show how to turn any public-key HLA into a publicly-verifiable PoS with communication complexity independent of the file length and supporting an unbounded number of verifications. We illustrate the use of our transformations by applying them to a variant of an identification protocol by Shoup, thus obtaining the first unbounded-use PoS based on factoring (in the random oracle model).
Design, Implementation, and Experimental Results of a Quaternion-Based Kalman Filter for Human Body Motion Tracking Real-time tracking of human body motion is an important technology in synthetic environments, robotics, and other human-computer interaction applications. This paper presents an extended Kalman filter designed for real-time estimation of the orientation of human limb segments. The filter processes data from small inertial/magnetic sensor modules containing triaxial angular rate sensors, accelerometers, and magnetometers. The filter represents rotation using quaternions rather than Euler angles or axis/angle pairs. Preprocessing of the acceleration and magnetometer measurements using the Quest algorithm produces a computed quaternion input for the filter. This preprocessing reduces the dimension of the state vector and makes the measurement equations linear. Real-time implementation and testing results of the quaternion-based Kalman filter are presented. Experimental results validate the filter design, and show the feasibility of using inertial/magnetic sensor modules for real-time human body motion tracking
Finite-approximation-error-based discrete-time iterative adaptive dynamic programming. In this paper, a new iterative adaptive dynamic programming (ADP) algorithm is developed to solve optimal control problems for infinite horizon discrete-time nonlinear systems with finite approximation errors. First, a new generalized value iteration algorithm of ADP is developed to make the iterative performance index function converge to the solution of the Hamilton-Jacobi-Bellman equation. The ...
Neural network adaptive tracking control for a class of uncertain switched nonlinear systems. •Study the method of the tracking control of the switched uncertain nonlinear systems under arbitrary switching signal controller.•A multilayer neural network adaptive controller with multilayer weight norm adaptive estimation is been designed.•The adaptive law is expand from calculation the second layer weight of neural network to both of the two layers weight.•The controller proposed improve the tracking error performance of the closed-loop system greatly.
Convert Harm Into Benefit: A Coordination-Learning Based Dynamic Spectrum Anti-Jamming Approach This paper mainly investigates the multi-user anti-jamming spectrum access problem. Using the idea of “converting harm into benefit,” the malicious jamming signals projected by the enemy are utilized by the users as coordination signals to guide spectrum coordination. An “internal coordination-external confrontation” multi-user anti-jamming access game model is constructed, and the existence of Nash equilibrium (NE) as well as correlated equilibrium (CE) is demonstrated. A coordination-learning based anti-jamming spectrum access algorithm (CLASA) is designed to achieve the CE of the game. Simulation results show the convergence and effectiveness of the proposed CLASA algorithm and indicate that our approach can help users confront the malicious jammer and coordinate internal spectrum access simultaneously without information exchange. Last but not least, the fairness of the proposed approach under different jamming attack patterns is analyzed, which illustrates that this approach provides fair anti-jamming spectrum access opportunities under complicated jamming patterns.
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
Real-time people movement estimation in large disasters from several kinds of mobile phone data. Recently, an understanding of mass movement in urban areas immediately after large disasters, such as the Great East Japan Earthquake (GEJE), has been needed. In particular, mobile phone data is available as time-varying data. However, much more detailed movement information, based on network flow instead of aggregated data, is needed for appropriate rescue on a real-time basis. Hence, our research aims to estimate real-time human movement during large disasters from several kinds of mobile phone data. In this paper, we simulate the movement of people in the Tokyo metropolitan area in a large disaster situation and obtain several kinds of fragmentary movement observation data from mobile phones. Our approach is to use data assimilation techniques that combine simulation of population movement with observation data. Using sensitivity analysis, the experimental results confirm that the improvement in accuracy depends on the observation data quality, and that the data processing speed satisfies the conditions for real-time estimation.
Higher-order SVD analysis for crowd density estimation This paper proposes a new method to estimate the crowd density based on the combination of higher-order singular value decomposition (HOSVD) and support vector machine (SVM). We first construct a higher-order tensor with all the images in the training set, and apply HOSVD to obtain a small set of orthonormal basis tensors that can span the principal subspace for all the training images. The coordinate, which best describes an image under this set of orthonormal basis tensors, is computed as the density character vector. Furthermore, a multi-class SVM classifier is designed to classify the extracted density character vectors into different density levels. Compared with traditional methods, we can make significant improvements to crowd density estimation. The experimental results show that the accuracy of our method achieves 96.33%, in which the misclassified images are all concentrated in their neighboring categories.
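The HOSVD basis construction can be sketched with NumPy: the left singular vectors of each mode-n unfolding give the orthonormal basis matrices. This sketch assumes the training images are stacked into a single real-valued tensor and omits the SVM classification stage; `unfold` and `hosvd_bases` are illustrative names:

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd_bases(tensor):
    """Left singular vectors of each unfolding give the orthonormal basis
    matrices U_1, ..., U_N of the higher-order SVD (Tucker decomposition)."""
    return [np.linalg.svd(unfold(tensor, n), full_matrices=False)[0]
            for n in range(tensor.ndim)]
```

Projecting a new image onto these bases yields the low-dimensional coordinate that serves as the density character vector for the classifier.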
Crowd density analysis using subspace learning on local binary pattern Crowd density analysis is a crucial component in visual surveillance for security monitoring. This paper proposes a novel approach for crowd density estimation. The main contribution of this paper is two-fold: First, we propose to estimate crowd density at patch level, where the size of each patch varies in such a way as to compensate for the effects of perspective distortions; second, instead of using raw features to represent each patch sample, we propose to learn a discriminant subspace of the high-dimensional Local Binary Pattern (LBP) raw feature vector where samples of different crowd density are optimally separated. The effectiveness of the proposed algorithm is evaluated on the PETS dataset, and the results show that effective dimensionality reduction (DR) techniques significantly enhance the classification accuracy. The performance of the proposed framework is also compared to other frequently used features in crowd density estimation. Our proposed algorithm outperforms the state-of-the-art methods with a significant margin.
An Indoor Pedestrian Positioning Method Using HMM with a Fuzzy Pattern Recognition Algorithm in a WLAN Fingerprint System. With the rapid development of smartphones and wireless networks, indoor location-based services have become more and more prevalent. Due to the sophisticated propagation of radio signals, the Received Signal Strength Indicator (RSSI) shows a significant variation during pedestrian walking, which introduces critical errors in deterministic indoor positioning. To solve this problem, we present a novel method to improve the indoor pedestrian positioning accuracy by embedding a fuzzy pattern recognition algorithm into a Hidden Markov Model. The fuzzy pattern recognition algorithm follows the rule that the RSSI fading has a positive correlation to the distance between the measuring point and the AP location even during a dynamic positioning measurement. Through this algorithm, we use the RSSI variation trend to replace the specific RSSI value to achieve a fuzzy positioning. The transition probability of the Hidden Markov Model is trained by the fuzzy pattern recognition algorithm with pedestrian trajectories. Using the Viterbi algorithm with the trained model, we can obtain a set of hidden location states. In our experiments, we demonstrate that, compared with the deterministic pattern matching algorithm, our method can greatly improve the positioning accuracy and shows robust environmental adaptability.
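The Viterbi decoding step mentioned above (recovering the most likely sequence of hidden location states from a trained HMM) can be sketched as follows. This is the textbook algorithm, not the paper's fingerprinting system; all names are illustrative.

```python
import numpy as np

def viterbi(obs, init, trans, emit):
    """Most likely hidden-state path for an observation sequence.

    obs   : list of observation indices
    init  : (S,)   initial state probabilities
    trans : (S, S) trans[i, j] = P(state j | state i)
    emit  : (S, O) emit[s, o]  = P(obs o | state s)
    """
    S, T = len(init), len(obs)
    delta = np.zeros((T, S))            # best path probability ending in state s
    psi = np.zeros((T, S), dtype=int)   # back-pointers
    delta[0] = init * emit[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * trans
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * emit[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):       # follow back-pointers
        path.append(int(psi[t][path[-1]]))
    return path[::-1]
```

In the paper's setting the transition matrix would come from the fuzzy-pattern-trained model and the observations from RSSI measurements.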
Inferring fine-grained transport modes from mobile phone cellular signaling data Due to the ubiquity of mobile phones, mobile phone network data (e.g., Call Detail Records, CDR; and cellular signaling data, CSD), which are collected by mobile telecommunication operators for maintenance purposes, allow us to potentially study travel behaviors of a high percentage of the whole population, with full temporal coverage at a comparatively low cost. However, extracting mobility information such as transport modes from these data is very challenging, due to their low spatial accuracy and infrequent/irregular temporal characteristics. Existing studies relying on mobile phone network data mostly employed simple rule-based methods with geographic data, and focused on easy-to-detect transport modes (e.g., train and subway) or coarse-grained modes (e.g., public versus private transport). Meanwhile, due to the lack of ground truth data, evaluation of these methods was not reported, or only for aggregate data, and it is thus unclear how well the existing methods can detect modes of individual trips. This article proposes two supervised methods - one combining rule-based heuristics (RBH) with random forest (RF), and the other combining RBH with a fuzzy logic system - and a third, unsupervised method with RBH and k-medoids clustering, to detect fine-grained transport modes from CSD, particularly subway, train, tram, bike, car, and walk. Evaluation with a labeled ground truth dataset shows that the best performing method is the hybrid one with RBH and RF, where a classification accuracy of 73% is achieved when differentiating these modes. To our knowledge, this is the first study that distinguishes fine-grained transport modes in CSD and validates results with ground truth data. This study may thus inform future CSD-based applications in areas such as intelligent transport systems, urban/transport planning, and smart cities.
An aggregation approach to short-term traffic flow prediction In this paper, an aggregation approach is proposed for traffic flow prediction that is based on the moving average (MA), exponential smoothing (ES), autoregressive MA (ARIMA), and neural network (NN) models. The aggregation approach assembles information from relevant time series. The source time series is the traffic flow volume that is collected 24 h/day over several years. The three relevant time series are a weekly similarity time series, a daily similarity time series, and an hourly time series, which can be directly generated from the source time series. The MA, ES, and ARIMA models are selected to give predictions of the three relevant time series. The predictions that result from the different models are used as the basis of the NN in the aggregation stage. The output of the trained NN serves as the final prediction. To assess the performance of the different models, the naïve, ARIMA, nonparametric regression, NN, and data aggregation (DA) models are applied to the prediction of a real vehicle traffic flow, from which data have been collected at a data-collection point that is located on National Highway 107, Guangzhou, Guangdong, China. The outcome suggests that the DA model obtains a more accurate forecast than any individual model alone. The aggregation strategy can offer substantial benefits in terms of improving operational forecasting.
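Two of the base predictors used in the aggregation approach above, the moving average and simple exponential smoothing, can be sketched directly (the ARIMA and NN aggregation stages are omitted; function names are illustrative):

```python
def moving_average_forecast(series, window):
    """One-step-ahead forecast: mean of the last `window` observations."""
    return sum(series[-window:]) / window

def exp_smoothing_forecast(series, alpha):
    """Simple exponential smoothing; the final smoothed level is the forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level
```

In the aggregation scheme, forecasts like these (computed on the weekly, daily, and hourly similarity series) become inputs to the neural network that produces the final prediction.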
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Distributed Representations, Simple Recurrent Networks, And Grammatical Structure In this paper three problems for a connectionist account of language are considered: (1) What is the nature of linguistic representations? (2) How can complex structural relationships such as constituent structure be represented? (3) How can the apparently open-ended nature of language be accommodated by a fixed-resource system? Using a prediction task, a simple recurrent network (SRN) is trained on multiclausal sentences which contain multiply-embedded relative clauses. Principal component analysis of the hidden unit activation patterns reveals that the network solves the task by developing complex distributed representations which encode the relevant grammatical relations and hierarchical constituent structure. Differences between the SRN state representations and the more traditional pushdown store are discussed in the final section.
Social navigation: techniques for building more usable systems
Knowledge harvesting in the big-data era The proliferation of knowledge-sharing communities such as Wikipedia and the progress in scalable information extraction from Web and text sources have enabled the automatic construction of very large knowledge bases. Endeavors of this kind include projects such as DBpedia, Freebase, KnowItAll, ReadTheWeb, and YAGO. These projects provide automatically constructed knowledge bases of facts about named entities, their semantic classes, and their mutual relationships. They contain millions of entities and hundreds of millions of facts about them. Such world knowledge in turn enables cognitive applications and knowledge-centric services like disambiguating natural-language text, semantic search for entities and relations in Web and enterprise data, and entity-oriented analytics over unstructured contents. Prominent examples of how knowledge bases can be harnessed include the Google Knowledge Graph and the IBM Watson question answering system. This tutorial presents state-of-the-art methods, recent advances, research opportunities, and open challenges along this avenue of knowledge harvesting and its applications. Particular emphasis will be on the twofold role of knowledge bases for big-data analytics: using scalable distributed algorithms for harvesting knowledge from Web and text sources, and leveraging entity-centric knowledge for deeper interpretation of and better intelligence with Big Data.
Labels and event processes in the Asbestos operating system Asbestos, a new operating system, provides novel labeling and isolation mechanisms that help contain the effects of exploitable software flaws. Applications can express a wide range of policies with Asbestos's kernel-enforced labels, including controls on interprocess communication and system-wide information flow. A new event process abstraction defines lightweight, isolated contexts within a single process, allowing one process to act on behalf of multiple users while preventing it from leaking any single user's data to others. A Web server demonstration application uses these primitives to isolate private user data. Since the untrusted workers that respond to client requests are constrained by labels, exploited workers cannot directly expose user data except as allowed by application policy. The server application requires 1.4 memory pages per user for up to 145,000 users and achieves connection rates similar to Apache, demonstrating that additional security can come at an acceptable cost.
Beamforming for MISO Interference Channels with QoS and RF Energy Transfer We consider a multiuser multiple-input single-output interference channel where the receivers are characterized by both quality-of-service (QoS) and radio-frequency (RF) energy harvesting (EH) constraints. We consider the power splitting RF-EH technique where each receiver divides the received signal into two parts a) for information decoding and b) for battery charging. The minimum required power that supports both the QoS and the RF-EH constraints is formulated as an optimization problem that incorporates the transmitted power and the beamforming design at each transmitter as well as the power splitting ratio at each receiver. We consider both the cases of fixed beamforming and when the beamforming design is incorporated into the optimization problem. For fixed beamforming we study three standard beamforming schemes, the zero-forcing (ZF), the regularized zero-forcing (RZF) and the maximum ratio transmission (MRT); a hybrid scheme, MRT-ZF, comprised of a linear combination of MRT and ZF beamforming is also examined. The optimal solution for ZF beamforming is derived in closed-form, while optimization algorithms based on second-order cone programming are developed for MRT, RZF and MRT-ZF beamforming to solve the problem. In addition, the joint-optimization of beamforming and power allocation is studied using semidefinite programming (SDP) with the aid of rank relaxation.
Inter-class sparsity based discriminative least square regression Least square regression is a very popular supervised classification method. However, two main issues greatly limit its performance. The first one is that it only focuses on fitting the input features to the corresponding output labels while ignoring the correlations among samples. The second one is that the used label matrix, i.e., zero–one label matrix is inappropriate for classification. To solve these problems and improve the performance, this paper presents a novel method, i.e., inter-class sparsity based discriminative least square regression (ICS_DLSR), for multi-class classification. Different from other methods, the proposed method pursues that the transformed samples have a common sparsity structure in each class. For this goal, an inter-class sparsity constraint is introduced to the least square regression model such that the margins of samples from the same class can be greatly reduced while those of samples from different classes can be enlarged. In addition, an error term with row-sparsity constraint is introduced to relax the strict zero–one label matrix, which allows the method to be more flexible in learning the discriminative transformation matrix. These factors encourage the method to learn a more compact and discriminative transformation for regression and thus has the potential to perform better than other methods. Extensive experimental results show that the proposed method achieves the best performance in comparison with other methods for multi-class classification.
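The plain least square regression baseline that ICS_DLSR extends, regressing features onto a zero-one label matrix with a ridge term, can be sketched as follows. The paper's inter-class sparsity constraint and row-sparse error term are omitted, and all names are illustrative.

```python
import numpy as np

def fit_lsr(X, y, n_classes, lam=1e-3):
    """Ridge-regularised least square regression onto one-hot labels."""
    Y = np.eye(n_classes)[y]                          # zero-one label matrix
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
    return W

def predict_lsr(W, X):
    """Predicted class = column with the largest regression score."""
    return (X @ W).argmax(axis=1)
```

The two limitations the paper targets are visible here: the fit ignores correlations among samples, and the strict zero-one targets force every sample of a class toward the same vertex.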
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
1.2
0.2
0.2
0.2
0.2
0.016667
0
0
0
0
0
0
0
0
Disaster-Resilient Network Upgrade The manifold impacts of the current pandemic have highlighted the importance of reliable communication networks and services. As more and more people and services rely on this critical infrastructure, single link failure resilience is not sufficient anymore; networks must be disaster resilient. In this paper, we analyze the effects of disasters from a connectivity perspective and focus on reducing the likelihood of network disconnection in the event of a disaster through targeted link upgrades. In particular, we formalize the generalized Minimum Cost Disaster Resilient Network Upgrade Problem (DNP) (based on the previously published eFRADIR framework). We prove that this problem is NP-hard and as hard to approximate as the Knapsack Problem (KP). We present several methods for solving the DNP, in particular an ILP and two heuristics. We evaluate their performance on real networks and earthquake data and show that the upgrade cost of our disconnection probability based heuristic is only 3.5% higher than the optimum, while its resource consumption is negligible compared to the ILP.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
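The core of the metric described above, clipped n-gram precision combined with a brevity penalty, reduces for n = 1 to the following sketch. Real BLEU geometrically averages modified precisions for n = 1..4 over a whole corpus; the function name is illustrative.

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """Unigram BLEU: clipped precision times the brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    ref_counts = Counter(ref)
    # clip each candidate word's count by its count in the reference
    clipped = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    precision = clipped / len(cand)
    # penalise candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision
```

Clipping is what stops a candidate from scoring well by repeating a common reference word, and the brevity penalty is what stops trivially short candidates from achieving perfect precision.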
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
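The bidirectional structure described above amounts to running one recurrent pass in each time direction and concatenating the per-step states. A minimal NumPy sketch of the forward computation (untrained tanh units; parameter names are illustrative):

```python
import numpy as np

def rnn_pass(xs, W_x, W_h, b):
    """One tanh-RNN pass over a sequence; returns the hidden state per step."""
    h = np.zeros(W_h.shape[0])
    states = []
    for x in xs:
        h = np.tanh(W_x @ x + W_h @ h + b)
        states.append(h)
    return states

def brnn_states(xs, fwd_params, bwd_params):
    """Concatenate the forward pass with a time-reversed backward pass."""
    hf = rnn_pass(xs, *fwd_params)
    hb = rnn_pass(xs[::-1], *bwd_params)[::-1]   # realign to forward time
    return [np.concatenate([f, b]) for f, b in zip(hf, hb)]
```

Because the backward pass starts from the end of the sequence, the concatenated state at step t summarizes both past and future inputs, which is exactly what removes the "preset future frame" limitation mentioned in the abstract.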
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended for Bob, and non-repudiation of receipt evidences destined for Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with a trusted third party (TTP). For the latter ones we discuss the evolution of the TTP's involvement and, among others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb for designing the GA operators and selecting the GA parameters; instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probabilistically. Because no crossover rate or mutation rate needs to be selected, the proposed improved GA can be applied to a problem more easily than conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional one and other methods.
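A toy GA for set covering with a conditional (rather than probabilistic) mutation rule, here chosen as "mutate only when crossover yields a clone of a parent", might look like the sketch below. The actual conditions used in the paper are not reproduced; everything here, including the penalty weight and selection scheme, is an illustrative assumption.

```python
import random

def cover_cost(mask, subsets, universe):
    """Fitness: number of chosen subsets, heavily penalising uncovered elements."""
    covered = set()
    for bit, s in zip(mask, subsets):
        if bit:
            covered |= s
    return sum(mask) + 10 * len(universe - covered)

def ga_set_cover(subsets, universe, pop_size=20, generations=100, seed=0):
    """Minimal GA: one-point crossover always; mutation only on clone offspring."""
    rng = random.Random(seed)
    n = len(subsets)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: cover_cost(m, subsets, universe))
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]         # one-point crossover
            if child in (a, b):               # conditional mutation
                i = rng.randrange(n)
                child[i] ^= 1
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda m: cover_cost(m, subsets, universe))
```

The point of the conditional operator is visible in the inner loop: no crossover or mutation rate appears anywhere, so there are two fewer parameters to tune.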
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal to develop interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which rise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in specific, in this paper, we enfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidate it based on our comprehensive empirical experience with research in kindergarten settings, both laboratory and real-world contexts, and (iii) provide a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Inspecting and Visualizing Distributed Bayesian Student Models Bayesian Belief Networks provide a principled, mathematically sound, and logically rational mechanism to represent student models. The belief net backbone structure proposed by Reye [14,15] offers a practical way to represent and update Bayesian student models describing both cognitive and social aspects of the learner. Considering students as active participants in the modelling process, this paper explores visualization and inspectability issues of Bayesian student modelling. This paper also presents ViSMod an integrated tool to visualize and inspect distributed Bayesian student models.
A web-based e-learning system for increasing study efficiency by stimulating learner's motivation Due to the opportunities provided by the Internet, more and more people are taking advantage of distance learning courses and during the last few years enormous research efforts have been dedicated to the development of distance learning systems. So far, many e-learning systems are proposed and used practically. However, in these systems the e-learning completion rate is about 30%. One of the reasons is the low study desire when the learner studies the learning materials. In this research, we propose an interactive Web-based e-learning system. The purpose of our system is to increase the e-learning completion rate by stimulating learner's motivation. The proposed system has three subsystems: the learning subsystem, learner support subsystem, and teacher support subsystem. The learning subsystem improves the learner's study desire. The learner support subsystem supports the learner during the study, and the teacher support subsystem supports the teacher to get the learner's study state. To evaluate the proposed system, we developed several experiments and surveys. By using new features such as: display of learner's study history, change of interface color, encourage function, ranking function, self-determination of the study materials, and grouping of learners, the proposed system can increase the learning efficiency.
A Web-Based Tool For Control Engineering Teaching In this article a new tool for control engineering teaching is presented. The tool was implemented using Java applets and is freely accessible through Web. It allows the analysis and simulation of linear control systems and was created to complement the theoretical lectures in basic control engineering courses. The article is not only centered in the description of the tool but also in the methodology to use it and its evaluation in an electrical engineering degree. Two practical problems are included in the manuscript to illustrate the use of the main functions implemented. The developed web-based tool can be accessed through the link http://www.controlweb.cyc.ull.es. (C) 2006 Wiley Periodicals, Inc.
Social navigation in web lectures Web lectures are a form of educational content that differs from classic hypertext in a number of ways. Web lectures are easier to produce and therefore large amounts of material become accumulated in a short time. The recordings are significantly less structured than traditional web based learning content and they are time based media. Both the lack of structure and their time based nature pose difficulties for navigation in web lectures. The approach presented in this paper applies the basic concept of social navigation to facilitate navigation in web lectures. Social navigation support has been successfully employed for hypertext and picture augmented hypertext in the education domain. This paper describes how social navigation can be implemented for web lectures and how it can be used to augment existent navigation features.
Lifelong Learner Modeling for Lifelong Personalized Pervasive Learning Pervasive and ubiquitous computing have the potential to make huge changes in the ways that we will learn, throughout our lives. This paper presents a vision for the lifelong user model as a first class citizen, existing independently of any single application and controlled by the learner. The paper argues that this is a critical foundation for a vision of personalised lifelong learning as well as a form of augmented cognition that enables learners to supplement their own knowledge with readily accessible digital information based on documents that they have accessed or used. The paper presents work that provides foundations for this vision for a lifelong user model. First, it outlines technical issues and research into approaches for addressing them. Then it presents work on the interface between the learner and the lifelong user model because the human issues of control and privacy are so central. The final discussion and conclusions draw upon these to define a roadmap for future research in a selection of the key areas that will underpin this vision of the lifelong user model.
Adaptive Navigation Support Adaptive navigation support is a specific group of technologies that support user navigation in hyperspace, by adapting to the goals, preferences and knowledge of the individual user. These technologies, originally developed in the field of adaptive hypermedia, are becoming increasingly important in several adaptive Web applications, ranging from Web-based adaptive hypermedia to adaptive virtual reality. This chapter provides a brief introduction to adaptive navigation support, reviews major adaptive navigation support technologies and mechanisms, and illustrates these with a range of examples.
Understanding Behaviours and Roles for Social and Adaptive Robots In Education: Teacher's Perspective. In order to establish a long-term relationship between a robot and a child, robots need to learn from the environment, adapt to specific user needs and display behaviours and roles accordingly. Literature shows that certain robot behaviours could negatively impact a child's learning and performance. Therefore, the purpose of the present study is not only to understand teachers' opinions on the existing effective social behaviours and roles but also to understand novel behaviours that can positively influence children's performance in a language learning setting. In this paper, we present our results based on interviews conducted with 8 language teachers to get their opinions on how a robot can efficiently perform behaviour adaptation to influence learning and achieve long-term engagement. We also present results on future directions extracted from the interviews with teachers.
Wireless sensor networks: a survey This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
Delay-Aware Microservice Coordination in Mobile Edge Computing: A Reinforcement Learning Approach As an emerging service architecture, microservice enables decomposition of a monolithic web service into a set of lightweight services which can be executed independently. With mobile edge computing, microservices can be further deployed in edge clouds dynamically, launched quickly, and migrated across edge clouds easily, providing better services for users in proximity. However, user mobility can result in frequent switches of nearby edge clouds, which increases the service delay when users move away from their serving edge clouds. To address this issue, this article investigates microservice coordination among edge clouds to enable seamless and real-time responses to service requests from mobile users. The objective of this work is to devise the optimal microservice coordination scheme which can reduce the overall service delay with low costs. To this end, we first propose a dynamic programming-based offline microservice coordination algorithm that can achieve the globally optimal performance. However, the offline algorithm heavily relies on prior information such as computation request arrivals, time-varying channel conditions and edge clouds' computation capabilities, which is hard to obtain. Therefore, we reformulate the microservice coordination problem using the Markov decision process framework and then propose a reinforcement learning-based online microservice coordination algorithm to learn the optimal strategy. Theoretical analysis proves that the offline algorithm can find the optimal solution while the online algorithm can achieve near-optimal performance. Furthermore, experiments are conducted based on two real-world datasets, i.e., the Telecom base station dataset and the Taxi Track dataset from Shanghai. The experimental results demonstrate that the proposed online algorithm outperforms existing algorithms in terms of service delay and migration costs, and the achieved performance is close to the optimal performance obtained by the offline algorithm.
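The abstract above frames service migration as a Markov decision process solved by reinforcement learning. As a rough illustration of that framing (not the paper's model), the toy below invents a two-edge-cloud MDP: the state is (user location, service location), the action is whether to migrate the microservice, and the cost mixes remote-serving delay with a migration penalty; all constants are made up. Tabular Q-learning then learns when migrating pays off.

```python
import random

# Toy MDP: the user attaches to edge cloud 0 or 1; the microservice runs on
# one of them. Serving remotely adds delay; migrating costs extra.
# State = (user_location, service_location); action 0 = stay, 1 = migrate.
REMOTE_DELAY, MIGRATE_COST = 5.0, 2.0   # illustrative constants

def step(state, action):
    user, svc = state
    if action == 1:
        svc = user                       # migrate the service next to the user
    cost = (REMOTE_DELAY if svc != user else 1.0) + (MIGRATE_COST if action else 0.0)
    user = random.choice([0, 1])         # user moves randomly between clouds
    return (user, svc), -cost            # reward = negative cost

def q_learning(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1):
    Q = {(u, s): [0.0, 0.0] for u in (0, 1) for s in (0, 1)}
    random.seed(0)
    state = (0, 0)
    for _ in range(episodes):
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda x: Q[state][x])
        nxt, r = step(state, a)
        Q[state][a] += alpha * (r + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt
    return Q

Q = q_learning()
```

With these costs the learned policy should keep a co-located service in place and migrate a remote one, since a single remote request costs more than a migration.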
Trust in Automation: Designing for Appropriate Reliance. Automation is often problematic because people fail to rely upon it appropriately. Because people respond to technology socially, trust influences reliance on automation. In particular, trust guides reliance when complexity and unanticipated situations make a complete understanding of the automation impractical. This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives. It considers how the context, automation characteristics, and cognitive processes affect the appropriateness of trust. The context in which the automation is used influences automation performance and provides a goal-oriented perspective to assess automation characteristics along a dimension of attributional abstraction. These characteristics can influence trust through analytic, analogical, and affective processes. The challenges of extrapolating the concept of trust in people to trust in automation are discussed. A conceptual model integrates research regarding trust in automation and describes the dynamics of trust, the role of context, and the influence of display characteristics. Actual or potential applications of this research include improved designs of systems that require people to manage imperfect automation.
A novel full structure optimization algorithm for radial basis probabilistic neural networks. In this paper, a novel full structure optimization algorithm for radial basis probabilistic neural networks (RBPNN) is proposed. Firstly, a minimum volume covering hyperspheres (MVCH) algorithm is proposed to heuristically select the initial hidden layer centers of the RBPNN, and then the recursive orthogonal least square (ROLS) algorithm combined with the particle swarm optimization (PSO) algorithm is adopted to further optimize the initial structure of the RBPNN. Finally, the effectiveness and efficiency of our proposed algorithm are evaluated through a plant species identification task involving 50 plant species.
GROPING: Geomagnetism and cROwdsensing Powered Indoor NaviGation Although a large number of WiFi fingerprinting based indoor localization systems have been proposed, our field experience with Google Maps Indoor (GMI), the only system available for public testing, shows that it is far from mature for indoor navigation. In this paper, we first report our field studies with GMI, as well as experiment results aiming to explain our unsatisfactory GMI experience. Then motivated by the obtained insights, we propose GROPING as a self-contained indoor navigation system independent of any infrastructural support. GROPING relies on geomagnetic fingerprints that are far more stable than WiFi fingerprints, and it exploits crowdsensing to construct floor maps rather than expecting individual venues to supply digitized maps. Based on our experiments with 20 participants in various floors of a big shopping mall, GROPING is able to deliver a sufficient accuracy for localization and thus provides smooth navigation experience.
Collective feature selection to identify crucial epistatic variants. In this study, we were able to show that selecting variables using a collective feature selection approach could help in selecting true positive epistatic variables more frequently than applying any single method for feature selection via simulation studies. We were able to demonstrate the effectiveness of collective feature selection along with a comparison of many methods in our simulation analysis. We also applied our method to identify non-linear networks associated with obesity.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.059454
0.040731
0.040731
0.040731
0.014185
0.008661
0.000211
0
0
0
0
0
0
0
A survey on concept drift adaptation. Concept drift primarily refers to an online supervised learning scenario when the relation between the input data and the target variable changes over time. Assuming a general knowledge of supervised learning in this article, we characterize adaptive learning processes; categorize existing strategies for handling concept drift; overview the most representative, distinct, and popular techniques and algorithms; discuss evaluation methodology of adaptive algorithms; and present a set of illustrative applications. The survey covers the different facets of concept drift in an integrated way to reflect on the existing scattered state of the art. Thus, it aims at providing a comprehensive introduction to the concept drift adaptation for researchers, industry analysts, and practitioners.
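The survey above covers strategies for detecting when the input-target relation changes. As a concrete sketch of one classic detector family it mentions (an error-rate monitor in the style of DDM), the toy below tracks a streaming classifier's running error rate p and its standard deviation s, and flags drift once p + s exceeds the best observed p_min + 3*s_min; the warm-up length, threshold multiplier, and simulated error rates are illustrative choices, not values from the survey.

```python
import math
import random

class DDM:
    """DDM-style drift detector: monitor the running error rate p of a
    streaming classifier; flag drift when p + s exceeds p_min + 3*s_min."""
    def __init__(self):
        self.n, self.p = 0, 1.0
        self.p_min, self.s_min = float("inf"), float("inf")

    def update(self, error):             # error: 1 if misclassified else 0
        self.n += 1
        self.p += (error - self.p) / self.n          # running mean
        s = math.sqrt(self.p * (1 - self.p) / self.n)
        if self.n > 30 and self.p + s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, s       # best level seen so far
        return self.n > 30 and self.p + s > self.p_min + 3 * self.s_min

random.seed(1)
detector, drift_at = DDM(), None
# Simulated stream: 10% error rate for 1000 steps, then 50% (concept drift).
for t in range(2000):
    err = 1 if random.random() < (0.1 if t < 1000 else 0.5) else 0
    if detector.update(err) and drift_at is None:
        drift_at = t
```

On this stream the statistic should cross the threshold some time after the error rate jumps at step 1000.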
Heterogeneous ensemble for feature drifts in data streams The nature of data streams requires classification algorithms to be real-time, efficient, and able to cope with high-dimensional data that are continuously arriving. It is a known fact that in high-dimensional datasets, not all features are critical for training a classifier. To improve the performance of data stream classification, we propose an algorithm called HEFT-Stream (Heterogeneous Ensemble with Feature drifT for Data Streams) that incorporates feature selection into a heterogeneous ensemble to adapt to different types of concept drifts. As an example of the proposed framework, we first modify the FCBF [13] algorithm so that it dynamically updates the relevant feature subsets for data streams. Next, a heterogeneous ensemble is constructed based on different online classifiers, including Online Naive Bayes and CVFDT [5]. Empirical results show that our ensemble classifier outperforms state-of-the-art ensemble classifiers (AWE [15] and OnlineBagging [21]) in terms of accuracy, speed, and scalability. The success of HEFT-Stream opens new research directions in understanding the relationship between feature selection techniques and ensemble learning to achieve better classification performance.
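The FCBF algorithm that HEFT-Stream adapts ranks features by symmetrical uncertainty, SU(X, Y) = 2*I(X;Y) / (H(X) + H(Y)), and prunes redundant ones. The sketch below implements just that relevance score on toy discrete data (the data and variable names are invented for illustration; the streaming and pruning machinery of HEFT-Stream is not shown).

```python
from collections import Counter
from math import log2

def entropy(xs):
    n = len(xs)
    return -sum(c / n * log2(c / n) for c in Counter(xs).values())

def symmetrical_uncertainty(x, y):
    # SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), normalized to [0, 1].
    hx, hy = entropy(x), entropy(y)
    mi = hx + hy - entropy(list(zip(x, y)))   # mutual information I(X; Y)
    return 2 * mi / (hx + hy) if hx + hy else 0.0

label = [0, 0, 1, 1] * 25        # class variable
copy  = label[:]                 # perfectly relevant feature
indep = [0, 1] * 50              # feature independent of the class
su_copy  = symmetrical_uncertainty(copy, label)
su_indep = symmetrical_uncertainty(indep, label)
```

A feature identical to the class scores SU = 1, while the period-2 feature is exactly independent of the period-4 class pattern and scores 0, which is the ordering FCBF exploits.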
A Survey on Feature Drift Adaptation Mining data streams is of the utmost importance due to its appearance in many real-world situations, such as sensor networks, stock market analysis and computer networks intrusion detection systems. Data streams are, by definition, potentially unbounded sequences of data that arrive intermittently at rapid rates. Extracting useful knowledge from data streams embeds virtually all problems from conventional data mining with the addition of single-pass real-time processing within limited time and memory space. Additionally, due to its ephemeral nature, it is expected that streams undergo changes in their data distribution, termed concept drifts. In this work, we focus on one specific kind of concept drift that has not been extensively addressed in the literature, namely feature drift. A feature drift happens when changes occur in the set of features, such that a subset of features becomes, or ceases to be, relevant to the learning problem. Specifically, changes in the relevance of features directly imply modifications in the decision boundary to be learned, thus the learner must detect and adapt to them. Timely detection of and recovery from feature drifts is a challenging task that can be modeled as a dynamic feature selection problem. In this paper we survey existing work on dynamic feature selection for data streams that acts either implicitly or explicitly. We conclude that there is a need for future research in this area, which we highlight as future research directions.
Reservoir computing approaches to recurrent neural network training Echo State Networks and Liquid State Machines introduced a new paradigm in artificial recurrent neural network (RNN) training, where an RNN (the reservoir) is generated randomly and only a readout is trained. The paradigm, becoming known as reservoir computing, greatly facilitated the practical application of RNNs and outperformed classical fully trained RNNs in many tasks. It has lately become a vivid research field with numerous extensions of the basic idea, including reservoir adaptation, thus broadening the initial paradigm to using different methods for training the reservoir and the readout. This review systematically surveys both current ways of generating/adapting the reservoirs and training different types of readouts. It offers a natural conceptual classification of the techniques, which transcends boundaries of the current “brand-names” of reservoir methods, and thus aims to help in unifying the field and providing the reader with a detailed “map” of it.
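The core idea surveyed above, a fixed random reservoir with only a trained linear readout, fits in a few lines. The sketch below is a minimal echo state network under illustrative choices (reservoir size, spectral radius, the one-radian-ahead sine prediction task, and the ridge parameter are all invented for the demo, not taken from the survey).

```python
import numpy as np

def esn_fit_predict(u_train, y_train, u_test, n_res=100, rho=0.9, seed=0):
    # Fixed random reservoir, rescaled to spectral radius rho; only the
    # linear readout is trained, by ridge regression on reservoir states.
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-0.5, 0.5, n_res)
    w = rng.uniform(-0.5, 0.5, (n_res, n_res))
    w *= rho / np.abs(np.linalg.eigvals(w)).max()   # set spectral radius

    def run(u):
        x = np.zeros(n_res)
        states = np.empty((len(u), n_res))
        for t, ut in enumerate(u):
            x = np.tanh(w_in * ut + w @ x)          # leaky-free state update
            states[t] = x
        return states

    X = run(u_train)
    # Readout: solve (X^T X + lam I) w_out = X^T y  (ridge regression).
    w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y_train)
    return run(u_test) @ w_out

# Task: predict the input sinusoid one radian ahead; this needs the
# reservoir's fading memory of past inputs (a memoryless map cannot do it).
t = np.arange(0, 60, 0.1)
u, y = np.sin(t), np.sin(t + 1.0)
pred = esn_fit_predict(u[:400], y[:400], u[400:])
mse = float(np.mean((pred[50:] - y[400:][50:]) ** 2))   # skip warm-up states
```

The readout can recover the phase-shifted target from delayed input traces stored in the reservoir, so the test error should be far below the roughly 0.46 MSE of simply echoing the input.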
A Survey of the Usages of Deep Learning for Natural Language Processing Over the last several years, the field of natural language processing has been propelled forward by an explosion in the use of deep learning models. This article provides a brief introduction to the field and a quick overview of deep learning architectures and methods. It then sifts through the plethora of recent studies and summarizes a large assortment of relevant contributions. Analyzed research areas include several core linguistic processing issues in addition to many applications of computational linguistics. A discussion of the current state of the art is then provided along with recommendations for future research in the field.
TrafficGAN: Network-Scale Deep Traffic Prediction With Generative Adversarial Nets Traffic flow prediction has received rising research interest recently since it is a key step to prevent and relieve traffic congestion in urban areas. Existing methods mostly focus on road-level or region-level traffic prediction, and fail to deeply capture the high-order spatial-temporal correlations among the road links to perform a road network-level prediction. In this paper, we propose a network-scale deep traffic prediction model called TrafficGAN, in which Generative Adversarial Nets (GAN) is utilized to predict traffic flows under an adversarial learning framework. To capture the spatial-temporal correlations among the road links of a road network, both Convolutional Neural Nets (CNN) and Long-Short Term Memory (LSTM) models are embedded into TrafficGAN. In addition, we also design a deformable convolution kernel for CNN to make it better handle the input road network data. We extensively evaluate our proposal over two large GPS probe datasets in the arterial road network of downtown Chicago and Bay Area of California. The results show that TrafficGAN significantly outperforms both traditional statistical models and state-of-the-art deep learning models in network-scale short-term traffic flow prediction.
Mf-Cnn: Traffic Flow Prediction Using Convolutional Neural Network And Multi-Features Fusion Accurate traffic flow prediction is the precondition for many applications in Intelligent Transportation Systems, such as traffic control and route guidance. Traditional data driven traffic flow prediction models tend to ignore traffic self-features (e.g., periodicities), and commonly suffer from the shifts brought by various complex factors (e.g., weather and holidays). These would reduce the precision and robustness of the prediction models. To tackle this problem, in this paper, we propose a CNN-based multi-feature predictive model (MF-CNN) that collectively predicts network-scale traffic flow with multiple spatiotemporal features and external factors (weather and holidays). Specifically, we classify traffic self-features into temporal continuity as a short-term feature, and daily periodicity and weekly periodicity as long-term features, then map them to three two-dimensional spaces, each of which is composed of time and space and is represented by a two-dimensional matrix. The high-level spatiotemporal features learned by CNNs from the matrices with different time lags are further fused with external factors by a logistic regression layer to derive the final prediction. Experimental results indicate that the MF-CNN model considering multi-features improves the predictive performance compared to five baseline models, and achieves a trade-off between accuracy and efficiency.
Generative adversarial networks Generative adversarial networks are a kind of artificial intelligence algorithm designed to solve the generative modeling problem. The goal of a generative model is to study a collection of training examples and learn the probability distribution that generated them. Generative Adversarial Networks (GANs) are then able to generate more examples from the estimated probability distribution. Generative models based on deep learning are common, but GANs are among the most successful generative models (especially in terms of their ability to generate realistic high-resolution images). GANs have been successfully applied to a wide variety of tasks (mostly in research settings) but continue to present unique challenges and research opportunities because they are based on game theory while most other approaches to generative modeling are based on optimization.
Deep Multi-View Spatial-Temporal Network for Taxi Demand Prediction. Taxi demand prediction is an important building block for enabling intelligent transportation systems in a smart city. An accurate prediction model can help the city pre-allocate resources to meet travel demand and to reduce empty taxis on streets which waste energy and worsen traffic congestion. With the increasing popularity of taxi requesting services such as Uber and Didi Chuxing (in China), we are able to collect large-scale taxi demand data continuously. How to utilize such big data to improve the demand prediction is an interesting and critical real-world problem. Traditional demand prediction methods mostly rely on time series forecasting techniques, which fail to model the complex non-linear spatial and temporal relations. Recent advances in deep learning have shown superior performance on traditionally challenging tasks such as image classification by learning the complex features and correlations from large-scale data. This breakthrough has inspired researchers to explore deep learning techniques on traffic prediction problems. However, existing methods on traffic prediction have only considered spatial relation (e.g., using CNN) or temporal relation (e.g., using LSTM) independently. We propose a Deep Multi-View Spatial-Temporal Network (DMVST-Net) framework to model both spatial and temporal relations. Specifically, our proposed model consists of three views: temporal view (modeling correlations between future demand values with near time points via LSTM), spatial view (modeling local spatial correlation via local CNN), and semantic view (modeling correlations among regions sharing similar temporal patterns). Experiments on large-scale real taxi demand data demonstrate the effectiveness of our approach over state-of-the-art methods.
Supporting social navigation on the World Wide Web This paper discusses a navigation behavior on Internet information services, in particular the World Wide Web, which is characterized by pointing out of information using various communication tools. We call this behavior social navigation as it is based on communication and interaction with other users, be that through email, or any other means of communication. Social navigation phenomena are quite common although most current tools (like Web browsers or email clients) offer very little support for it. We describe why social navigation is useful and how it can be better supported in future systems. We further describe two prototype systems that, although originally not designed explicitly as tools for social navigation, provide features that are typical for social navigation systems. One of these systems, the Juggler system, is a combination of a textual virtual environment and a Web client. The other system is a prototype of a Web hotlist organizer, called Vortex. We use both systems to describe fundamental principles of social navigation systems.
Development and Control of a ‘Soft-Actuated’ Exoskeleton for Use in Physiotherapy and Training Full or partial loss of function in the upper limb is increasingly common due to sports injuries, occupational injuries, spinal cord injuries, and strokes. Typically, treatment for these conditions relies on manipulative physiotherapy procedures which are extremely labour intensive. Although mechanical assistive devices exist for limbs, this is rare for the upper body. In this paper we describe the construction and testing of a seven-degree-of-motion prototype upper-arm training/rehabilitation (exoskeleton) system. The total weight of the uncompensated orthosis is less than 2 kg. This low mass is primarily due to the use of a new range of pneumatic Muscle Actuators (pMA) as the power source for the system. This type of actuator, which also has an excellent power/weight ratio, meets the need for safety, simplicity and lightness. The work presented shows how the system takes advantage of the inherent controllable compliance to produce a unit that is extremely powerful, providing a wide range of functionality (motion and forces over an extended range) in a manner that has high safety integrity for the patient. A training control scheme is introduced which is used to control the orthosis when it is used as an exercise facility. Results demonstrate the potential of the device as an upper limb training, rehabilitation and power assist (exoskeleton) system.
Delay-independent stability of homogeneous systems. A class of nonlinear systems with homogeneous right-hand sides and time-varying delay is studied. It is assumed that the trivial solution of a system is asymptotically stable when delay is equal to zero. By the usage of the Lyapunov direct method and the Razumikhin approach, it is proved that the asymptotic stability of the zero solution of the system is preserved for an arbitrary continuous nonnegative and bounded delay. The conditions of stability of time-delay systems by homogeneous approximation are obtained. Furthermore, it is shown that the presented approaches permit to derive delay-independent stability conditions for some types of nonlinear systems with distributed delay. Two examples of nonlinear oscillatory systems are given to demonstrate the effectiveness of our results.
Efficient and Low Latency Detection of Intruders in Mobile Active Authentication. Active authentication (AA) refers to the problem of continuously verifying the identity of a mobile device user for the purpose of securing the device. We address the problem of quickly detecting intrusions with lower false detection rates in mobile AA systems with higher resource efficiency. Bayesian and MiniMax versions of the quickest change detection (QCD) algorithms are introduced to quickly ...
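The abstract above applies Bayesian and MiniMax quickest change detection (QCD) to intrusion detection. The sketch below is not the paper's algorithm but the classical minimax-flavoured baseline, Page's CUSUM: accumulate the log-likelihood ratio of post-change versus pre-change Gaussian models, clamp at zero, and alarm when the statistic exceeds a threshold. The means, threshold, and simulated stream are illustrative.

```python
import random

def cusum(samples, mu0, mu1, sigma, h):
    # Page's CUSUM for a mean shift mu0 -> mu1 in Gaussian noise:
    # g_t = max(0, g_{t-1} + LLR(x_t)); alarm when g_t > h.
    g = 0.0
    for t, x in enumerate(samples):
        llr = (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2)
        g = max(0.0, g + llr)
        if g > h:
            return t                     # alarm time
    return None

random.seed(2)
stream = [random.gauss(0.0, 1.0) for _ in range(500)] + \
         [random.gauss(1.5, 1.0) for _ in range(200)]  # change at t = 500
alarm = cusum(stream, mu0=0.0, mu1=1.5, sigma=1.0, h=8.0)
```

With this draw the statistic should stay low before the change and climb past the threshold within a few tens of samples afterwards, which is the low-latency detection behaviour QCD methods are designed for.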
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.080262
0.073683
0.055262
0.05
0.05
0.05
0.05
0.0125
0.000083
0
0
0
0
0
Image representation by harmonic transforms with parameters in SL(2, R). • New kinds of invariant harmonic transforms with parameters in SL(2, R) are proposed. • The capabilities of the proposed PLCT and the 2-D LCTs on image representation are analyzed.
Combined invariants to similarity transformation and to blur using orthogonal Zernike moments. The derivation of moment invariants has been extensively investigated in the past decades. In this paper, we construct a set of invariants derived from Zernike moments which is simultaneously invariant to similarity transformation and to convolution with a circularly symmetric point spread function (PSF). Two main contributions are provided: the theoretical framework for deriving the Zernike moments of a blurred image and the way to construct the combined geometric-blur invariants. The performance of the proposed descriptors is evaluated with various PSFs and similarity transformations. The comparison of the proposed method with the existing ones is also provided in terms of pattern recognition accuracy, template matching and robustness to noise. Experimental results show that the proposed descriptors perform better overall.
Fast computation of Jacobi-Fourier moments for invariant image recognition The Jacobi-Fourier moments (JFMs) provide a wide class of orthogonal rotation invariant moments (ORIMs) which are useful for many image processing, pattern recognition and computer vision applications. They, however, suffer from high time complexity and numerical instability at high orders of moment. In this paper, a fast method based on the recursive computation of the radial kernel function of JFMs is proposed which not only reduces time complexity but also improves their numerical stability. A fast recursive method for the computation of Jacobi-Fourier moments is proposed. The proposed method not only reduces time complexity but also improves the numerical stability of the moments. Better image reconstruction is achieved with lower reconstruction error. The proposed method is useful for many image processing, pattern recognition and computer vision applications.
Radial shifted Legendre moments for image analysis and invariant image recognition. The rotation, scaling and translation invariance of image moments has a high significance in image recognition. Legendre moments, a classical family of orthogonal moments, have been widely used in image analysis and recognition. Since Legendre moments are defined in Cartesian coordinates, rotation invariance is difficult to achieve. In this paper, we first derive two types of transformed Legendre polynomials: substituted and weighted radial shifted Legendre polynomials. Based on these two types of polynomials, two radial orthogonal moments, named substituted radial shifted Legendre moments and weighted radial shifted Legendre moments (SRSLMs and WRSLMs), are proposed. The proposed moments are orthogonal in the polar coordinate domain and can be thought of as generalized and orthogonalized complex moments. They have better image reconstruction performance, lower information redundancy and higher noise robustness than the existing radial orthogonal moments. Finally, a mathematical framework for obtaining the rotation, scaling and translation invariants of these two types of radial shifted Legendre moments is provided. Theoretical and experimental results show the superiority of the proposed methods in terms of image reconstruction capability and invariant recognition accuracy under both noisy and noise-free conditions.
Lossless medical image watermarking method based on significant difference of cellular automata transform coefficient. Conventional medical image watermarking techniques focus on improving the invisibility and robustness of the watermarking mechanism to prevent medical disputes. This paper proposes a medical image watermarking algorithm based on the significant difference of cellular automata transform (CAT) for copyright protection. The medical image is first subsampled into four subimages, and two images are randomly chosen to obtain two low-frequency bandwidths using CAT. Coefficients within a low-frequency bandwidth are important information in an image. Hence, the difference between the two low-frequency bandwidths is used as an important feature of the medical image. From these important features, the watermark and cover image can be used to generate an ownership share image (OSI) used for verifying the medical image. Besides appearing like a cover image, the OSI can also be registered with a third party. When a suspected medical image requires verification, the important features are first extracted from the suspected medical image. The master share image (MSI) can then be generated from these important features. Lastly, the OSI and MSI can be combined to extract the watermark to verify the suspected medical image. The advantage of our proposed method is that the medical image does not require alteration to protect its copyright. This means that while the image is protected, medical disputes will be unlikely, and the registered OSI will carry significant data to make management more convenient. Finally, the proposed method offers better security, invisibility, and robustness. Moreover, experimental results have demonstrated that our method achieves good performance.
Robust watermarking scheme for color image based on quaternion-type moment invariants and visual cryptography. This paper introduces a novel robust watermarking scheme for copyright protection of color image based on quaternion-type moment invariants and visual cryptography. As a secure way to allow secret sharing of images, visual cryptography realizes encryption of classified information and the decryption is performed through human visual system. The proposed scheme represents the color image into a quaternion matrix, so that it can deal with the multichannel information in a holistic way. Then the quaternion moments are applied to extract the invariant features, which are crucial to generate the master share. Together with the scrambled watermark, they are used for constructing the ownership share based on visual cryptography. Afterwards, the ownership share is registered and responsible for authentication. A set of experiments has been conducted to illustrate the validity and feasibility of the proposed scheme as well as better robustness against different attacks.
Two-Dimensional Polar Harmonic Transforms for Invariant Image Representation This paper introduces a set of 2D transforms, based on a set of orthogonal projection bases, to generate a set of features which are invariant to rotation. We call these transforms Polar Harmonic Transforms (PHTs). Unlike the well-known Zernike and pseudo-Zernike moments, the kernel computation of PHTs is extremely simple and has no numerical stability issue whatsoever. This implies that PHTs encompass the orthogonality and invariance advantages of Zernike and pseudo-Zernike moments, but are free from their inherent limitations. This also means that PHTs are well suited for applications where maximal discriminant information is needed. Furthermore, PHTs make available a large set of features for further feature selection in the process of seeking the best discriminative or representative features for a particular application.
Tabu Search - Part I
Joint Optimization of Radio and Computational Resources for Multicell Mobile-Edge Computing Migrating computationally intensive tasks from mobile devices to more resourceful cloud servers is a promising technique to increase the computational capacity of mobile devices while saving their battery energy. In this paper, we consider a MIMO multicell system where multiple mobile users (MUs) ask for computation offloading to a common cloud server. We formulate the offloading problem as the joint optimization of the radio resources (the transmit precoding matrices of the MUs) and the computational resources (the CPU cycles/second assigned by the cloud to each MU) in order to minimize the overall users' energy consumption, while meeting latency constraints. The resulting optimization problem is nonconvex (in the objective function and constraints). Nevertheless, in the single-user case, we are able to compute the global optimal solution in closed form. In the more challenging multiuser scenario, we propose an iterative algorithm, based on a novel successive convex approximation technique, converging to a local optimal solution of the original nonconvex problem. We then show that the proposed algorithmic framework naturally leads to a distributed and parallel implementation across the radio access points, requiring only a limited coordination/signaling with the cloud. Numerical results show that the proposed schemes outperform disjoint optimization algorithms.
Symbolic model checking for real-time systems We describe finite-state programs over real-numbered time in a guarded-command language with real-valued clocks or, equivalently, as finite automata with real-valued clocks. Model checking answers the question which states of a real-time program satisfy a branching-time specification (given in an extension of CTL with clock variables). We develop an algorithm that computes this set of states symbolically as a fixpoint of a functional on state predicates, without constructing the state space. For this purpose, we introduce a μ-calculus on computation trees over real-numbered time. Unfortunately, many standard program properties, such as response for all nonzeno execution sequences (during which time diverges), cannot be characterized by fixpoints: we show that the expressiveness of the timed μ-calculus is incomparable to the expressiveness of timed CTL. Fortunately, this result does not impair the symbolic verification of "implementable" real-time programs-those whose safety constraints are machine-closed with respect to diverging time and whose fairness constraints are restricted to finite upper bounds on clock values. All timed CTL properties of such programs are shown to be computable as finitely approximable fixpoints in a simple decidable theory.
Teaching-Learning-Based Optimization: An optimization method for continuous non-linear large scale problems An efficient optimization method called 'Teaching-Learning-Based Optimization (TLBO)' is proposed in this paper for large scale non-linear optimization problems for finding the global solutions. The proposed method is based on the effect of the influence of a teacher on the output of learners in a class. The basic philosophy of the method is explained in detail. The effectiveness of the method is tested on many benchmark problems with different characteristics and the results are compared with other population based methods.
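The teacher and learner phases that TLBO is built on can be written in a few lines. This is a minimal sketch for continuous minimization only; the function and variable names are illustrative, not taken from the paper:

```python
import random

def tlbo_step(pop, fitness):
    """One TLBO iteration (minimization): teacher phase, then learner phase.
    `pop` is a list of real-valued vectors."""
    dim = len(pop[0])
    # Teacher phase: move each learner toward the best solution (the teacher),
    # relative to the class mean, with a random teaching factor of 1 or 2.
    teacher = min(pop, key=fitness)
    mean = [sum(x[d] for x in pop) / len(pop) for d in range(dim)]
    new_pop = []
    for x in pop:
        tf = random.choice([1, 2])
        cand = [x[d] + random.random() * (teacher[d] - tf * mean[d])
                for d in range(dim)]
        new_pop.append(cand if fitness(cand) < fitness(x) else x)
    # Learner phase: each learner interacts with a random peer, moving toward
    # the peer if the peer is better, away otherwise.
    pop = new_pop
    out = []
    for i, x in enumerate(pop):
        j = random.choice([k for k in range(len(pop)) if k != i])
        y = pop[j]
        sign = 1 if fitness(x) < fitness(y) else -1
        cand = [x[d] + sign * random.random() * (x[d] - y[d]) for d in range(dim)]
        out.append(cand if fitness(cand) < fitness(x) else x)
    return out
```

Because a candidate replaces a learner only when it improves the objective, the best solution in the population never degrades across iterations.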
An Automatic Screening Approach for Obstructive Sleep Apnea Diagnosis Based on Single-Lead Electrocardiogram Traditional approaches for obstructive sleep apnea (OSA) diagnosis tend to use multiple channels of physiological signals to detect apnea events by dividing the signals into equal-length segments, which may lead to incorrect apnea event detection and weaken the performance of OSA diagnosis. This paper proposes an automatic-segmentation-based screening approach with a single channel of electrocardiogram (ECG) signal for OSA subject diagnosis, and the main work of the proposed approach lies in three aspects: (i) an automatic signal segmentation algorithm is adopted for signal segmentation instead of the equal-length segmentation rule; (ii) a local median filter is improved for reduction of the unexpected RR intervals before signal segmentation; (iii) the designed OSA severity index and additional admission information of OSA suspects are plugged into a support vector machine (SVM) for OSA subject diagnosis. A real clinical example from the PhysioNet database is provided to validate the proposed approach, and an average accuracy of 97.41% for subject diagnosis is obtained, which demonstrates the effectiveness for OSA diagnosis.
Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network. In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (including data from 63 patients with 34,281 events) and testing (including data from 19 patients with 8571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F-score of 0.99 were attained with the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies have focused on the structure design and assistance force optimization of soft LLEs, little work has been conducted on the hardware circuits design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and introduce its hardware circuits design. A soft LLE for hip flexion assistance and a hardware circuits system with scalability were proposed. To assess the efficacy of the soft LLE, experimental tests that evaluate the sensor data acquisition, force tracking performance, lower limb muscle activity and metabolic cost were conducted. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
score_0..score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0.022222, 0, 0, 0, 0, 0, 0, 0
Cross-domain person re-identification by hybrid supervised and unsupervised learning Although the single-domain person re-identification (Re-ID) method has achieved great accuracy, the dependence on the label in the same image domain severely limits the scalability of this method. Therefore, cross-domain Re-ID has received more and more attention. In this paper, a novel cross-domain Re-ID method combining supervised and unsupervised learning is proposed, which includes two models: a triple-condition generative adversarial network (TC-GAN) and a dual-task feature extraction network (DFE-Net). We first use TC-GAN to generate labeled images with the target style, and then we combine supervised and unsupervised learning to optimize DFE-Net. Specifically, we use labeled generated data for supervised learning. In addition, we mine effective information in the target data from two perspectives for unsupervised learning. To effectively combine the two types of learning, we design a dynamic weighting function to dynamically adjust the weights of these two approaches. To verify the validity of TC-GAN, DFE-Net, and the dynamic weight function, we conduct multiple experiments on Market-1501 and DukeMTMC-reID. The experimental results show that the dynamic weight function can improve the performance of the models, and our method is better than many state-of-the-art methods.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
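The quick, automatic scoring that BLEU performs boils down to clipped n-gram precisions combined with a brevity penalty. A minimal single-sentence sketch, assuming whitespace tokenization and uniform n-gram weights (both simplifications of the full corpus-level method):

```python
import math
from collections import Counter

def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU: geometric mean of modified n-gram precisions
    times a brevity penalty. Tokens are whitespace-split strings."""
    cand = candidate.split()
    refs = [r.split() for r in references]
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        if not cand_ngrams:
            return 0.0  # candidate too short to have any n-grams at this order
        # Clip each candidate n-gram count by its maximum count in any reference.
        max_ref = Counter()
        for ref in refs:
            rc = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
            for g, c in rc.items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_ngrams.items())
        if clipped == 0:
            return 0.0
        log_prec += math.log(clipped / sum(cand_ngrams.values())) / max_n
    # Brevity penalty: penalize candidates shorter than the closest reference.
    ref_len = min((abs(len(r) - len(cand)), len(r)) for r in refs)[1]
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / max(len(cand), 1))
    return bp * math.exp(log_prec)
```

A candidate identical to a reference scores 1.0; a candidate sharing no unigrams with any reference scores 0.0.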
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers, all of them capable of stabilizing a specific LTI process, in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in the positive and negative time directions. The structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported.
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended to Bob, and non-repudiation of receipt evidences destined to Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with trusted third party (TTP). For the later ones we discuss the evolution of the TTP's involvement and, between others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probability. Because there are no crossover rate and mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than the conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results over the conventional one and other methods.
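The idea of firing crossover and mutation on conditions rather than at fixed rates can be sketched as below. This is one plausible reading only (recombine parents only when they differ; mutate only when the child duplicates a parent); the paper's exact conditions may differ, and all names are illustrative:

```python
import random

def conditional_ga_step(pop, fitness, n_bits):
    """One generation of a GA with conditional crossover/mutation instead of
    crossover and mutation rates. Individuals are bit lists; minimization."""
    pop = sorted(pop, key=fitness)
    next_pop = [pop[0][:], pop[1][:]]        # elitism: carry over the two best
    while len(next_pop) < len(pop):
        p1, p2 = random.sample(pop[:len(pop) // 2], 2)
        # Conditional crossover: only recombine parents that actually differ.
        if p1 != p2:
            cut = random.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]
        else:
            child = p1[:]
        # Conditional mutation: only mutate a child identical to a parent.
        if child in (p1, p2):
            i = random.randrange(n_bits)
            child[i] ^= 1
        next_pop.append(child)
    return next_pop
```

With elitism, the best fitness in the population is monotone non-increasing, so the conditions only affect how fast new material is explored, not whether progress is kept.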
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results, and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
score_0..score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Deep learning Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech. Machine-learning technology powers many aspects of modern society: from web searches to content filtering on social networks to recommendations on e-commerce websites, and it is increasingly present in consumer products such as cameras and smartphones. Machine-learning systems are used to identify objects in images, transcribe speech into text, match news items, posts or products with users' interests, and select relevant results of search. Increasingly, these applications make use of a class of techniques called deep learning. Conventional machine-learning techniques were limited in their ability to process natural data in their raw form. For decades, constructing a pattern-recognition or machine-learning system required careful engineering and considerable domain expertise to design a feature extractor that transformed the raw data (such as the pixel values of an image) into a suitable internal representation or feature vector from which the learning subsystem, often a classifier, could detect or classify patterns in the input. 
Representation learning is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification. Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level. With the composition of enough such transformations, very complex functions can be learned. For classification tasks, higher layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations. An image, for example, comes in the form of an array of pixel values, and the learned features in the first layer of representation typically represent the presence or absence of edges at particular orientations and locations in the image. The second layer typically detects motifs by spotting particular arrangements of edges, regardless of small variations in the edge positions. The third layer may assemble motifs into larger combinations that correspond to parts of familiar objects, and subsequent layers would detect objects as combinations of these parts. The key aspect of deep learning is that these layers of features are not designed by human engineers: they are learned from data using a general-purpose learning procedure. Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years. It has turned out to be very good at discovering intricate structures in high-dimensional data and is therefore applicable to many domains of science, business and government. 
In addition to beating records in image recognition [1, 2, 3, 4] and speech recognition [5, 6, 7], it has beaten other machine-learning techniques at predicting the activity of potential drug molecules [8], analysing particle accelerator data [9, 10], reconstructing brain circuits [11], and predicting the effects of mutations in non-coding DNA on gene expression and disease [12, 13]. Perhaps more surprisingly, deep learning has produced extremely promising results for various tasks in natural language understanding [14], particularly topic classification, sentiment analysis, question answering [15] and language translation [16, 17]. We think that deep learning will have many more successes in the near future because it requires very little engineering by hand, so it can easily take advantage of increases in the amount of available computation and data. New learning algorithms and architectures that are currently being developed for deep neural networks will only accelerate this progress. The most common form of machine learning, deep or not, is supervised learning. Imagine that we want to build a system that can classify images as containing, say, a house, a car, a person or a pet. We first collect a large data set of images of houses, cars, people and pets, each labelled with its category. During training, the machine is shown an image and produces an output in the form of a vector of scores, one for each category. We want the desired category to have the highest score of all categories, but this is unlikely to happen before training. We compute an objective function that measures the error (or distance) between the output scores and the desired pattern of scores. The machine then modifies its internal adjustable parameters to reduce this error. These adjustable parameters, often called weights, are real numbers that can be seen as 'knobs' that define the input–output function of the machine.
In a typical deep-learning system, there may be hundreds of millions of these adjustable weights, and hundreds of millions of labelled examples with which to train the machine. To properly adjust the weight vector, the learning algorithm computes a gradient vector that, for each weight, indicates by what amount the error would increase or decrease if the weight were increased by a tiny amount. The weight vector is then adjusted in the opposite direction to the gradient vector. The objective function, averaged over all the training examples, can be seen as a kind of hilly landscape in the high-dimensional space of weight values. The negative gradient vector indicates the direction of steepest descent in this landscape, taking it closer to a minimum, where the output error is low on average. In practice, most practitioners use a procedure called stochastic gradient descent (SGD). This consists of showing the input vector for a few examples, computing the outputs and the errors, computing the average gradient for those examples, and adjusting the weights accordingly. The process is repeated for many small sets of examples from the training set until the average of the objective function stops decreasing. It is called stochastic because each small set of examples gives a noisy estimate of the average gradient over all examples. This simple procedure usually finds a good set of weights surprisingly quickly when compared with far more elaborate optimization techniques [18]. After training, the performance of the system is measured on a different set of examples called a test set. This serves to test the generalization ability of the machine — its ability to produce sensible answers on new inputs that it has never seen during training. Many of the current practical applications of machine learning use linear classifiers on top of hand-engineered features. A two-class linear classifier computes a weighted sum of the feature vector components.
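The SGD loop described here (average the gradient over a small set of examples, step in the opposite direction, repeat) can be sketched as follows; the function and parameter names are illustrative:

```python
import random

def sgd(grad, w, data, lr=0.1, batch=4, epochs=50):
    """Minimal stochastic gradient descent. `grad(w, x)` returns the gradient
    of the loss on a single example x with respect to the weight vector w."""
    for _ in range(epochs):
        random.shuffle(data)                     # "stochastic": random small sets
        for i in range(0, len(data), batch):
            chunk = data[i:i + batch]
            # Average the per-example gradients over this small set.
            g = [0.0] * len(w)
            for x in chunk:
                for j, gj in enumerate(grad(w, x)):
                    g[j] += gj / len(chunk)
            # Adjust weights in the direction opposite to the gradient.
            w = [wj - lr * gj for wj, gj in zip(w, g)]
    return w
```

As a toy usage, minimizing the loss (w - x)^2 over a list of numbers drives w toward their mean, since each noisy batch gradient points uphill away from it.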
If the weighted sum is above a threshold, the input is classified as belonging to a particular category. Since the 1960s we have known that linear classifiers can only carve their input space into very simple regions, namely half-spaces separated by a hyperplane [19]. But problems such as image and speech recognition require the input–output function to be insensitive to irrelevant variations of the input, such as variations in position, orientation or illumination of an object, or variations in the pitch or accent of speech, while being very sensitive to particular minute variations (for example, the difference between a white wolf and a breed of wolf-like white dog called a Samoyed). At the pixel level, images of two Samoyeds in different poses and in different environments may be very different from each other, whereas two images of a Samoyed and a wolf in the same position and on similar backgrounds may be very similar to each other. A linear classifier, or any other 'shallow' classifier operating on raw pixels could not possibly distinguish the latter two, while putting the former two in the same category. This is why shallow classifiers require a good feature extractor that solves the selectivity–invariance dilemma — one that produces representations that are selective to the aspects of the image that are important for discrimination, but that are invariant to irrelevant aspects such as the pose of the animal. To make classifiers more powerful, one can use generic non-linear features, as with kernel methods [20], but generic features such as those arising with the Gaussian kernel do not allow the learner to generalize well far from the training examples [21]. The conventional option is to hand design good feature extractors, which requires a considerable amount of engineering skill and domain expertise. But this can all be avoided if good features can be learned automatically using a general-purpose learning procedure. This is the key advantage of deep learning.
A deep-learning architecture is a multilayer stack of simple modules, all (or most) of which are subject to learning, and many of which compute non-linear input–output mappings. Each module in the stack transforms its input to increase both the selectivity and the invariance of the representation. With multiple non-linear layers, say a depth of 5 to 20, a system can implement extremely intricate functions of its inputs that are simultaneously sensitive to minute details — distinguishing Samoyeds from white wolves — and insensitive to large irrelevant variations such as the background, pose, lighting and surrounding objects. From the earliest days of pattern recognition [22, 23], the aim of researchers has been to replace hand-engineered features with trainable multilayer networks, but despite its simplicity, the solution was not widely understood until the mid 1980s. As it turns out, multilayer architectures can be trained by simple stochastic gradient descent. As long as the modules are relatively smooth functions of their inputs and of their internal weights, one can compute gradients using the backpropagation procedure. The idea that this could be done, and that it worked, was discovered independently by several different groups during the 1970s and 1980s [24, 25, 26, 27]. The backpropagation procedure to compute the gradient of an objective function with respect to the weights of a multilayer stack of modules is nothing more than a practical application of the chain rule for derivatives. The key insight is that the derivative (or gradient) of the objective with respect to the input of a module can be computed by working backwards from the gradient with respect to the output of that module (or the input of the subsequent module) (Fig. 1).
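The backwards application of the chain rule can be made concrete for a stack of linear-plus-ReLU modules. A minimal sketch on plain lists (names are illustrative; real implementations vectorize this):

```python
def backprop(x, layers, grad_out):
    """Backpropagation through a stack of (W, b) modules with ReLU outputs:
    given the gradient at the network output, work backwards with the chain
    rule. Returns the gradient of the objective w.r.t. each layer's weights."""
    # Forward pass, caching each module's input and output.
    acts = [x]
    for W, b in layers:
        z = [sum(w * a for w, a in zip(row, acts[-1])) + bi
             for row, bi in zip(W, b)]
        acts.append([max(v, 0.0) for v in z])
    g = grad_out
    weight_grads = []
    for (W, b), a_in, a_out in zip(reversed(layers), reversed(acts[:-1]),
                                   reversed(acts[1:])):
        # Chain rule through ReLU: gradient flows only where the unit was active.
        g = [gi if ai > 0 else 0.0 for gi, ai in zip(g, a_out)]
        # dObjective/dW[i][j] = (gradient at unit i) * (input j to this module).
        weight_grads.append([[gi * aj for aj in a_in] for gi in g])
        # Gradient w.r.t. this module's input becomes the output gradient below.
        g = [sum(W[i][j] * g[i] for i in range(len(W))) for j in range(len(a_in))]
    return list(reversed(weight_grads))
```

The key line is the last one in the loop: the gradient at a module's input is computed from the gradient at its output, exactly the working-backwards step the text describes.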
The backpropagation equation can be applied repeatedly to propagate gradients through all modules, starting from the output at the top (where the network produces its prediction) all the way to the bottom (where the external input is fed). Once these gradients have been computed, it is straightforward to compute the gradients with respect to the weights of each module. Many applications of deep learning use feedforward neural network architectures (Fig. 1), which learn to map a fixed-size input (for example, an image) to a fixed-size output (for example, a probability for each of several categories). To go from one layer to the next, a set of units compute a weighted sum of their inputs from the previous layer and pass the result through a non-linear function. At present, the most popular non-linear function is the rectified linear unit (ReLU), which is simply the half-wave rectifier f(z) = max(z, 0). In past decades, neural nets used smoother non-linearities, such as tanh(z) or 1/(1 + exp(−z)), but the ReLU typically learns much faster in networks with many layers, allowing training of a deep supervised network without unsupervised pre-training [28]. Units that are not in the input or output layer are conventionally called hidden units. The hidden layers can be seen as distorting the input in a non-linear way so that categories become linearly separable by the last layer (Fig. 1). In the late 1990s, neural nets and backpropagation were largely forsaken by the machine-learning community and ignored by the computer-vision and speech-recognition communities. It was widely thought that learning useful, multistage, feature extractors with little prior knowledge was infeasible. In particular, it was commonly thought that simple gradient descent would get trapped in poor local minima — weight configurations for which no small change would reduce the average error. In practice, poor local minima are rarely a problem with large networks.
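The layer-to-layer computation (weighted sum of the previous layer's outputs, then ReLU) is compact enough to write out directly. A minimal sketch with illustrative names, keeping the final layer linear:

```python
def relu(z):
    # Half-wave rectifier: f(z) = max(z, 0), applied element-wise.
    return [max(v, 0.0) for v in z]

def dense(x, W, b):
    # One layer: each unit computes a weighted sum of its inputs plus a bias.
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def forward(x, layers):
    """Feedforward pass: hidden layers use ReLU; the last layer stays linear
    (in practice it would feed a softmax or other output function)."""
    for k, (W, b) in enumerate(layers):
        x = dense(x, W, b)
        if k < len(layers) - 1:
            x = relu(x)
    return x
```

On a tiny hand-checkable network, forward([2, 3], [([[1, -1], [0, 1]], [0, 0]), ([[1, 1]], [0])]) first computes [-1, 3], rectifies it to [0, 3], then sums to [3.0].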
Regardless of the initial conditions, the system nearly always reaches solutions of very similar quality. Recent theoretical and empirical results strongly suggest that local minima are not a serious issue in general. Instead, the landscape is packed with a combinatorially large number of saddle points where the gradient is zero, and the surface curves up in most dimensions and curves down in the remainder [29, 30]. The analysis seems to show that saddle points with only a few downward curving directions are present in very large numbers, but almost all of them have very similar values of the objective function. Hence, it does not much matter which of these saddle points the algorithm gets stuck at. Interest in deep feedforward networks was revived around 2006 (refs 31,32,33,34) by a group of researchers brought together by the Canadian Institute for Advanced Research (CIFAR). The researchers introduced unsupervised learning procedures that could create layers of feature detectors without requiring labelled data. The objective in learning each layer of feature detectors was to be able to reconstruct or model the activities of feature detectors (or raw inputs) in the layer below. By 'pre-training' several layers of progressively more complex feature detectors using this reconstruction objective, the weights of a deep network could be initialized to sensible values. A final layer of output units could then be added to the top of the network and the whole deep system could be fine-tuned using standard backpropagation [33, 34, 35]. This worked remarkably well for recognizing handwritten digits or for detecting pedestrians, especially when the amount of labelled data was very limited [36]. The first major application of this pre-training approach was in speech recognition, and it was made possible by the advent of fast graphics processing units (GPUs) that were convenient to program [37] and allowed researchers to train networks 10 or 20 times faster.
In 2009, the approach was used to map short temporal windows of coefficients extracted from a sound wave to a set of probabilities for the various fragments of speech that might be represented by the frame in the centre of the window. It achieved record-breaking results on a standard speech recognition benchmark that used a small vocabulary38 and was quickly developed to give record-breaking results on a large vocabulary task39. By 2012, versions of the deep net from 2009 were being developed by many of the major speech groups6 and were already being deployed in Android phones. For smaller data sets, unsupervised pre-training helps to prevent overfitting40, leading to significantly better generalization when the number of labelled examples is small, or in a transfer setting where we have lots of examples for some 'source' tasks but very few for some 'target' tasks. Once deep learning had been rehabilitated, it turned out that the pre-training stage was only needed for small data sets. There was, however, one particular type of deep, feedforward network that was much easier to train and generalized much better than networks with full connectivity between adjacent layers. This was the convolutional neural network (ConvNet)41, 42. It achieved many practical successes during the period when neural networks were out of favour and it has recently been widely adopted by the computer-vision community. ConvNets are designed to process data that come in the form of multiple arrays, for example a colour image composed of three 2D arrays containing pixel intensities in the three colour channels. Many data modalities are in the form of multiple arrays: 1D for signals and sequences, including language; 2D for images or audio spectrograms; and 3D for video or volumetric images. There are four key ideas behind ConvNets that take advantage of the properties of natural signals: local connections, shared weights, pooling and the use of many layers. 
The architecture of a typical ConvNet (Fig. 2) is structured as a series of stages. The first few stages are composed of two types of layers: convolutional layers and pooling layers. Units in a convolutional layer are organized in feature maps, within which each unit is connected to local patches in the feature maps of the previous layer through a set of weights called a filter bank. The result of this local weighted sum is then passed through a non-linearity such as a ReLU. All units in a feature map share the same filter bank. Different feature maps in a layer use different filter banks. The reason for this architecture is twofold. First, in array data such as images, local groups of values are often highly correlated, forming distinctive local motifs that are easily detected. Second, the local statistics of images and other signals are invariant to location. In other words, if a motif can appear in one part of the image, it could appear anywhere, hence the idea of units at different locations sharing the same weights and detecting the same pattern in different parts of the array. Mathematically, the filtering operation performed by a feature map is a discrete convolution, hence the name. Although the role of the convolutional layer is to detect local conjunctions of features from the previous layer, the role of the pooling layer is to merge semantically similar features into one. Because the relative positions of the features forming a motif can vary somewhat, reliably detecting the motif can be done by coarse-graining the position of each feature. A typical pooling unit computes the maximum of a local patch of units in one feature map (or in a few feature maps). Neighbouring pooling units take input from patches that are shifted by more than one row or column, thereby reducing the dimension of the representation and creating an invariance to small shifts and distortions. 
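A minimal sketch of these two layer types in NumPy follows. The image, filter and sizes are invented for illustration, and the kernel is not flipped (so strictly this computes a cross-correlation, the variant that most deep-learning libraries also implement under the name "convolution"):

```python
import numpy as np

def conv2d_valid(image, kernel):
    # Each output unit is the weighted sum of a local patch; the same
    # filter bank (kernel) is shared across all positions.
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    # Each pooling unit reports the maximum of a local patch; neighbouring
    # units take patches shifted by `size` rows/columns.
    H, W = fmap.shape
    H2, W2 = H // size, W // size
    return fmap[:H2 * size, :W2 * size].reshape(H2, size, W2, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)   # invented 6x6 "image"
kernel = np.array([[-1.0, 1.0]])                   # toy horizontal-gradient filter
fmap = np.maximum(conv2d_valid(image, kernel), 0)  # convolution, then ReLU
pooled = max_pool(fmap)                            # coarse-grained feature map
```

A real convolutional layer applies many such filter banks in parallel, producing one feature map per filter.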
Two or three stages of convolution, non-linearity and pooling are stacked, followed by more convolutional and fully-connected layers. Backpropagating gradients through a ConvNet is as simple as through a regular deep network, allowing all the weights in all the filter banks to be trained. Deep neural networks exploit the property that many natural signals are compositional hierarchies, in which higher-level features are obtained by composing lower-level ones. In images, local combinations of edges form motifs, motifs assemble into parts, and parts form objects. Similar hierarchies exist in speech and text from sounds to phones, phonemes, syllables, words and sentences. The pooling allows representations to vary very little when elements in the previous layer vary in position and appearance. The convolutional and pooling layers in ConvNets are directly inspired by the classic notions of simple cells and complex cells in visual neuroscience43, and the overall architecture is reminiscent of the LGN–V1–V2–V4–IT hierarchy in the visual cortex ventral pathway44. When ConvNet models and monkeys are shown the same picture, the activations of high-level units in the ConvNet explain half of the variance of random sets of 160 neurons in the monkey's inferotemporal cortex45. ConvNets have their roots in the neocognitron46, the architecture of which was somewhat similar, but did not have an end-to-end supervised-learning algorithm such as backpropagation. A primitive 1D ConvNet called a time-delay neural net was used for the recognition of phonemes and simple words47, 48. There have been numerous applications of convolutional networks going back to the early 1990s, starting with time-delay neural networks for speech recognition47 and document reading42. The document reading system used a ConvNet trained jointly with a probabilistic model that implemented language constraints. By the late 1990s this system was reading over 10% of all the cheques in the United States. 
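The shift-invariance bought by pooling can be demonstrated directly. In the toy example below, a single active unit (a stand-in for a detected feature) is moved by one column; because both positions fall inside the same pooling window, the coarse-grained representation is unchanged. The invariance is, of course, only to shifts small enough to stay within a window:

```python
import numpy as np

def max_pool(fmap, size=2):
    # Max pooling with non-overlapping size x size windows.
    H, W = fmap.shape
    H2, W2 = H // size, W // size
    return fmap[:H2 * size, :W2 * size].reshape(H2, size, W2, size).max(axis=(1, 3))

# A "detected feature" at position (2, 2)...
a = np.zeros((4, 4)); a[2, 2] = 1.0
# ...and the same feature shifted by one column, to (2, 3).
b = np.zeros((4, 4)); b[2, 3] = 1.0

# Both positions land in the same 2x2 pooling window, so the pooled maps
# are identical: coarse-graining the position of each feature buys
# invariance to small shifts and distortions.
```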
A number of ConvNet-based optical character recognition and handwriting recognition systems were later deployed by Microsoft49. ConvNets were also experimented with in the early 1990s for object detection in natural images, including faces and hands50, 51, and for face recognition52. Since the early 2000s, ConvNets have been applied with great success to the detection, segmentation and recognition of objects and regions in images. These were all tasks in which labelled data was relatively abundant, such as traffic sign recognition53, the segmentation of biological images54 particularly for connectomics55, and the detection of faces, text, pedestrians and human bodies in natural images36, 50, 51, 56, 57, 58. A major recent practical success of ConvNets is face recognition59. Importantly, images can be labelled at the pixel level, which will have applications in technology, including autonomous mobile robots and self-driving cars60, 61. Companies such as Mobileye and NVIDIA are using such ConvNet-based methods in their upcoming vision systems for cars. Other applications gaining importance involve natural language understanding14 and speech recognition7. Despite these successes, ConvNets were largely forsaken by the mainstream computer-vision and machine-learning communities until the ImageNet competition in 2012. When deep convolutional networks were applied to a data set of about a million images from the web that contained 1,000 different classes, they achieved spectacular results, almost halving the error rates of the best competing approaches1. This success came from the efficient use of GPUs, ReLUs, a new regularization technique called dropout62, and techniques to generate more training examples by deforming the existing ones. This success has brought about a revolution in computer vision; ConvNets are now the dominant approach for almost all recognition and detection tasks4, 58, 59, 63, 64, 65 and approach human performance on some tasks. 
A recent stunning demonstration combines ConvNets and recurrent net modules for the generation of image captions (Fig. 3). Recent ConvNet architectures have 10 to 20 layers of ReLUs, hundreds of millions of weights, and billions of connections between units. Whereas training such large networks could have taken weeks only two years ago, progress in hardware, software and algorithm parallelization has reduced training times to a few hours. The performance of ConvNet-based vision systems has caused most major technology companies, including Google, Facebook, Microsoft, IBM, Yahoo!, Twitter and Adobe, as well as a quickly growing number of start-ups to initiate research and development projects and to deploy ConvNet-based image understanding products and services. ConvNets are easily amenable to efficient hardware implementations in chips or field-programmable gate arrays66, 67. A number of companies such as NVIDIA, Mobileye, Intel, Qualcomm and Samsung are developing ConvNet chips to enable real-time vision applications in smartphones, cameras, robots and self-driving cars. Deep-learning theory shows that deep nets have two different exponential advantages over classic learning algorithms that do not use distributed representations21. Both of these advantages arise from the power of composition and depend on the underlying data-generating distribution having an appropriate componential structure40. First, learning distributed representations enables generalization to new combinations of the values of learned features beyond those seen during training (for example, 2^n combinations are possible with n binary features)68, 69. Second, composing layers of representation in a deep net brings the potential for another exponential advantage70 (exponential in the depth). The hidden layers of a multilayer neural network learn to represent the network's inputs in a way that makes it easy to predict the target outputs. 
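The first advantage is just counting. A distributed code of n binary features can represent 2^n combinations of feature values, whereas a local (one-of-N) code of the same width names only n mutually exclusive symbols:

```python
from itertools import product

n = 10
# A distributed code with n binary features can represent 2**n distinct
# combinations of feature values...
n_distributed = 2 ** n
# ...whereas a local, one-of-N code of the same width represents only n
# mutually exclusive symbols.
n_local = n

# For n = 3, the eight representable combinations are easy to enumerate:
patterns = list(product([0, 1], repeat=3))
```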
This is nicely demonstrated by training a multilayer neural network to predict the next word in a sequence from a local context of earlier words71. Each word in the context is presented to the network as a one-of-N vector, that is, one component has a value of 1 and the rest are 0. In the first layer, each word creates a different pattern of activations, or word vectors (Fig. 4). In a language model, the other layers of the network learn to convert the input word vectors into an output word vector for the predicted next word, which can be used to predict the probability for any word in the vocabulary to appear as the next word. The network learns word vectors that contain many active components each of which can be interpreted as a separate feature of the word, as was first demonstrated27 in the context of learning distributed representations for symbols. These semantic features were not explicitly present in the input. They were discovered by the learning procedure as a good way of factorizing the structured relationships between the input and output symbols into multiple 'micro-rules'. Learning word vectors turned out to also work very well when the word sequences come from a large corpus of real text and the individual micro-rules are unreliable71. When trained to predict the next word in a news story, for example, the learned word vectors for Tuesday and Wednesday are very similar, as are the word vectors for Sweden and Norway. Such representations are called distributed representations because their elements (the features) are not mutually exclusive and their many configurations correspond to the variations seen in the observed data. These word vectors are composed of learned features that were not determined ahead of time by experts, but automatically discovered by the neural network. Vector representations of words learned from text are now very widely used in natural language applications14, 17, 72, 73, 74, 75, 76. 
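The first-layer lookup described here is a matrix product in disguise: presenting a one-of-N vector to a layer of weights simply selects one row of the weight matrix, and that row is the word's pattern of activations. A sketch with an invented four-word vocabulary and random (untrained) weights:

```python
import numpy as np

# Invented four-word vocabulary; in a real language model V is tens of
# thousands and the weights below are learned rather than random.
vocab = ["tuesday", "wednesday", "sweden", "norway"]
V, d = len(vocab), 5                 # vocabulary size, word-vector width

rng = np.random.default_rng(1)
E = rng.standard_normal((V, d))      # first-layer weights: one row per word

def one_hot(word):
    # One-of-N encoding: one component has value 1 and the rest are 0.
    v = np.zeros(V)
    v[vocab.index(word)] = 1.0
    return v

# Multiplying a one-of-N vector by the weight matrix just selects one row:
# that row is the word's vector of activations, its word vector.
vec = one_hot("sweden") @ E
```

After training, nearby rows of such a matrix correspond to semantically related words, as with Sweden and Norway above.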
The issue of representation lies at the heart of the debate between the logic-inspired and the neural-network-inspired paradigms for cognition. In the logic-inspired paradigm, an instance of a symbol is something for which the only property is that it is either identical or non-identical to other symbol instances. It has no internal structure that is relevant to its use; and to reason with symbols, they must be bound to the variables in judiciously chosen rules of inference. By contrast, neural networks just use big activity vectors, big weight matrices and scalar non-linearities to perform the type of fast 'intuitive' inference that underpins effortless commonsense reasoning. Before the introduction of neural language models71, the standard approach to statistical modelling of language did not exploit distributed representations: it was based on counting frequencies of occurrences of short symbol sequences of length up to N (called N-grams). The number of possible N-grams is on the order of VN, where V is the vocabulary size, so taking into account a context of more than a handful of words would require very large training corpora. N-grams treat each word as an atomic unit, so they cannot generalize across semantically related sequences of words, whereas neural language models can because they associate each word with a vector of real valued features, and semantically related words end up close to each other in that vector space (Fig. 4). When backpropagation was first introduced, its most exciting use was for training recurrent neural networks (RNNs). For tasks that involve sequential inputs, such as speech and language, it is often better to use RNNs (Fig. 5). RNNs process an input sequence one element at a time, maintaining in their hidden units a 'state vector' that implicitly contains information about the history of all the past elements of the sequence. 
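For contrast, an N-gram model of the kind described above is little more than a frequency table; with vocabulary size V and context length N there are on the order of V^N entries to estimate. A bigram (N = 2) sketch on an invented toy corpus:

```python
from collections import Counter

# Invented toy corpus; with vocabulary size V and context length N there
# are on the order of V**N possible N-grams to count.
corpus = "the cat sat on the mat the cat ran".split()

# A bigram (N = 2) model is just a frequency table over adjacent pairs.
bigrams = Counter(zip(corpus, corpus[1:]))
contexts = Counter(corpus[:-1])

def p_next(word, context):
    # Maximum-likelihood estimate of P(word | context) from the counts.
    return bigrams[(context, word)] / contexts[context]
```

Because each word is an atomic unit, the counts for "the cat" say nothing about "the dog"; a neural language model's word vectors would place cat and dog close together and generalize across them.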
When we consider the outputs of the hidden units at different discrete time steps as if they were the outputs of different neurons in a deep multilayer network (Fig. 5, right), it becomes clear how we can apply backpropagation to train RNNs. RNNs are very powerful dynamic systems, but training them has proved to be problematic because the backpropagated gradients either grow or shrink at each time step, so over many time steps they typically explode or vanish77, 78. Thanks to advances in their architecture79, 80 and ways of training them81, 82, RNNs have been found to be very good at predicting the next character in the text83 or the next word in a sequence75, but they can also be used for more complex tasks. For example, after reading an English sentence one word at a time, an English 'encoder' network can be trained so that the final state vector of its hidden units is a good representation of the thought expressed by the sentence. This thought vector can then be used as the initial hidden state of (or as extra input to) a jointly trained French 'decoder' network, which outputs a probability distribution for the first word of the French translation. If a particular first word is chosen from this distribution and provided as input to the decoder network it will then output a probability distribution for the second word of the translation and so on until a full stop is chosen17, 72, 76. Overall, this process generates sequences of French words according to a probability distribution that depends on the English sentence. This rather naive way of performing machine translation has quickly become competitive with the state-of-the-art, and this raises serious doubts about whether understanding a sentence requires anything like the internal symbolic expressions that are manipulated by using inference rules. It is more compatible with the view that everyday reasoning involves many simultaneous analogies that each contribute plausibility to a conclusion84, 85. 
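The state-vector recurrence can be written in a few lines; the essential points are that the same weight matrices are reused at every time step and that each new state is a function of the current input and the previous state. Sizes and inputs here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
d_in, d_h = 3, 4                               # invented sizes
W_x = 0.1 * rng.standard_normal((d_h, d_in))   # input -> hidden weights
W_h = 0.1 * rng.standard_normal((d_h, d_h))    # hidden -> hidden weights
b = np.zeros(d_h)

def rnn_forward(xs):
    # Process the sequence one element at a time; the state vector h
    # implicitly summarizes the history of all past elements. The same
    # weights are reused at every step -- the unrolled network of Fig. 5.
    h = np.zeros(d_h)
    states = []
    for x in xs:
        h = np.tanh(W_x @ x + W_h @ h + b)
        states.append(h)
    return states

xs = [rng.standard_normal(d_in) for _ in range(5)]
states = rnn_forward(xs)                       # one state per time step
```

Backpropagating through this loop multiplies gradients by W_h (and the tanh derivative) once per step, which is exactly why they tend to explode or vanish over many steps.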
Instead of translating the meaning of a French sentence into an English sentence, one can learn to 'translate' the meaning of an image into an English sentence (Fig. 3). The encoder here is a deep ConvNet that converts the pixels into an activity vector in its last hidden layer. The decoder is an RNN similar to the ones used for machine translation and neural language modelling. There has been a surge of interest in such systems recently (see examples mentioned in ref. 86). RNNs, once unfolded in time (Fig. 5), can be seen as very deep feedforward networks in which all the layers share the same weights. Although their main purpose is to learn long-term dependencies, theoretical and empirical evidence shows that it is difficult to learn to store information for very long78. To correct for that, one idea is to augment the network with an explicit memory. The first proposal of this kind is the long short-term memory (LSTM) networks that use special hidden units, the natural behaviour of which is to remember inputs for a long time79. A special unit called the memory cell acts like an accumulator or a gated leaky neuron: it has a connection to itself at the next time step that has a weight of one, so it copies its own real-valued state and accumulates the external signal, but this self-connection is multiplicatively gated by another unit that learns to decide when to clear the content of the memory. LSTM networks have subsequently proved to be more effective than conventional RNNs, especially when they have several layers for each time step87, enabling an entire speech recognition system that goes all the way from acoustics to the sequence of characters in the transcription. LSTM networks or related forms of gated units are also currently used for the encoder and decoder networks that perform so well at machine translation17, 72, 76. Over the past year, several authors have made different proposals to augment RNNs with a memory module. 
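A minimal LSTM step, with invented sizes and one common gate layout (a sketch of the mechanism, not the exact formulation of ref. 79), makes the gated self-connection explicit: the memory cell carries its state forward with weight one, scaled by a learned forget gate:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
d_in, d_h = 3, 4                     # invented sizes
# One weight matrix per gate plus the candidate update; each sees the
# current input concatenated with the previous hidden state.
Wi, Wf, Wo, Wc = (0.1 * rng.standard_normal((d_h, d_in + d_h)) for _ in range(4))

def lstm_step(x, h, c):
    z = np.concatenate([x, h])
    i = sigmoid(Wi @ z)              # input gate: how much new signal to add
    f = sigmoid(Wf @ z)              # forget gate: learns when to clear memory
    o = sigmoid(Wo @ z)              # output gate
    # The memory cell copies its own state (a self-connection of weight one),
    # multiplicatively gated by f, and accumulates the gated external signal.
    c_new = f * c + i * np.tanh(Wc @ z)
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

Because the cell-to-cell path is additive rather than repeatedly squashed, gradients survive over many more time steps than in the plain recurrence.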
Proposals include the Neural Turing Machine in which the network is augmented by a 'tape-like' memory that the RNN can choose to read from or write to88, and memory networks, in which a regular network is augmented by a kind of associative memory89. Memory networks have yielded excellent performance on standard question-answering benchmarks. The memory is used to remember the story about which the network is later asked to answer questions. Beyond simple memorization, neural Turing machines and memory networks are being used for tasks that would normally require reasoning and symbol manipulation. Neural Turing machines can be taught 'algorithms'. Among other things, they can learn to output a sorted list of symbols when their input consists of an unsorted sequence in which each symbol is accompanied by a real value that indicates its priority in the list88. Memory networks can be trained to keep track of the state of the world in a setting similar to a text adventure game and after reading a story, they can answer questions that require complex inference90. In one test example, the network is shown a 15-sentence version of The Lord of the Rings and correctly answers questions such as “where is Frodo now?”89. Unsupervised learning91, 92, 93, 94, 95, 96, 97, 98 had a catalytic effect in reviving interest in deep learning, but has since been overshadowed by the successes of purely supervised learning. Although we have not focused on it in this Review, we expect unsupervised learning to become far more important in the longer term. Human and animal learning is largely unsupervised: we discover the structure of the world by observing it, not by being told the name of every object. Human vision is an active process that sequentially samples the optic array in an intelligent, task-specific way using a small, high-resolution fovea with a large, low-resolution surround. 
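The 'associative memory' at the heart of such models is typically a differentiable content-based read: the memory is addressed by similarity to a query rather than by position, and the result is a softmax-weighted blend of slots. The sketch below uses a tiny hand-built memory and illustrates only the mechanism, not the exact models of refs 88, 89:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# A tiny hand-built memory: four slots, four-dimensional contents.
memory = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0, 1.0]])

# A noisy cue that mostly resembles slot 2.
query = np.array([0.1, 0.0, 0.9, 0.0])

# Content-based read: address the memory by similarity to the query,
# then return a softmax-weighted blend of the slots. Everything here is
# differentiable, so such a read can be trained end-to-end by backpropagation.
weights = softmax(memory @ query)
read = weights @ memory
```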
We expect much of the future progress in vision to come from systems that are trained end-to-end and combine ConvNets with RNNs that use reinforcement learning to decide where to look. Systems combining deep learning and reinforcement learning are in their infancy, but they already outperform passive vision systems99 at classification tasks and produce impressive results in learning to play many different video games100. Natural language understanding is another area in which deep learning is poised to make a large impact over the next few years. We expect systems that use RNNs to understand sentences or whole documents will become much better when they learn strategies for selectively attending to one part at a time76, 86. Ultimately, major progress in artificial intelligence will come about through systems that combine representation learning with complex reasoning. Although deep learning and simple reasoning have been used for speech and handwriting recognition for a long time, new paradigms are needed to replace rule-based manipulation of symbolic expressions by operations on large vectors101. The authors would like to thank the Natural Sciences and Engineering Research Council of Canada, the Canadian Institute For Advanced Research (CIFAR), the National Science Foundation and Office of Naval Research for support. Y.L. and Y.B. are CIFAR fellows.
Knowledge harvesting in the big-data era The proliferation of knowledge-sharing communities such as Wikipedia and the progress in scalable information extraction from Web and text sources have enabled the automatic construction of very large knowledge bases. Endeavors of this kind include projects such as DBpedia, Freebase, KnowItAll, ReadTheWeb, and YAGO. These projects provide automatically constructed knowledge bases of facts about named entities, their semantic classes, and their mutual relationships. They contain millions of entities and hundreds of millions of facts about them. Such world knowledge in turn enables cognitive applications and knowledge-centric services like disambiguating natural-language text, semantic search for entities and relations in Web and enterprise data, and entity-oriented analytics over unstructured contents. Prominent examples of how knowledge bases can be harnessed include the Google Knowledge Graph and the IBM Watson question answering system. This tutorial presents state-of-the-art methods, recent advances, research opportunities, and open challenges along this avenue of knowledge harvesting and its applications. Particular emphasis will be on the twofold role of knowledge bases for big-data analytics: using scalable distributed algorithms for harvesting knowledge from Web and text sources, and leveraging entity-centric knowledge for deeper interpretation of and better intelligence with Big Data.
Reservoir computing approaches to recurrent neural network training Echo State Networks and Liquid State Machines introduced a new paradigm in artificial recurrent neural network (RNN) training, where an RNN (the reservoir) is generated randomly and only a readout is trained. The paradigm, becoming known as reservoir computing, greatly facilitated the practical application of RNNs and outperformed classical fully trained RNNs in many tasks. It has lately become a vivid research field with numerous extensions of the basic idea, including reservoir adaptation, thus broadening the initial paradigm to using different methods for training the reservoir and the readout. This review systematically surveys both current ways of generating/adapting the reservoirs and training different types of readouts. It offers a natural conceptual classification of the techniques, which transcends boundaries of the current “brand-names” of reservoir methods, and thus aims to help in unifying the field and providing the reader with a detailed “map” of it.
Comment on "On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes" Comparison of generative and discriminative classifiers is an everlasting topic. As an important contribution to this topic, based on their theoretical and empirical comparisons between the naïve Bayes classifier and linear logistic regression, Ng and Jordan (NIPS, 2001, pp. 841–848) claimed that there exist two distinct regimes of performance between the generative and discriminative classifiers with regard to the training-set size. In this paper, our empirical and simulation studies, as a complement of their work, however, suggest that the existence of the two distinct regimes may not be so reliable. In addition, for real world datasets, so far there is no theoretically correct, general criterion for choosing between the discriminative and the generative approaches to classification of an observation x into a class y; the choice depends on the relative confidence we have in the correctness of the specification of either p(y|x) or p(x, y) for the data. This can be to some extent a demonstration of why Efron (J Am Stat Assoc 70(352):892–898, 1975) and O'Neill (J Am Stat Assoc 75(369):154–160, 1980) prefer normal-based linear discriminant analysis (LDA) when no model mis-specification occurs but other empirical studies may prefer linear logistic regression instead. Furthermore, we suggest that pairing of either LDA assuming a common diagonal covariance matrix (diagonal LDA) or the naïve Bayes classifier and linear logistic regression may not be perfect, and hence it may not be reliable for any claim that was derived from the comparison between diagonal LDA or the naïve Bayes classifier and linear logistic regression to be generalised to all generative and discriminative classifiers.
Dynamic Trajectory Planning for Vehicle Autonomous Driving Trajectory planning is one of the key and challenging tasks in autonomous driving. This paper proposes a novel method that dynamically plans trajectories, with the aim of achieving quick and safe reaction to the changing driving environment and an optimal balance between vehicle performance and driving comfort. With the proposed method, such complex maneuvers can be decomposed into two sub-maneuvers, i.e., lane change and lane keeping, or their combinations, such that the trajectory planning is generalized and simplified, mainly based on lane change maneuvers. A twofold optimization-based method is proposed for stationary trajectory planning as well as dynamic trajectory planning in the presence of a dynamic traffic environment. Simulation is conducted to demonstrate the efficiency and effectiveness of the proposed method.
Testing autonomous cars for feature interaction failures using many-objective search. Complex systems such as autonomous cars are typically built as a composition of features that are independent units of functionality. Features tend to interact and impact one another's behavior in unknown ways. A challenge is to detect and manage feature interactions, in particular, those that violate system requirements, hence leading to failures. In this paper, we propose a technique to detect feature interaction failures by casting this problem into a search-based test generation problem. We define a set of hybrid test objectives (distance functions) that combine traditional coverage-based heuristics with new heuristics specifically aimed at revealing feature interaction failures. We develop a new search-based test generation algorithm, called FITEST, that is guided by our hybrid test objectives. FITEST extends recently proposed many-objective evolutionary algorithms to reduce the time required to compute fitness values. We evaluate our approach using two versions of an industrial self-driving system. Our results show that our hybrid test objectives are able to identify more than twice as many feature interaction failures as two baseline test objectives used in the software testing literature (i.e., coverage-based and failure-based test objectives). Further, the feedback from domain experts indicates that the detected feature interaction failures represent real faults in their systems that were not previously identified based on analysis of the system features and their requirements.
Deep Autoencoder Neural Networks for Short-Term Traffic Congestion Prediction of Transportation Networks. Traffic congestion prediction is critical for implementing intelligent transportation systems for improving the efficiency and capacity of transportation networks. However, despite its importance, traffic congestion prediction is far less investigated than traffic flow prediction, which is partially due to the severe lack of large-scale high-quality traffic congestion data and advanced algorithms. This paper proposes an accessible and general workflow to acquire large-scale traffic congestion data and to create traffic congestion datasets based on image analysis. With this workflow we create a dataset named Seattle Area Traffic Congestion Status (SATCS) based on traffic congestion map snapshots from a publicly available online traffic service provider Washington State Department of Transportation. We then propose a deep autoencoder-based neural network model with symmetrical layers for the encoder and the decoder to learn temporal correlations of a transportation network and to predict traffic congestion. Our experimental results on the SATCS dataset show that the proposed DCPN model can efficiently and effectively learn temporal relationships of congestion levels of the transportation network for traffic congestion forecasting. Our method outperforms two other state-of-the-art neural network models in prediction performance, generalization capability, and computation efficiency.
Testing Scenario Library Generation for Connected and Automated Vehicles: An Adaptive Framework How to generate testing scenario libraries for connected and automated vehicles (CAVs) is a major challenge faced by the industry. In previous studies, to evaluate the maneuver challenge of a scenario, surrogate models (SMs) are often used without explicit knowledge of the CAV under test. However, performance dissimilarities between the SM and the CAV under test usually exist, and they can lead to the generation of suboptimal scenario libraries. In this article, an adaptive testing scenario library generation (ATSLG) method is proposed to solve this problem. A customized testing scenario library for a specific CAV model is generated through an adaptive process. To compensate for the performance dissimilarities and leverage each test of the CAV, Bayesian optimization techniques are applied with classification-based Gaussian Process Regression and a newly designed acquisition function. Compared with a pre-determined library, a CAV can be tested and evaluated in a more efficient manner with the customized library. To validate the proposed method, a cut-in case study is investigated and the results demonstrate that the proposed method can further accelerate the evaluation process by a few orders of magnitude.
Deep Reinforcement Learning for Mobile Edge Caching: Review, New Features, and Open Issues. Mobile edge caching is a promising technique to reduce network traffic and improve the quality of experience of mobile users. However, mobile edge caching is a challenging decision making problem with unknown future content popularity and complex network characteristics. In this article, we advocate the use of DRL to solve mobile edge caching problems by presenting an overview of recent works on m...
A tutorial on support vector regression In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective.
Parallel Control and Management for Intelligent Transportation Systems: Concepts, Architectures, and Applications Parallel control and management have been proposed as a new mechanism for conducting operations of complex systems, especially those that involved complexity issues of both engineering and social dimensions, such as transportation systems. This paper presents an overview of the background, concepts, basic methods, major issues, and current applications of Parallel transportation Management Systems (PtMS). In essence, parallel control and management is a data-driven approach for modeling, analysis, and decision-making that considers both the engineering and social complexity in its processes. The developments and applications described here clearly indicate that PtMS is effective for use in networked complex traffic systems and is closely related to emerging technologies in cloud computing, social computing, and cyberphysical-social systems. A description of PtMS system architectures, processes, and components, including OTSt, Dyna CAS, aDAPTS, iTOP, and TransWorld is presented and discussed. Finally, the experiments and examples of real-world applications are illustrated and analyzed.
Hierarchical mesh segmentation based on fitting primitives In this paper, we describe a hierarchical face clustering algorithm for triangle meshes based on fitting primitives belonging to an arbitrary set. The method proposed is completely automatic, and generates a binary tree of clusters, each of which is fitted by one of the primitives employed. Initially, each triangle represents a single cluster; at every iteration, all the pairs of adjacent clusters are considered, and the one that can be better approximated by one of the primitives forms a new single cluster. The approximation error is evaluated using the same metric for all the primitives, so that it makes sense to choose which is the most suitable primitive to approximate the set of triangles in a cluster. Based on this approach, we have implemented a prototype that uses planes, spheres and cylinders, and have found experimentally that, for meshes made of 100 K faces, the whole binary tree of clusters can be built in about 8 s on a standard PC. The framework described here has natural application in reverse engineering processes, but it has also been tested for surface denoising, feature recovery and character skinning.
Power-Domain Non-Orthogonal Multiple Access (NOMA) in 5G Systems: Potentials and Challenges. Non-orthogonal multiple access (NOMA) is one of the promising radio access techniques for performance enhancement in next-generation cellular communications. Compared to orthogonal frequency division multiple access, which is a well-known high-capacity orthogonal multiple access technique, NOMA offers a set of desirable benefits, including greater spectrum efficiency. There are different types of ...
Driver’s Intention Identification With the Involvement of Emotional Factors in Two-Lane Roads Driver’s emotion is a psychological reaction to environmental stimulus. Driver intention is an internal state of mind, which directs the actions in the next moment during driving. Emotions usually have a strong influence on behavioral intentions. Therefore, emotion is an important factor that should be considered, to accurately identify driver’s intention. This study used the support vector machin...
Convergence of IoT and product lifecycle management in medical health care. Emerging trends in the Internet of Medical Things (IoMT), or Medical Internet of Things (MIoT), and miniaturized devices have entirely changed the healthcare landscape. Heterogeneous sensor-enabled devices face two main challenges during connectivity and convergence with other domains: first, information/knowledge sharing and collaboration between several communicating parties, such as from manufacturing engineer to medical expert, and from hospitals/healthcare centers to patients during disease diagnosis and treatment; second, the battery lifecycle and energy management of wearable/portable devices. This paper solves the first problem by integrating IoMT with Product Lifecycle Management (PLM), to regulate the information transfer from one entity to another and between devices in an efficient and accurate way. The second issue is resolved by proposing two algorithms, a battery recovery-based algorithm (BRA) and a joint energy harvesting and duty-cycle optimization-based (JEHDO) algorithm, for managing the battery lifecycle and energy of resource-constrained tiny wearable devices, respectively. Besides, a novel joint IoMT- and PLM-based framework is proposed for medical healthcare applications. Experimental results reveal that BRA and JEHDO are battery-efficient and energy-efficient, respectively.
Privacy Enabled Digital Rights Management Without Trusted Third Party Assumption Digital rights management systems are required to provide security and accountability without violating the privacy of the entities involved. However, achieving privacy along with accountability in the same framework is hard as these attributes are mutually contradictory. Thus, most of the current digital rights management systems rely on trusted third parties to provide privacy to the entities involved. However, a trusted third party can become malicious and break the privacy protection of the entities in the system. Hence, in this paper, we propose a novel privacy preserving content distribution mechanism for digital rights management without relying on the trusted third party assumption. We use simple primitives such as blind decryption and one way hash chain to avoid the trusted third party assumption. We prove that our scheme is not prone to the “oracle problem” of the blind decryption mechanism. The proposed mechanism supports access control without degrading user's privacy as well as allows revocation of even malicious users without violating their privacy.
An efficient conditionally anonymous ring signature in the random oracle model A conditionally anonymous ring signature is an exception since its anonymity is conditional. Specifically, it allows an entity to confirm/refute a signature that he generated before. A group signature shares the same property, since a group manager can revoke a signer's anonymity using trapdoor information; however, such a special node (i.e., a group manager) does not exist in the group, in keeping with the ad hoc setting. In this paper, we construct a new conditionally anonymous ring signature, in which the actual signer can be traced without the help of a group manager. The big advantage of our confirmation and disavowal protocols is that they are non-interactive with constant costs, whereas the known schemes suffer a cost linear in the ring size n or the security parameter s.
Threats to Networking Cloud and Edge Datacenters in the Internet of Things. Several application domains are collecting data using Internet of Things sensing devices and shipping it to remote cloud datacenters for analysis (fusion, storage, and processing). Data analytics activities raise a new set of technical challenges from the perspective of ensuring end-to-end security and privacy of data as it travels from an edge datacenter (EDC) to a cloud datacenter (CDC) (or vice...
SAFE: Secure Appliance Scheduling for Flexible and Efficient Energy Consumption for Smart Home IoT Smart homes (SHs) aim at forming an energy optimized environment that can efficiently regulate the use of various Internet of Things (IoT) devices in its network. Real-time electricity pricing models along with SHs provide users an opportunity to reduce their electricity expenditure by responding to the pricing that varies with different times of the day, resulting in reducing the expenditure at both customers’ and utility provider’s end. However, responding to such prices and effectively scheduling the appliances under such complex dynamics is a challenging optimization problem to be solved by the provider or by third party services. As communication in SH-IoT environment is extremely sensitive and private, reporting of such usage information to the provider to solve the optimization has a potential risk that the provider or third party services may track users’ energy consumption profile which compromises users’ privacy. To address these issues, we developed a homomorphic encryption-based alternating direction method of multipliers approach to solve the cost-aware appliance scheduling optimization in a distributed manner and schedule home appliances without leaking users’ privacy. Through extensive simulation study considering real-world datasets, we show that the proposed secure appliance scheduling for flexible and efficient energy consumption scheme, namely SAFE, effectively lowers electricity cost while preserving users’ privacy.
A Blockchain-Based Scheme For Privacy-Preserving And Secure Sharing Of Medical Data How to alleviate the contradiction between the patient's privacy and the research or commercial demands of health data has become the challenging problem of intelligent medical system with the exponential increase of medical data. In this paper, a blockchain-based privacy-preserving scheme is proposed, which realizes secure sharing of medical data between several entities involved patients, research institutions and semi-trusted cloud servers. And meanwhile, it achieves the data availability and consistency between patients and research institutions, where zero-knowledge proof is employed to verify whether the patient's medical data meets the specific requirements proposed by research institutions without revealing patients' privacy, and then the proxy re-encryption technology is adopted to ensure that research institutions can decrypt the intermediary ciphertext. In addition, this proposal can execute distributed consensus based on PBFT algorithm for transactions between patients and research institutions according to the prearranged terms. Theoretical analysis shows the proposed scheme can satisfy security and privacy requirements such as confidentiality, integrity and availability, as well as performance evaluation demonstrates it is feasible and efficient in contrast with other typical schemes. (C) 2020 Elsevier Ltd. All rights reserved.
Chaos-Based Content Distribution Framework for Digital Rights Management System Multimedia contents are digitally utilized these days. Thus, the development of an effective method to access the content is becoming the topmost priority of the entertainment industry to protect the digital content from unauthorized access. Digital rights management (DRM) systems are the technique that makes digital content accessible only to the legal rights holders. As the Internet of Things environment is used in the distribution and access of digital content, a secure and efficient content delivery mechanism is also required. Keeping the focus on these points, this article proposes a content distribution framework for DRM system using chaotic map. Formal security verification under the random oracle model, which uncovers the proposed protocol's capability to resist the critical attacks is given. Moreover, simulation study for security verification is performed using the broadly accepted “automated validation of Internet security protocols and applications,” which indicates that the protocol is safe. Moreover, the detailed comparative study with related protocols demonstrates that it provides better security and improves the computational and communication efficiency.
Constrained Kalman filtering for indoor localization of transport vehicles using floor-installed HF RFID transponders Localization of transport vehicles is an important issue for many intralogistics applications. The paper presents an inexpensive solution for indoor localization of vehicles. Global localization is realized by detection of RFID transponders, which are integrated in the floor. The paper presents a novel algorithm for fusing RFID readings with odometry using Constraint Kalman filtering. The paper presents experimental results with a Mecanum based omnidirectional vehicle on a NaviFloor® installation, which includes passive HF RFID transponders. The experiments show that the proposed Constraint Kalman filter provides a similar localization accuracy compared to a Particle filter but with much lower computational expense.
Constrained Multiobjective Optimization for IoT-Enabled Computation Offloading in Collaborative Edge and Cloud Computing Internet-of-Things (IoT) applications are becoming more resource-hungry and latency-sensitive, which are severely constrained by limited resources of current mobile hardware. Mobile cloud computing (MCC) can provide abundant computation resources, while mobile-edge computing (MEC) aims to reduce the transmission latency by offloading complex tasks from IoT devices to nearby edge servers. It is sti...
MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition Decomposition is a basic strategy in traditional multiobjective optimization. However, it has not yet been widely used in multiobjective evolutionary optimization. This paper proposes a multiobjective evolutionary algorithm based on decomposition (MOEA/D). It decomposes a multiobjective optimization problem into a number of scalar optimization subproblems and optimizes them simultaneously. Each subproblem is optimized by only using information from its several neighboring subproblems, which makes MOEA/D have lower computational complexity at each generation than MOGLS and nondominated sorting genetic algorithm II (NSGA-II). Experimental results have demonstrated that MOEA/D with simple decomposition methods outperforms or performs similarly to MOGLS and NSGA-II on multiobjective 0-1 knapsack problems and continuous multiobjective optimization problems. It has been shown that MOEA/D using objective normalization can deal with disparately-scaled objectives, and MOEA/D with an advanced decomposition method can generate a set of very evenly distributed solutions for 3-objective test instances. The ability of MOEA/D with small population, the scalability and sensitivity of MOEA/D have also been experimentally investigated in this paper.
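The decomposition idea above can be illustrated with the Tchebycheff scalarizing function, one of the schemes MOEA/D supports. A minimal sketch for the 2-objective case follows; the function names and the weight generator are illustrative, not taken from the paper.

```python
def tchebycheff(f, weights, z_star):
    """Tchebycheff scalarization: g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|.
    Each MOEA/D subproblem minimizes this for its own weight vector,
    exchanging information only with neighboring subproblems."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, z_star))

def weight_vectors(n):
    # evenly spaced weight vectors for a 2-objective problem; the
    # neighbors of a subproblem are those with the closest weights
    return [(i / (n - 1), 1 - i / (n - 1)) for i in range(n)]
```

Minimizing `tchebycheff` across all weight vectors drives the population toward an evenly spread approximation of the Pareto front.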
A Model for Understanding How Virtual Reality Aids Complex Conceptual Learning Designers and evaluators of immersive virtual reality systems have many ideas concerning how virtual reality can facilitate learning. However, we have little information concerning which of virtual reality's features provide the most leverage for enhancing understanding or how to customize those affordances for different learning environments. In part, this reflects the truly complex nature of learning. Features of a learning environment do not act in isolation; other factors such as the concepts or skills to be learned, individual characteristics, the learning experience, and the interaction experience all play a role in shaping the learning process and its outcomes. Through Project Science Space, we have been trying to identify, use, and evaluate immersive virtual reality's affordances as a means to facilitate the mastery of complex, abstract concepts. In doing so, we are beginning to understand the interplay between virtual reality's features and other important factors in shaping the learning process and learning outcomes for this type of material. In this paper, we present a general model that describes how we think these factors work together and discuss some of the lessons we are learning about virtual reality's affordances in the context of this model for complex conceptual learning.
Solving the data sparsity problem in destination prediction Destination prediction is an essential task for many emerging location-based applications such as recommending sightseeing places and targeted advertising according to destinations. A common approach to destination prediction is to derive the probability of a location being the destination based on historical trajectories. However, almost all the existing techniques use various kinds of extra information such as road network, proprietary travel planner, statistics requested from government, and personal driving habits. Such extra information, in most circumstances, is unavailable or very costly to obtain. Thereby we approach the task of destination prediction by using only historical trajectory dataset. However, this approach encounters the \"data sparsity problem\", i.e., the available historical trajectories are far from enough to cover all possible query trajectories, which considerably limits the number of query trajectories that can obtain predicted destinations. We propose a novel method named Sub-Trajectory Synthesis (SubSyn) to address the data sparsity problem. SubSyn first decomposes historical trajectories into sub-trajectories comprising two adjacent locations, and then connects the sub-trajectories into \"synthesised\" trajectories. This process effectively expands the historical trajectory dataset to contain much more trajectories. Experiments based on real datasets show that SubSyn can predict destinations for up to ten times more query trajectories than a baseline prediction algorithm. Furthermore, the running time of the SubSyn-training algorithm is almost negligible for a large set of 1.9 million trajectories, and the SubSyn-prediction algorithm runs over two orders of magnitude faster than the baseline prediction algorithm constantly.
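A toy sketch of SubSyn's decompose-and-reconnect idea, assuming function names of my own choosing; the actual method attaches probabilities to sub-trajectories, which this enumeration-only version omits.

```python
from collections import defaultdict

def decompose(trajectories):
    """Decompose historical trajectories into sub-trajectories of two
    adjacent locations, recording which location can follow which."""
    edges = defaultdict(set)
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            edges[a].add(b)
    return edges

def synthesise(edges, start, max_len):
    """Chain sub-trajectories into synthesised trajectories up to
    max_len, covering routes never observed whole in the history."""
    out, stack = [], [[start]]
    while stack:
        path = stack.pop()
        out.append(path)
        if len(path) < max_len:
            for nxt in edges.get(path[-1], ()):
                if nxt not in path:  # keep paths simple (no revisits)
                    stack.append(path + [nxt])
    return out
```

For example, from histories A→B→C and B→D, the synthesised set contains A→B→D even though no single historical trajectory ever covered it, which is exactly how the expanded dataset combats sparsity.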
Design of robust fuzzy fault detection filter for polynomial fuzzy systems with new finite frequency specifications This paper investigates the problem of fault detection filter design for discrete-time polynomial fuzzy systems with faults and unknown disturbances. The frequency ranges of the faults and the disturbances are assumed to be known beforehand and to reside in low, middle or high frequency ranges. Thus, the proposed filter is designed in the finite frequency range to overcome the conservatism generated by those designed in the full frequency domain. Being of polynomial fuzzy structure, the proposed filter combines the H−/H∞ performances in order to ensure the best robustness to the disturbance and the best sensitivity to the fault. Design conditions are derived in Sum Of Squares formulations that can be easily solved via available software tools. Two illustrative examples are introduced to demonstrate the effectiveness of the proposed method and a comparative study with LMI method is also provided.
Intention-detection strategies for upper limb exosuits: model-based myoelectric vs dynamic-based control The cognitive human-robot interaction between an exosuit and its wearer plays a key role in determining both the biomechanical effects of the device on movements and its perceived effectiveness. There is a lack of evidence, however, on the comparative performance of different control methods, implemented on the same device. Here, we compare two different control approaches on the same robotic suit: a model-based myoelectric control (myoprocessor), which estimates the joint torque from the activation of target muscles, and a dynamic-based control that provides support against gravity using an inverse dynamic model. Tested on a cohort of four healthy participants, assistance from the exosuit results in a marked reduction in the effort of muscles working against gravity with both control approaches (peak reduction of 68.6±18.8% for the dynamic arm model and 62.4±25.1% for the myoprocessor), when compared to an unpowered condition. Neither of the two controllers had an effect on the performance of their users in a joint-angle tracking task (peak errors of 15.4° and 16.4° for the dynamic arm model and myoprocessor, respectively, compared to 13.1° in the unpowered condition). However, our results highlight the remarkable ability of the myoprocessor to seamlessly adapt to changing external dynamics.
The potential of immersive virtual reality for representations in design education This paper examines the potential of immersive virtual reality technology for design education. A quasi-experimental study has been conducted with 40 students of different expertise levels. The students analysed a design representation using one of the two visualisation technologies: immersive virtual reality (IVR) and non-immersive virtual reality (nIVR). The results show that the expertise in the used technology and the expertise in the design domain significantly affect design understanding. On the other hand, the effect of contextual expertise was not found significant. Spatial ability affected design understanding in nIVR but not in the IVR. Visualisation technology did not have an overall effect on understanding, but IVR helped students with lower expertise to understand specific aspects of a design better (e.g. rotation-based mechanisms). The study suggests that researchers and educators control the students’ expertise when assessing the effect of technology on design education. Overall, the results support the constructivist learning theory, as IVR can support context-dependent and context-independent understanding.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
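A minimal sketch of the core BLEU computation, modified n-gram precision combined with a brevity penalty; real BLEU is corpus-level and supports multiple references, which this toy sentence-level version omits.

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=2):
    """Toy sentence-level BLEU: geometric mean of modified n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        # "modified" precision: clip candidate counts by reference counts
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    # brevity penalty punishes candidates shorter than the reference
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

Clipping the counts stops a candidate from earning credit by repeating one reference word, and the brevity penalty stops it from gaming precision by being very short, the two ingredients that make the metric correlate with human judgment.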
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported.
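The two-direction idea can be sketched as a minimal forward pass, assuming simple tanh cells and weight shapes of my own choosing (no training loop, biases, or output layer):

```python
import numpy as np

def brnn_forward(x_seq, W_f, U_f, W_b, U_b):
    """Minimal BRNN forward pass: one hidden state run in the positive
    time direction, one in the negative direction, concatenated per
    step so every output sees both past and future context."""
    T, h_dim = len(x_seq), U_f.shape[0]
    h_f, h_b = np.zeros(h_dim), np.zeros(h_dim)
    fwd, bwd = [None] * T, [None] * T
    for t in range(T):                    # positive time direction
        h_f = np.tanh(W_f @ x_seq[t] + U_f @ h_f)
        fwd[t] = h_f
    for t in reversed(range(T)):          # negative time direction
        h_b = np.tanh(W_b @ x_seq[t] + U_b @ h_b)
        bwd[t] = h_b
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]
```

Because the backward pass consumes the whole sequence before emitting its first state, the network needs no preset future-frame cutoff, which is the limitation the paper removes.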
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended for Bob, and non-repudiation of receipt evidences destined for Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with trusted third party (TTP). For the latter ones we discuss the evolution of the TTP's involvement and, among others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probabilistically. Because there are no crossover rate and mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than the conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results over the conventional one and other methods.
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal to develop interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which rise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in specific, in this paper, we enfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidate it based on our comprehensive empirical experience with research in kindergarten settings, both laboratory and real-world contexts, and (iii) provide a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
Multi-stage attention spatial-temporal graph networks for traffic prediction Accurate traffic prediction plays an important role in Intelligent Transportation System. This problem is very challenging due to the heterogeneity and dynamic spatio-temporal dependence of large-scale traffic data. Existing models often suffer two limitations: (1) They usually only consider one type of data in the input, or simply treat other collected time series data as features, ignoring the non-linear interactions among different series. In fact, heterogeneous data at a specific location has direct impacts on the predicted series. (2) The method based on graph convolutional network uses a fixed Laplacian matrix to model spatial correlation, without considering its dynamics. The aggregations also occur only in the neighborhood, making it difficult to capture long-range dependencies. In this paper, we propose a Multi-Stage Attention Spatial-Temporal Graph Networks (MASTGN). First, an internal attention mechanism is designed to capture the interactions among multiple time series collected by the same sensor. Second, to model the complex spatial correlations, we apply a dynamic neighborhood-based attention mechanism. Unlike the general attention-based methods that ignore the structure information of the road network, we use the adjacency relations as a prior to divide the nodes of a road network into different neighborhood sets. In this way, attention can capture spatial correlations both within the same order neighborhood, and among different neighborhoods dynamically. Furthermore, a temporal attention mechanism is used to extract the dynamic temporal dependencies. Experiments are conducted on two real traffic datasets, and the results verify the effectiveness of the proposed model.
Forecasting holiday daily tourist flow based on seasonal support vector regression with adaptive genetic algorithm. •The model of support vector regression with adaptive genetic algorithm and the seasonal mechanism is proposed.•Parameter selection and seasonal adjustment should be carried out carefully.•We focus on the latest and representative holiday daily data in China.•Two experiments are used to prove the effect of the model.•The AGASSVR is superior to AGA-SVR and BPNN.
Regression conformal prediction with random forests Regression conformal prediction produces prediction intervals that are valid, i.e., the probability of excluding the correct target value is bounded by a predefined confidence level. The most important criterion when comparing conformal regressors is efficiency; the prediction intervals should be as tight (informative) as possible. In this study, the use of random forests as the underlying model for regression conformal prediction is investigated and compared to existing state-of-the-art techniques, which are based on neural networks and k-nearest neighbors. In addition to their robust predictive performance, random forests allow for determining the size of the prediction intervals by using out-of-bag estimates instead of requiring a separate calibration set. An extensive empirical investigation, using 33 publicly available data sets, was undertaken to compare the use of random forests to existing state-of-the-art conformal predictors. The results show that the suggested approach, on almost all confidence levels and using both standard and normalized nonconformity functions, produced significantly more efficient conformal predictors than the existing alternatives.
Learning to Predict Bus Arrival Time From Heterogeneous Measurements via Recurrent Neural Network Bus arrival time prediction intends to improve the level of the services provided by transportation agencies. Intuitively, many stochastic factors affect the predictability of the arrival time, e.g., weather and local events. Moreover, the arrival time prediction for a current station is closely correlated with that of multiple passed stations. Motivated by the observations above, this paper propo...
Hybrid Spatio-Temporal Graph Convolutional Network: Improving Traffic Prediction with Navigation Data Traffic forecasting has recently attracted increasing interest due to the popularity of online navigation services, ridesharing and smart city projects. Owing to the non-stationary nature of road traffic, forecasting accuracy is fundamentally limited by the lack of contextual information. To address this issue, we propose the Hybrid Spatio-Temporal Graph Convolutional Network (H-STGCN), which is able to "deduce" future travel time by exploiting the data of upcoming traffic volume. Specifically, we propose an algorithm to acquire the upcoming traffic volume from an online navigation engine. Taking advantage of the piecewise-linear flow-density relationship, a novel transformer structure converts the upcoming volume into its equivalent in travel time. We combine this signal with the commonly-utilized travel-time signal, and then apply graph convolution to capture the spatial dependency. Particularly, we construct a compound adjacency matrix which reflects the innate traffic proximity. We conduct extensive experiments on real-world datasets. The results show that H-STGCN remarkably outperforms state-of-the-art methods in various metrics, especially for the prediction of non-recurring congestion.
Long-Term Traffic Speed Prediction Based on Multiscale Spatio-Temporal Feature Learning Network Speed plays a significant role in evaluating the evolution of traffic status, and predicting speed is one of the fundamental tasks for the intelligent transportation system. There exists a large number of works on speed forecast; however, the problem of long-term prediction for the next day is still not well addressed. In this paper, we propose a multiscale spatio-temporal feature learning network (MSTFLN) as the model to handle the challenging task of long-term traffic speed prediction for elevated highways. Raw traffic speed data collected from loop detectors every 5 min are transformed into spatial-temporal matrices; each matrix represents the one-day speed information, rows of the matrix indicate the numbers of loop detectors, and time intervals are denoted by columns. To predict the traffic speed of a certain day, nine speed matrices of three historical days with three different time scales are served as the input of MSTFLN. The proposed MSTFLN model consists of convolutional long short-term memories and convolutional neural networks. Experiments are evaluated using the data of three main elevated highways in Shanghai, China. The presented results demonstrate that our approach outperforms the state-of-the-art work and it can effectively predict the long-term speed information.
Forecasting Short-Term Passenger Flow: An Empirical Study on Shenzhen Metro Forecasting short-term traffic flow has been a critical topic in transportation research for decades, which aims to facilitate dynamic traffic control proactively by monitoring the present traffic and foreseeing its immediate future. In this paper, we focus on forecasting short-term passenger flow at subway stations by utilizing the data collected through an automatic fare collection (AFC) system along with various external factors, where passenger flow refers to the volume of arrivals at stations during a given period of time. Along this line, we propose a data-driven three-stage framework for short-term passenger flow forecasting, consisting of traffic data profiling, feature extraction, and predictive modeling. We investigate the effect of temporal and spatial features as well as external weather influence on passenger flow forecasting. Various forecasting models, including the time series model auto-regressive integrated moving average, linear regression, and support vector regression, are employed for evaluating the performance of the proposed framework. Moreover, using a real data set collected from the Shenzhen AFC system, we conduct extensive experiments for methods validation, feature evaluation, and data resolution demonstration.
Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting. Timely accurate traffic forecast is crucial for urban traffic control and guidance. Due to the high nonlinearity and complexity of traffic flow, traditional methods cannot satisfy the requirements of mid-and-long term prediction tasks and often neglect spatial and temporal dependencies. In this paper, we propose a novel deep learning framework, Spatio-Temporal Graph Convolutional Networks (STGCN), to tackle the time series prediction problem in traffic domain. Instead of applying regular convolutional and recurrent units, we formulate the problem on graphs and build the model with complete convolutional structures, which enable much faster training speed with fewer parameters. Experiments show that our model STGCN effectively captures comprehensive spatio-temporal correlations through modeling multi-scale traffic networks and consistently outperforms state-of-the-art baselines on various real-world traffic datasets.
An online mechanism for multi-unit demand and its application to plug-in hybrid electric vehicle charging We develop an online mechanism for the allocation of an expiring resource to a dynamic agent population. Each agent has a non-increasing marginal valuation function for the resource, and an upper limit on the number of units that can be allocated in any period. We propose two versions of a truthful allocation mechanism. Each modifies the decisions of a greedy online assignment algorithm by sometimes cancelling an allocation of resources. One version makes this modification immediately upon an allocation decision while a second waits until the point at which an agent departs the market. Adopting a prior-free framework, we show that the second approach has better worst-case allocative efficiency and is more scalable. On the other hand, the first approach (with immediate cancellation) may be easier in practice because it does not need to reclaim units previously allocated. We consider an application to recharging plug-in hybrid electric vehicles (PHEVs). Using data from a real-world trial of PHEVs in the UK, we demonstrate higher system performance than a fixed price system, performance comparable with a standard, but non-truthful scheduling heuristic, and the ability to support 50% more vehicles at the same fuel cost than a simple randomized policy.
Flocking in Fixed and Switching Networks This note analyzes the stability properties of a group of mobile agents that align their velocity vectors, and stabilize their inter-agent distances, using decentralized, nearest-neighbor interaction rules, exchanging information over networks that change arbitrarily (no dwell time between consecutive switches). These changes introduce discontinuities in the agent control laws. To accommodate for arbitrary switching in the topology of the network of agent interactions we employ nonsmooth analysis. The main result is that regardless of switching, convergence to a common velocity vector and stabilization of inter-agent distances is still guaranteed as long as the network remains connected at all times
A robust adaptive nonlinear control design An adaptive control design procedure for a class of nonlinear systems with both parametric uncertainty and unknown nonlinearities is presented. The unknown nonlinearities lie within some 'bounding functions', which are assumed to be partially known. The key assumption is that the uncertain terms satisfy a 'triangularity condition'. As illustrated by examples, the proposed design procedure expands the class of nonlinear systems for which global adaptive stabilization methods can be applied. The overall adaptive scheme is shown to guarantee global uniform ultimate boundedness.
On ear-based human identification in the mid-wave infrared spectrum In this paper the problem of human ear recognition in the Mid-wave infrared (MWIR) spectrum is studied in order to illustrate the advantages and limitations of the ear-based biometrics that can operate in day and night time environments. The main contributions of this work are two-fold: First, a dual-band database is assembled that consists of visible (baseline) and mid-wave IR left and right profile face images. Profile face images were collected using a high definition mid-wave IR camera that is capable of acquiring thermal imprints of human skin. Second, a fully automated, thermal imaging based, ear recognition system is proposed that is designed and developed to perform real-time human identification. The proposed system tests several feature extraction methods, namely: (i) intensity-based such as independent component analysis (ICA), principal component analysis (PCA), and linear discriminant analysis (LDA); (ii) shape-based such as scale invariant feature transform (SIFT); as well as (iii) texture-based such as local binary patterns (LBP), and local ternary patterns (LTP). Experimental results suggest that LTP (followed by LBP) yields the best performance (Rank1=80.68%) on manually segmented ears and (Rank1=68.18%) on ear images that are automatically detected and segmented. By fusing the matching scores obtained by LBP and LTP, the identification performance increases by about 5%. Although these results are promising, the outcomes of our study suggest that the design and development of automated ear-based recognition systems that can operate efficiently in the lower part of the passive IR spectrum are very challenging tasks.
Massive MIMO Antenna Selection: Switching Architectures, Capacity Bounds, and Optimal Antenna Selection Algorithms. Antenna selection is a multiple-input multiple-output (MIMO) technology, which uses radio frequency (RF) switches to select a good subset of antennas. Antenna selection can alleviate the requirement on the number of RF transceivers, thus being attractive for massive MIMO systems. In massive MIMO antenna selection systems, RF switching architectures need to be carefully considered. In this paper, w...
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.068889
0.073333
0.073333
0.073333
0.073333
0.073333
0.036667
0.004641
0
0
0
0
0
0
Blurred image region detection and classification Many digital images contain blurred regions which are caused by motion or defocus. Automatic detection and classification of blurred image regions are very important for different multimedia analyzing tasks. This paper presents a simple and effective automatic image blurred region detection and classification technique. In the proposed technique, blurred image regions are first detected by examining singular value information for each image pixel. The blur types (i.e. motion blur or defocus blur) are then determined based on a certain alpha channel constraint that requires neither image deblurring nor blur kernel estimation. Extensive experiments have been conducted over a dataset that consists of 200 blurred image regions and 200 image regions with no blur that are extracted from 100 digital images. Experimental results show that the proposed technique detects and classifies the two types of image blurs accurately. The proposed technique can be used in many different multimedia analysis applications such as image segmentation, depth estimation and information retrieval.
Forensics of image blurring and sharpening history based on NSCT domain Detection of multi-manipulated images has always been a realistic direction for digital image forensic technologies, and it has attracted considerable interest from researchers. However, the mutual effects of manipulations make it difficult to identify the processing history using existing single-manipulation detection methods. In this paper, a novel algorithm for detecting image manipulation history of blurring and sharpening is proposed based on the non-subsampled contourlet transform (NSCT) domain. Two main sets of features are extracted from the NSCT domain: an extremum feature and a local directional similarity vector. The extremum feature includes multiple maximums and minimums of NSCT coefficients through every scale. Under the influence of blurring or sharpening manipulation, the extremum feature tends to gain ideal discrimination. The directional similarity feature represents the correlation of a pixel and its neighbors, which can also be altered by blurring or sharpening. For one pixel, the directional vector is composed of the coefficients from every directional subband at a certain scale. The local directional similarity vector is obtained through similarity calculation between the directional vector of one randomly selected pixel and the directional vectors of its 8-neighborhood pixels. With the proposed features, we are able to detect the two particular operations and determine the processing order at the same time. Experimental results show that the proposed algorithm is effective and accurate.
Scalable Processing History Detector for JPEG Images.
Perceptual image hashing via dual-cross pattern encoding and salient structure detection. •A robust image hashing scheme based on texture and structure features is proposed.•Textural features are extracted from DCP-coded maps through histogram composition.•Structural features are extracted from sampled blocks with the richest corner points.•Final hash can be acquired after data compression for texture-structure features.•Our scheme has better performances of robustness and discrimination simultaneously.
Robust Median Filtering Forensics Using an Autoregressive Model In order to verify the authenticity of digital images, researchers have begun developing digital forensic techniques to identify image editing. One editing operation that has recently received increased attention is median filtering. While several median filtering detection techniques have recently been developed, their performance is degraded by JPEG compression. These techniques suffer similar degradations in performance when a small window of the image is analyzed, as is done in localized filtering or cut-and-paste detection, rather than the image as a whole. In this paper, we propose a new, robust median filtering forensic technique. It operates by analyzing the statistical properties of the median filter residual (MFR), which we define as the difference between an image in question and a median filtered version of itself. To capture the statistical properties of the MFR, we fit it to an autoregressive (AR) model. We then use the AR coefficients as features for median filter detection. We test the effectiveness of our proposed median filter detection techniques through a series of experiments. These results show that our proposed forensic technique can achieve important performance gains over existing methods, particularly at low false-positive rates, with a very small dimension of features.
Segmentation-Based Image Copy-Move Forgery Detection Scheme In this paper, we propose a scheme to detect the copy-move forgery in an image, mainly by extracting the keypoints for comparison. The main difference to the traditional methods is that the proposed scheme first segments the test image into semantically independent patches prior to keypoint extraction. As a result, the copy-move regions can be detected by matching between these patches. The matching process consists of two stages. In the first stage, we find the suspicious pairs of patches that may contain copy-move forgery regions, and we roughly estimate an affine transform matrix. In the second stage, an Expectation-Maximization-based algorithm is designed to refine the estimated matrix and to confirm the existence of copy-move forgery. Experimental results prove the good performance of the proposed scheme via comparing it with the state-of-the-art schemes on the public databases.
Passive detection of doctored JPEG image via block artifact grid extraction It has been noticed that the block artifact grids (BAG), caused by the blocking processing during JPEG compression, are usually mismatched when interpolating or concealing objects by copy-paste operations. In this paper, the BAGs are extracted blindly with a new extraction algorithm, and then abnormal BAGs can be detected with a marking procedure. Then the phenomenon of grid mismatch or grid blank can be taken as a trail of such forensics. Experimental results show that our method can mark these trails efficiently.
Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
No free lunch theorems for optimization A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of “no free lunch” (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class. These theorems result in a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem. Applications of the NFL theorems to information-theoretic aspects of optimization and benchmark measures of performance are also presented. Other issues addressed include time-varying optimization problems and a priori “head-to-head” minimax distinctions between optimization algorithms, distinctions that result despite the NFL theorems' enforcing of a type of uniformity over all algorithms
Simultaneous localization and mapping: part I The simultaneous localization and mapping (SLAM) problem asks if it is possible for a mobile robot to be placed at an unknown location in an unknown environment and for the robot to incrementally build a consistent map of this environment while simultaneously determining its location within this map. A solution to the SLAM problem has been seen as a "holy grail" for the mobile robotics community as it would provide the means to make a robot truly autonomous. The "solution" of the SLAM problem has been one of the notable successes of the robotics community over the past decade. SLAM has been formulated and solved as a theoretical problem in a number of different forms. SLAM has also been implemented in a number of different domains from indoor robots to outdoor, underwater, and airborne systems. At a theoretical and conceptual level, SLAM can now be considered a solved problem. However, substantial issues remain in practically realizing more general SLAM solutions and notably in building and using perceptually rich maps as part of a SLAM algorithm. This two-part tutorial and survey of SLAM aims to provide a broad introduction to this rapidly growing field. Part I (this article) begins by providing a brief history of early developments in SLAM. The formulation section introduces the structure of the SLAM problem in now-standard Bayesian form, and explains the evolution of the SLAM process. The solution section describes the two key computational solutions to the SLAM problem through the use of the extended Kalman filter (EKF-SLAM) and through the use of Rao-Blackwellized particle filters (FastSLAM). Other recent solutions to the SLAM problem are discussed in Part II of this tutorial. The application section describes a number of important real-world implementations of SLAM and also highlights implementations where the sensor data and software are freely downloadable for other researchers to study. Part II of this tutorial describes major issues in computation, convergence, and data association in SLAM. These are subjects that have been the main focus of the SLAM research community over the past five years.
RFID-based techniques for human-activity detection The iBracelet and the Wireless Identification and Sensing Platform promise the ability to infer human activity directly from sensor readings.
NETWRAP: An NDN Based Real-Time Wireless Recharging Framework for Wireless Sensor Networks Using vehicles equipped with wireless energy transmission technology to recharge sensor nodes over the air is a game-changer for traditional wireless sensor networks. The recharging policy regarding when to recharge which sensor nodes critically impacts the network performance. So far only a few works have studied such recharging policy for the case of using a single vehicle. In this paper, we propose NETWRAP, an NDN-based Real-Time Wireless Recharging Protocol for dynamic wireless recharging in sensor networks. The real-time recharging framework supports single or multiple mobile vehicles. Employing multiple mobile vehicles provides more scalability and robustness. To efficiently deliver sensor energy status information to vehicles in real time, we leverage concepts and mechanisms from named data networking (NDN) and design energy monitoring and reporting protocols. We derive theoretical results on the energy neutral condition and the minimum number of mobile vehicles required for perpetual network operations. Then we study how to minimize the total traveling cost of vehicles while guaranteeing that all the sensor nodes can be recharged before their batteries deplete. We formulate the recharge optimization problem into a Multiple Traveling Salesman Problem with Deadlines (m-TSP with Deadlines), which is NP-hard. To accommodate the dynamic nature of node energy conditions with low overhead, we present an algorithm that selects the node with the minimum weighted sum of traveling time and residual lifetime. Our scheme not only improves network scalability but also ensures the perpetual operation of networks. Extensive simulation results demonstrate the effectiveness and efficiency of the proposed design. The results also validate the correctness of the theoretical analysis and show significant improvements that cut the number of nonfunctional nodes by half compared to the static scheme while maintaining the network overhead at the same level.
Robust Sparse Linear Discriminant Analysis Linear discriminant analysis (LDA) is a very popular supervised feature extraction method and has been extended to different variants. However, classical LDA has the following problems: 1) The obtained discriminant projection does not have good interpretability for features. 2) LDA is sensitive to noise. 3) LDA is sensitive to the selection of the number of projection directions. In this paper, a novel feature extraction method called robust sparse linear discriminant analysis (RSLDA) is proposed to solve the above problems. Specifically, RSLDA adaptively selects the most discriminative features for discriminant analysis by introducing the l2,1 norm. An orthogonal matrix and a sparse matrix are also simultaneously introduced to guarantee that the extracted features can hold the main energy of the original data and enhance the robustness to noise, and thus RSLDA has the potential to perform better than other discriminant methods. Extensive experiments on six databases demonstrate that the proposed method achieves competitive performance compared with other state-of-the-art feature extraction methods. Moreover, the proposed method is robust to noisy data.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.109121
0.106667
0.106667
0.106667
0.053333
0.018236
0.003026
0.000044
0
0
0
0
0
0
Human-Cyber-Physical Systems: Concepts, Challenges, And Research Opportunities In this perspective article, we first recall the historic background of human-cyber-physical systems (HCPSs), and then introduce and clarify important concepts. We discuss the key challenges in establishing the scientific foundation from a system engineering point of view, including (1) complex heterogeneity, (2) lack of appropriate abstractions, (3) dynamic black-box integration of heterogeneous systems, (4) complex requirements for functionalities, performance, and quality of services, and (5) design, implementation, and maintenance of HCPS to meet requirements. Then we propose four research directions to tackle the challenges, including (1) abstractions and computational theory of HCPS, (2) theories and methods of HCPS architecture modelling, (3) specification and verification of model properties, and (4) software-defined HCPS. The article also serves as the editorial of this special section on cyber-physical systems and summarises the four articles included in this special section.
Analysing user physiological responses for affective video summarisation. Video summarisation techniques aim to abstract the most significant content from a video stream. This is typically achieved by processing low-level image, audio and text features which are still quite disparate from the high-level semantics that end users identify with (the ‘semantic gap’). Physiological responses are potentially rich indicators of memorable or emotionally engaging video content for a given user. Consequently, we investigate whether they may serve as a suitable basis for a video summarisation technique by analysing a range of user physiological response measures, specifically electro-dermal response (EDR), respiration amplitude (RA), respiration rate (RR), blood volume pulse (BVP) and heart rate (HR), in response to a range of video content in a variety of genres including horror, comedy, drama, sci-fi and action. We present an analysis framework for processing the user responses to specific sub-segments within a video stream based on percent rank value normalisation. The application of the analysis framework reveals that users respond significantly to the most entertaining video sub-segments in a range of content domains. Specifically, horror content seems to elicit significant EDR, RA, RR and BVP responses, and comedy content elicits comparatively lower levels of EDR, but does seem to elicit significant RA, RR, BVP and HR responses. Drama content seems to elicit less significant physiological responses in general, and both sci-fi and action content seem to elicit significant EDR responses. We discuss the implications this may have for future affective video summarisation approaches.
On the roles of eye gaze and head dynamics in predicting driver's intent to change lanes Driver behavioral cues may present a rich source of information and feedback for future intelligent advanced driver-assistance systems (ADASs). With the design of a simple and robust ADAS in mind, we are interested in determining the most important driver cues for distinguishing driver intent. Eye gaze may provide a more accurate proxy than head movement for determining driver attention, whereas the measurement of head motion is less cumbersome and more reliable in harsh driving conditions. We use a lane-change intent-prediction system (McCall et al., 2007) to determine the relative usefulness of each cue for determining intent. Various combinations of input data are presented to a discriminative classifier, which is trained to output a prediction of probable lane-change maneuver at a particular point in the future. Quantitative results from a naturalistic driving study are presented and show that head motion, when combined with lane position and vehicle dynamics, is a reliable cue for lane-change intent prediction. The addition of eye gaze does not improve performance as much as simpler head dynamics cues. The advantage of head data over eye data is shown to be statistically significant (p
Detection of Driver Fatigue Caused by Sleep Deprivation This paper aims to provide reliable indications of driver drowsiness based on the characteristics of driver-vehicle interaction. A test bed was built under a simulated driving environment, and a total of 12 subjects participated in two experiment sessions requiring different levels of sleep (partial sleep-deprivation versus no sleep-deprivation) before the experiment. The performance of the subjects was analyzed in a series of stimulus-response and routine driving tasks, which revealed the performance differences of drivers under different sleep-deprivation levels. The experiments further demonstrated that sleep deprivation had greater effect on rule-based than on skill-based cognitive functions: when drivers were sleep-deprived, their performance of responding to unexpected disturbances degraded, while they were robust enough to continue the routine driving tasks such as lane tracking, vehicle following, and lane changing. In addition, we presented both qualitative and quantitative guidelines for designing drowsy-driver detection systems in a probabilistic framework based on the paradigm of Bayesian networks. Temporal aspects of drowsiness and individual differences of subjects were addressed in the framework.
Online Prediction of Driver Distraction Based on Brain Activity Patterns This paper presents a new computational framework for early detection of driver distractions (map viewing) using brain activity measured by electroencephalographic (EEG) signals. Compared with most studies in the literature, which are mainly focused on the classification of distracted and nondistracted periods, this study proposes a new framework to prospectively predict the start and end of a distraction period, defined by map viewing. The proposed prediction algorithm was tested on a data set of continuous EEG signals recorded from 24 subjects. During the EEG recordings, the subjects were asked to drive from an initial position to a destination using a city map in a simulated driving environment. The overall accuracy values for the prediction of the start and the end of map viewing were 81% and 70%, respectively. The experimental results demonstrated that the proposed algorithm can predict the start and end of map viewing with relatively high accuracy and can be generalized to individual subjects. The outcome of this study has a high potential to improve the design of future intelligent navigation systems. Prediction of the start of map viewing can be used to provide route information based on a driver's needs and consequently avoid map-viewing activities. Prediction of the end of map viewing can be used to provide warnings for potential long map-viewing durations. Further development of the proposed framework and its applications in driver-distraction predictions are also discussed.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
3D separable convolutional neural network for dynamic hand gesture recognition.
• The Frame Difference method is used to pre-process the input in order to filter the background.
• A 3D separable CNN is proposed for dynamic gesture recognition. The standard 3D convolution process is decomposed into two processes: 3D depth-wise and 3D point-wise.
• By the application of skip connection and layer-wise learning rate, the undesirable gradient dispersion due to the separation operation is solved and the performance of the network is improved.
• A dynamic hand gesture library is built through HoloLens.
Deep convolutional neural network-based Bernoulli heatmap for head pose estimation Head pose estimation is a crucial problem for many tasks, such as driver attention, fatigue detection, and human behaviour analysis. It is well known that neural networks are better at handling classification problems than regression problems. It is an extremely nonlinear process to let the network output the angle value directly for optimization learning, and the weight constraint of the loss function will be relatively weak. This paper proposes a novel Bernoulli heatmap for head pose estimation from a single RGB image. Our method can achieve the positioning of the head area while estimating the angles of the head. The Bernoulli heatmap makes it possible to construct fully convolutional neural networks without fully connected layers and provides a new idea for the output form of head pose estimation. A deep convolutional neural network (CNN) structure with multiscale representations is adopted to maintain high-resolution information and low-resolution information in parallel. This kind of structure can maintain rich, high-resolution representations. In addition, channelwise fusion is adopted to make the fusion weights learnable instead of simple addition with equal weights. As a result, the estimation is spatially more precise and potentially more accurate. The effectiveness of the proposed method is empirically demonstrated by comparing it with other state-of-the-art methods on public datasets.
Reinforcement learning based data fusion method for multi-sensors In order to improve detection system robustness and reliability, multi-sensor fusion is used in modern air combat. In this paper, a data fusion method based on reinforcement learning is developed for multiple sensors. Initially, cubic B-spline interpolation is used to solve the time-alignment problem of multisource data. Then, the reinforcement learning based data fusion (RLBDF) method is proposed to obtain the fusion results. In the case that prior knowledge of the target is available, fusion accuracy is reinforced via the error between the fused value and the actual value. If prior knowledge cannot be obtained, the Fisher information is used as the reward instead. Simulation results verify that the developed method is feasible and effective for multi-sensor data fusion in air combat.
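The time-alignment step, resampling each sensor stream onto a common clock by spline interpolation, can be sketched as follows. This is a natural cubic interpolating spline in plain Python, used here as a stand-in for the cubic B-spline interpolation the abstract names; the knot times and values are purely illustrative.

```python
import bisect

def spline_coeffs(xs, ys):
    """Second derivatives of the natural cubic spline through (xs, ys)."""
    n = len(xs)
    y2, u = [0.0] * n, [0.0] * n
    for i in range(1, n - 1):
        sig = (xs[i] - xs[i - 1]) / (xs[i + 1] - xs[i - 1])
        p = sig * y2[i - 1] + 2.0
        y2[i] = (sig - 1.0) / p
        rhs = ((ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])
               - (ys[i] - ys[i - 1]) / (xs[i] - xs[i - 1]))
        u[i] = (6.0 * rhs / (xs[i + 1] - xs[i - 1]) - sig * u[i - 1]) / p
    for k in range(n - 2, -1, -1):  # back-substitution; natural ends stay 0
        y2[k] = y2[k] * y2[k + 1] + u[k]
    return y2

def spline_eval(xs, ys, y2, x):
    """Evaluate the spline at time x (x within [xs[0], xs[-1]])."""
    hi = min(max(bisect.bisect_right(xs, x), 1), len(xs) - 1)
    lo = hi - 1
    h = xs[hi] - xs[lo]
    a, b = (xs[hi] - x) / h, (x - xs[lo]) / h
    return (a * ys[lo] + b * ys[hi]
            + ((a ** 3 - a) * y2[lo] + (b ** 3 - b) * y2[hi]) * h * h / 6.0)

# Resample one sensor's readings onto a common time base (made-up data).
t_sensor = [0.0, 1.0, 2.5, 4.0]
v_sensor = [0.0, 1.0, 2.5, 4.0]  # a linear signal, easy to check by eye
y2 = spline_coeffs(t_sensor, v_sensor)
aligned = [spline_eval(t_sensor, v_sensor, y2, t) for t in (0.5, 1.5, 3.0)]
```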
Mobile Edge Computing Enabled 5G Health Monitoring for Internet of Medical Things: A Decentralized Game Theoretic Approach The prompt evolution of Internet of Medical Things (IoMT) promotes pervasive in-home health monitoring networks. However, excessive requirements of patients result in insufficient spectrum resources and communication overload. Mobile Edge Computing (MEC) enabled 5G health monitoring is conceived as a favorable paradigm to tackle such an obstacle. In this paper, we construct a cost-efficient in-home health monitoring system for IoMT by dividing it into two sub-networks, i.e., intra-Wireless Body Area Networks (WBANs) and beyond-WBANs. Highlighting the characteristics of IoMT, the cost of patients depends on medical criticality, Age of Information (AoI) and energy consumption. For intra-WBANs, a cooperative game is formulated to allocate the wireless channel resources. While for beyond-WBANs, considering the individual rationality and potential selfishness, a decentralized non-cooperative game is proposed to minimize the system-wide cost in IoMT. We prove that the proposed algorithm can reach a Nash equilibrium. In addition, the upper bound of the algorithm time complexity and the number of patients benefiting from MEC is theoretically derived. Performance evaluations demonstrate the effectiveness of our proposed algorithm with respect to the system-wide cost and the number of patients benefiting from MEC.
Artificial fish swarm algorithm: a survey of the state-of-the-art, hybridization, combinatorial and indicative applications AFSA (artificial fish-swarm algorithm) is one of the best optimization methods among the swarm intelligence algorithms. This algorithm is inspired by the collective movement of fish and their various social behaviors. Based on a series of instinctive behaviors, fish always try to maintain their colonies and accordingly demonstrate intelligent behaviors. Searching for food, immigration and dealing with dangers all happen in a social form, and interactions between all fish in a group result in intelligent social behavior. This algorithm has many advantages, including high convergence speed, flexibility, fault tolerance and high accuracy. This paper reviews the AFSA algorithm and describes its evolution, along with all improvements, its combination with various methods, and its applications. Many optimization methods have an affinity with this method, and combining them with it improves its performance. Its disadvantages include high time complexity, lack of balance between global and local search, and failure to benefit from the experiences of group members for subsequent movements.
Short-Term Traffic Flow Forecasting: An Experimental Comparison of Time-Series Analysis and Supervised Learning The literature on short-term traffic flow forecasting has undergone great development recently. Many works, describing a wide variety of different approaches, which very often share similar features and ideas, have been published. However, publications presenting new prediction algorithms usually employ different settings, data sets, and performance measurements, making it difficult to infer a clear picture of the advantages and limitations of each model. The aim of this paper is twofold. First, we review existing approaches to short-term traffic flow forecasting methods under the common view of probabilistic graphical models, presenting an extensive experimental comparison, which proposes a common baseline for their performance analysis and provides the infrastructure to operate on a publicly available data set. Second, we present two new support vector regression models, which are specifically devised to benefit from typical traffic flow seasonality and are shown to represent an interesting compromise between prediction accuracy and computational efficiency. The SARIMA model coupled with a Kalman filter is the most accurate model; however, the proposed seasonal support vector regressor turns out to be highly competitive when performing forecasts during the most congested periods.
TSCA: A Temporal-Spatial Real-Time Charging Scheduling Algorithm for On-Demand Architecture in Wireless Rechargeable Sensor Networks. The collaborative charging issue in Wireless Rechargeable Sensor Networks (WRSNs) is a popular research problem. With the help of wireless power transfer technology, electrical energy can be transferred from wireless charging vehicles (WCVs) to sensors, providing a new paradigm to prolong network lifetime. Existing collaborative charging techniques usually take a periodic, deterministic approach, neglecting the influence of non-deterministic factors such as topological changes and node failures, which makes them unsuitable for large-scale WRSNs. In this paper, we develop a temporal-spatial charging scheduling algorithm, namely TSCA, for the on-demand charging architecture. We aim to minimize the number of dead nodes while maximizing energy efficiency to prolong network lifetime. First, after gathering charging requests, a WCV computes a feasible movement solution. A basic path planning algorithm is then introduced to adjust the charging order for better efficiency. Furthermore, optimizations are made at a global level: a node deletion algorithm is developed to remove low-efficiency charging nodes, and a node insertion algorithm is executed to avoid the death of abandoned nodes. Extensive simulations show that, compared with state-of-the-art charging scheduling algorithms, our scheme achieves promising performance in charging throughput, charging efficiency, and other metrics.
A novel adaptive dynamic programming based on tracking error for nonlinear discrete-time systems In this paper, to eliminate the tracking error when using adaptive dynamic programming (ADP) algorithms, a novel formulation of the value function is presented for the optimal tracking problem (TP) of nonlinear discrete-time systems. Unlike existing ADP methods, this formulation introduces the control input into the tracking error and directly ignores the quadratic form of the control input, which makes the boundedness and convergence of the value function independent of the discount factor. Based on the proposed value function, the optimal control policy can be deduced without considering the reference control input. Value iteration (VI) and policy iteration (PI) methods are applied to prove the optimality of the obtained control policy and to derive the monotonicity property and convergence of the iterative value function. Simulation examples realized with neural networks and the actor-critic structure are provided to verify the effectiveness of the proposed ADP algorithm.
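The value-iteration machinery underlying such ADP schemes is easy to demonstrate on the simplest possible case. The sketch below runs value iteration for a scalar discrete-time LQR problem; it is a generic textbook illustration, not the tracking-error formulation of the abstract, and the plant parameters are made up.

```python
def lqr_value_iteration(a, b, q, r, iters=200):
    """Iterate the scalar discrete-time Riccati recursion
    p <- q + a^2 p - (a b p)^2 / (r + b^2 p)
    starting from p = 0; value iteration converges to the fixed point p*.
    """
    p = 0.0
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    k = a * b * p / (r + b * b * p)  # optimal state feedback u = -k x
    return p, k

# An unstable scalar plant x_{k+1} = 1.1 x_k + u_k with costs q = r = 1.
p, k = lqr_value_iteration(a=1.1, b=1.0, q=1.0, r=1.0)
```

The returned gain stabilizes the plant: the closed-loop pole a - b*k lies inside the unit circle even though a = 1.1 does not.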
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0
A 3D CNN-LSTM-Based Image-to-Image Foreground Segmentation The video-based separation of foreground (FG) and background (BG) has been widely studied due to its vital role in many applications, including intelligent transportation and video surveillance. Most of the existing algorithms are based on traditional computer vision techniques that perform pixel-level processing assuming that FG and BG possess distinct visual characteristics. Recently, state-of-the-art solutions exploit deep learning models targeted originally for image classification. Major drawbacks of such a strategy are the lacking delineation of FG regions due to missing temporal information as they segment the FG based on a single frame object detection strategy. To grapple with this issue, we excogitate a 3D convolutional neural network (3D CNN) with long short-term memory (LSTM) pipelines that harness seminal ideas, viz., fully convolutional networking, 3D transpose convolution, and residual feature flows. Thence, an FG-BG segmenter is implemented in an encoder-decoder fashion and trained on representative FG-BG segments. The model devises a strategy called double encoding and slow decoding, which fuses the learned spatio-temporal cues with appropriate feature maps both in the down-sampling and up-sampling paths for achieving well generalized FG object representation. Finally, from the Sigmoid confidence map generated by the 3D CNN-LSTM model, the FG is identified automatically by using Nobuyuki Otsu’s method and an empirical global threshold. The analysis of experimental results via standard quantitative metrics on 16 benchmark datasets including both indoor and outdoor scenes validates that the proposed 3D CNN-LSTM achieves competitive performance in terms of figure of merit evaluated against prior and state-of-the-art methods. Besides, a failure analysis is conducted on 20 video sequences from the DAVIS 2016 dataset.
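The final thresholding step names Nobuyuki Otsu's method. A compact version of that classic algorithm is shown below in its standard histogram form; binning the sigmoid confidence map into integer bins is my own illustrative choice, not a detail from the paper.

```python
def otsu_threshold(hist):
    """Return the bin index that maximises between-class variance.

    `hist[i]` is the count of confidence values falling in bin i; bins at
    or below the returned index form the background (BG) class.
    """
    total = sum(hist)
    grand_sum = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0.0
    best_t, best_var = 0, -1.0
    for t, h in enumerate(hist):
        w_bg += h
        sum_bg += t * h
        w_fg = total - w_bg
        if w_bg == 0 or w_fg == 0:
            continue  # a class is empty; variance is undefined here
        m_bg, m_fg = sum_bg / w_bg, (grand_sum - sum_bg) / w_fg
        var = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# A clearly bimodal histogram: low-confidence BG bins vs high-confidence FG bins.
hist = [5, 5, 0, 0, 0, 0, 5, 5]
```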
Estimation of prediction error by using K-fold cross-validation Estimation of prediction accuracy is important when our aim is prediction. The training error is an easy estimate of prediction error, but it has a downward bias. On the other hand, K-fold cross-validation has an upward bias. The upward bias may be negligible in leave-one-out cross-validation, but it sometimes cannot be neglected in 5-fold or 10-fold cross-validation, which are favored from a computational standpoint. Since the training error has a downward bias and K-fold cross-validation has an upward bias, there will be an appropriate estimate in a family that connects the two estimates. In this paper, we investigate two families that connect the training error and K-fold cross-validation.
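The two biases the abstract contrasts are easy to reproduce. The sketch below uses a deliberately trivial model (predict the training-set mean) on made-up data: the training error lands below the 2-fold cross-validation estimate, illustrating the downward and upward biases respectively.

```python
def fit_mean(ys):
    """A trivially simple model: always predict the training-set mean."""
    m = sum(ys) / len(ys)
    return lambda _x: m

def sq_loss(pred, y):
    return (pred - y) ** 2

def kfold_cv_error(xs, ys, k):
    """Average held-out squared error over K folds (round-robin split)."""
    n = len(xs)
    errs = []
    for f in range(k):
        test_idx = set(range(f, n, k))
        train_ys = [ys[i] for i in range(n) if i not in test_idx]
        model = fit_mean(train_ys)
        errs += [sq_loss(model(xs[i]), ys[i]) for i in test_idx]
    return sum(errs) / n

xs = [0, 1, 2, 3]
ys = [1.0, 2.0, 3.0, 4.0]
model = fit_mean(ys)
train_err = sum(sq_loss(model(x), y) for x, y in zip(xs, ys)) / len(ys)
cv_err = kfold_cv_error(xs, ys, k=2)
# train_err is biased downward, cv_err upward; the truth lies in between.
```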
Traffic Flow Forecasting for Urban Work Zones None of the numerous existing traffic flow forecasting models focuses on work zones. Work zone events create conditions that differ from both normal operating conditions and incident conditions. In this paper, four models were developed for forecasting traffic flow for planned work zone events: random forest, regression tree, multilayer feedforward neural network, and nonparametric regression. Both long-term and short-term traffic flow forecasting applications were investigated. Long-term forecasting involves forecasting 24 h in advance using historical traffic data, and short-term forecasting involves forecasting 1 h and 45, 30, and 15 min in advance using real-time temporal and spatial traffic data. Models were evaluated using data from work zone events on two types of roadways, a freeway, i.e., I-270, and a signalized arterial, i.e., MO-141, in St. Louis, MO, USA. The results showed that the random forest model yielded the most accurate long-term and short-term work zone traffic flow forecasts. For freeway data, the most influential variables were the latest interval's look-back traffic flows at the upstream, downstream, and current locations. For arterial data, the most influential variables were the traffic flows from the three look-back intervals at the current location only.
Daily long-term traffic flow forecasting based on a deep neural network.
• A new deep learning algorithm to predict daily long-term traffic flow data using contextual factors.
• A deep neural network to mine the relationship between traffic flow data and contextual factors.
• Advanced batch training can effectively improve convergence of the training process.
Cooperative Traffic Signal Control with Traffic Flow Prediction in Multi-Intersection. As traffic congestion in cities becomes serious, intelligent traffic signal control has been actively studied. Deep Q-Network (DQN), a representative deep reinforcement learning algorithm, has been applied to various domains, from fully observable game environments to traffic signal control. Owing to the effective performance of DQN, deep reinforcement learning has advanced rapidly, and various DQN extensions have been introduced. However, most traffic signal control research has been performed at a single intersection, and because of the use of virtual simulators, it has not taken into account variables that affect actual traffic conditions. In this paper, we propose cooperative traffic signal control with traffic flow prediction (TFP-CTSC) for a multi-intersection. A traffic flow prediction model predicts the future traffic state and considers the variables that affect actual traffic conditions. In addition, for cooperative traffic signal control in a multi-intersection, each intersection is modeled as an agent, and each agent is trained to take the best action by receiving traffic states from the road environment. To deal with a multi-intersection efficiently, agents share their traffic information with adjacent intersections. In the experiment, TFP-CTSC is compared with existing traffic signal control algorithms in a 4 x 4 intersection environment. We verify our traffic flow prediction and cooperative method.
Improving Traffic Flow Prediction With Weather Information in Connected Cars: A Deep Learning Approach. Transportation systems might be heavily affected by factors such as accidents and weather. Specifically, inclement weather conditions may have a drastic impact on travel time and traffic flow. This study has two objectives: first, to investigate a correlation between weather parameters and traffic flow and, second, to improve traffic flow prediction by proposing a novel holistic architecture. It i...
Accurate Self-Localization in RFID Tag Information Grids Using FIR Filtering Grid navigation spaces nested with the radio-frequency identification (RFID) tags are promising for industrial and other needs, because each tag can deliver information about a local two-dimensional or three-dimensional surrounding. The approach, however, requires high accuracy in vehicle self-localization. Otherwise, errors may lead to collisions; possibly even fatal. We propose a new extended finite impulse response (EFIR) filtering algorithm and show that it meets this need. The EFIR filter requires an optimal averaging interval, but does not involve the noise statistics which are often not well known to the engineer. It is more accurate than the extended Kalman filter (EKF) under real operation conditions and its iterative algorithm has the Kalman form. Better performance of the proposed EFIR filter is demonstrated based on extensive simulations in a comparison to EKF, which is widely used in RFID tag grids. We also show that errors in noise covariances may provoke divergence in EKF, whereas the EFIR filter remains stable and is thus more robust.
Evolutionary computation: comments on the history and current state Evolutionary computation has started to receive significant attention during the last decade, although the origins can be traced back to the late 1950s. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) (with links to genetic programming (GP) and classifier systems (CS)), evolution strategies (ES), and evolutionary programming (EP) by analysis and comparison of their most important constituents (i.e. representations, variation operators, reproduction, and selection mechanism). Finally, we give a brief overview on the manifold of application domains, although this necessarily must remain incomplete.
Supporting social navigation on the World Wide Web This paper discusses a navigation behavior on Internet information services, in particular the World Wide Web, which is characterized by pointing out of information using various communication tools. We call this behavior social navigation as it is based on communication and interaction with other users, be that through email, or any other means of communication. Social navigation phenomena are quite common although most current tools (like Web browsers or email clients) offer very little support for it. We describe why social navigation is useful and how it can be better supported in future systems. We further describe two prototype systems that, although originally not designed explicitly as tools for social navigation, provide features that are typical for social navigation systems. One of these systems, the Juggler system, is a combination of a textual virtual environment and a Web client. The other system is a prototype of a Web- hotlist organizer, called Vortex. We use both systems to describe fundamental principles of social navigation systems.
Proofs of Storage from Homomorphic Identification Protocols Proofs of storage (PoS) are interactive protocols allowing a client to verify that a server faithfully stores a file. Previous work has shown that proofs of storage can be constructed from any homomorphic linear authenticator (HLA). The latter, roughly speaking, are signature/message authentication schemes where `tags' on multiple messages can be homomorphically combined to yield a `tag' on any linear combination of these messages. We provide a framework for building public-key HLAs from any identification protocol satisfying certain homomorphic properties. We then show how to turn any public-key HLA into a publicly-verifiable PoS with communication complexity independent of the file length and supporting an unbounded number of verifications. We illustrate the use of our transformations by applying them to a variant of an identification protocol by Shoup, thus obtaining the first unbounded-use PoS based on factoring (in the random oracle model).
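To make the "tags combine linearly" idea concrete, here is a toy symmetric-key homomorphic linear authenticator in Python: a PRF-masked linear tag over a prime field. This is a deliberately simplified illustration of the HLA property, not the public-key factoring-based construction of the paper, and all names and parameters are made up.

```python
import hashlib
import secrets

P = 2 ** 127 - 1  # a Mersenne prime modulus; toy parameter choice

def keygen():
    return {"alpha": secrets.randbelow(P), "kprf": secrets.token_bytes(16)}

def _prf(key, i):
    return int.from_bytes(hashlib.sha256(key + i.to_bytes(8, "big")).digest(), "big") % P

def tag(sk, i, m):
    # t_i = alpha * m_i + PRF_k(i)  (mod P): linear in m_i, so tags on
    # several blocks combine into a tag on any linear combination of them.
    return (sk["alpha"] * m + _prf(sk["kprf"], i)) % P

def combine(coeffs, tags):
    """Server side: aggregate tags for the challenged linear combination."""
    return sum(c * t for c, t in zip(coeffs, tags)) % P

def verify(sk, indices, coeffs, mu, t):
    """Client side: mu claims to be sum(c_i * m_i) mod P; recheck the tag."""
    masks = sum(c * _prf(sk["kprf"], i) for c, i in zip(coeffs, indices))
    return t == (sk["alpha"] * mu + masks) % P

sk = keygen()
blocks = [5, 7, 11]                      # the stored file blocks
tags = [tag(sk, i, m) for i, m in enumerate(blocks)]
coeffs = [2, 3, 1]                       # the verifier's random challenge
mu = sum(c * m for c, m in zip(coeffs, blocks)) % P
proof = combine(coeffs, tags)
```

The communication is independent of the file length: the server returns only (mu, proof) regardless of how many blocks are stored.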
Well-Solvable Special Cases of the Traveling Salesman Problem: A Survey. The traveling salesman problem (TSP) belongs to the most basic, most important, and most investigated problems in combinatorial optimization. Although it is an NP-hard problem, many of its special cases can be solved efficiently in polynomial time. We survey these special cases with emphasis on the results that have been obtained during the decade 1985--1995. This survey complements an earlier survey from 1985 compiled by Gilmore, Lawler, and Shmoys [The Traveling Salesman Problem---A Guided Tour of Combinatorial Optimization, Wiley, Chichester, pp. 87--143].
Reinforcement Q-learning for optimal tracking control of linear discrete-time systems with unknown dynamics. In this paper, a novel approach based on the Q-learning algorithm is proposed to solve the infinite-horizon linear quadratic tracker (LQT) for unknown discrete-time systems in a causal manner. It is assumed that the reference trajectory is generated by a linear command generator system. An augmented system composed of the original system and the command generator is constructed and it is shown that the value function for the LQT is quadratic in terms of the state of the augmented system. Using the quadratic structure of the value function, a Bellman equation and an augmented algebraic Riccati equation (ARE) for solving the LQT are derived. In contrast to the standard solution of the LQT, which requires the solution of an ARE and a noncausal difference equation simultaneously, in the proposed method the optimal control input is obtained by only solving an augmented ARE. A Q-learning algorithm is developed to solve online the augmented ARE without any knowledge about the system dynamics or the command generator. Convergence to the optimal solution is shown. A simulation example is used to verify the effectiveness of the proposed control scheme.
An indoor localization solution using Bluetooth RSSI and multiple sensors on a smartphone. In this paper, we propose an indoor positioning system using a Bluetooth receiver, an accelerometer, a magnetic field sensor, and a barometer on a smartphone. The Bluetooth receiver is used to estimate distances from beacons. The accelerometer and magnetic field sensor are used to trace the movement of moving people in the given space. The horizontal location of the person is determined by received signal strength indications (RSSIs) and the traced movement. The barometer is used to measure the vertical position where a person is located. By combining RSSIs, the traced movement, and the vertical position, the proposed system estimates the indoor position of moving people. In experiments, the proposed approach showed excellent performance in localization with an overall error of 4.8%.
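The RSSI-to-distance step is commonly modelled with a log-distance path-loss formula. The sketch below illustrates that model; the 1 m reference power and path-loss exponent are typical assumed values for BLE beacons, not figures taken from the paper.

```python
def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Invert the log-distance path-loss model
    rssi = tx_power - 10 * n * log10(d)  =>  d = 10 ** ((tx_power - rssi) / (10 n)).

    tx_power is the expected RSSI (dBm) at 1 m; n is the path-loss
    exponent (~2 in free space, larger indoors).
    """
    return 10.0 ** ((tx_power - rssi) / (10.0 * n))

# A beacon heard at its 1 m reference power is ~1 m away; a reading
# 20 dB weaker with n = 2 corresponds to roughly 10 m.
d_near = rssi_to_distance(-59.0)
d_far = rssi_to_distance(-79.0)
```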
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.05, 0, 0, 0, 0, 0, 0, 0, 0
Geometric deep learning: going beyond Euclidean data. Geometric deep learning is an umbrella term for emerging techniques attempting to generalize (structured) deep neural models to non-Euclidean domains, such as graphs and manifolds. The purpose of this article is to overview different examples of geometric deep-learning problems and present available solutions, key difficulties, applications, and future research directions in this nascent field.
Knowledge harvesting in the big-data era The proliferation of knowledge-sharing communities such as Wikipedia and the progress in scalable information extraction from Web and text sources have enabled the automatic construction of very large knowledge bases. Endeavors of this kind include projects such as DBpedia, Freebase, KnowItAll, ReadTheWeb, and YAGO. These projects provide automatically constructed knowledge bases of facts about named entities, their semantic classes, and their mutual relationships. They contain millions of entities and hundreds of millions of facts about them. Such world knowledge in turn enables cognitive applications and knowledge-centric services like disambiguating natural-language text, semantic search for entities and relations in Web and enterprise data, and entity-oriented analytics over unstructured contents. Prominent examples of how knowledge bases can be harnessed include the Google Knowledge Graph and the IBM Watson question answering system. This tutorial presents state-of-the-art methods, recent advances, research opportunities, and open challenges along this avenue of knowledge harvesting and its applications. Particular emphasis will be on the twofold role of knowledge bases for big-data analytics: using scalable distributed algorithms for harvesting knowledge from Web and text sources, and leveraging entity-centric knowledge for deeper interpretation of and better intelligence with Big Data.
Reservoir computing approaches to recurrent neural network training Echo State Networks and Liquid State Machines introduced a new paradigm in artificial recurrent neural network (RNN) training, where an RNN (the reservoir) is generated randomly and only a readout is trained. The paradigm, becoming known as reservoir computing, greatly facilitated the practical application of RNNs and outperformed classical fully trained RNNs in many tasks. It has lately become a vivid research field with numerous extensions of the basic idea, including reservoir adaptation, thus broadening the initial paradigm to using different methods for training the reservoir and the readout. This review systematically surveys both current ways of generating/adapting the reservoirs and training different types of readouts. It offers a natural conceptual classification of the techniques, which transcends boundaries of the current “brand-names” of reservoir methods, and thus aims to help in unifying the field and providing the reader with a detailed “map” of it.
Comment on "On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes" Comparison of generative and discriminative classifiers is an everlasting topic. As an important contribution to this topic, based on their theoretical and empirical comparisons between the naïve Bayes classifier and linear logistic regression, Ng and Jordan (NIPS, pp. 841-848, 2001) claimed that there exist two distinct regimes of performance between the generative and discriminative classifiers with regard to the training-set size. In this paper, our empirical and simulation studies, as a complement of their work, however, suggest that the existence of the two distinct regimes may not be so reliable. In addition, for real-world datasets, so far there is no theoretically correct, general criterion for choosing between the discriminative and the generative approaches to classification of an observation x into a class y; the choice depends on the relative confidence we have in the correctness of the specification of either p(y|x) or p(x, y) for the data. This can be to some extent a demonstration of why Efron (J Am Stat Assoc 70(352):892-898, 1975) and O'Neill (J Am Stat Assoc 75(369):154-160, 1980) prefer normal-based linear discriminant analysis (LDA) when no model mis-specification occurs but other empirical studies may prefer linear logistic regression instead. Furthermore, we suggest that pairing of either LDA assuming a common diagonal covariance matrix or the naïve Bayes classifier with linear logistic regression may not be perfect, and hence it may not be reliable for any claim that was derived from the comparison between LDA with a common diagonal covariance matrix or the naïve Bayes classifier and linear logistic regression to be generalised to all generative and discriminative classifiers.
Dest-ResNet: A Deep Spatiotemporal Residual Network for Hotspot Traffic Speed Prediction. With the ever-increasing urbanization process, the traffic jam has become a common problem in the metropolises around the world, making the traffic speed prediction a crucial and fundamental task. This task is difficult due to the dynamic and intrinsic complexity of the traffic environment in urban cities, yet the emergence of crowd map query data sheds new light on it. In general, a burst of crowd map queries for the same destination in a short duration (called "hotspot'') could lead to traffic congestion. For example, queries of the Capital Gym burst on weekend evenings lead to traffic jams around the gym. However, unleashing the power of crowd map queries is challenging due to the innate spatiotemporal characteristics of the crowd queries. To bridge the gap, this paper firstly discovers hotspots underlying crowd map queries. These discovered hotspots address the spatiotemporal variations. Then Dest-ResNet (Deep spatiotemporal Residual Network) is proposed for hotspot traffic speed prediction. Dest-ResNet is a sequence learning framework that jointly deals with two sequences in different modalities, i.e., the traffic speed sequence and the query sequence. The main idea of Dest-ResNet is to learn to explain and amend the errors caused when the unimodal information is applied individually. In this way, Dest-ResNet addresses the temporal causal correlation between queries and the traffic speed. As a result, Dest-ResNet shows a 30% relative boost over the state-of-the-art methods on real-world datasets from Baidu Map.
Deep Autoencoder Neural Networks for Short-Term Traffic Congestion Prediction of Transportation Networks. Traffic congestion prediction is critical for implementing intelligent transportation systems and for improving the efficiency and capacity of transportation networks. However, despite its importance, traffic congestion prediction is far less investigated than traffic flow prediction, which is partially due to the severe lack of large-scale high-quality traffic congestion data and advanced algorithms. This paper proposes an accessible and general workflow to acquire large-scale traffic congestion data and to create traffic congestion datasets based on image analysis. With this workflow we create a dataset named Seattle Area Traffic Congestion Status (SATCS) based on traffic congestion map snapshots from a publicly available online traffic service provider, the Washington State Department of Transportation. We then propose a deep autoencoder-based neural network model with symmetrical layers for the encoder and the decoder to learn temporal correlations of a transportation network and predict traffic congestion. Our experimental results on the SATCS dataset show that the proposed DCPN model can efficiently and effectively learn temporal relationships of congestion levels of the transportation network for traffic congestion forecasting. Our method outperforms two other state-of-the-art neural network models in prediction performance, generalization capability, and computation efficiency.
A survey on machine learning for data fusion. • We sum up a group of main challenges that data fusion might face. • We propose a thorough list of requirements to evaluate data fusion methods. • We review the literature of data fusion based on machine learning. • We comment on how a machine learning method can ameliorate fusion performance. • We present significant open issues and valuable future research directions.
Flow Prediction in Spatio-Temporal Networks Based on Multitask Deep Learning. Predicting flows (e.g., the traffic of vehicles, crowds, and bikes), consisting of the in-out traffic at a node and transitions between different nodes, in a spatio-temporal network plays an important role in transportation systems. However, this is a very challenging problem, affected by multiple complex factors, such as the spatial correlation between different locations, temporal correlation am...
An online mechanism for multi-unit demand and its application to plug-in hybrid electric vehicle charging We develop an online mechanism for the allocation of an expiring resource to a dynamic agent population. Each agent has a non-increasing marginal valuation function for the resource, and an upper limit on the number of units that can be allocated in any period. We propose two versions of a truthful allocation mechanism. Each modifies the decisions of a greedy online assignment algorithm by sometimes cancelling an allocation of resources. One version makes this modification immediately upon an allocation decision while a second waits until the point at which an agent departs the market. Adopting a prior-free framework, we show that the second approach has better worst-case allocative efficiency and is more scalable. On the other hand, the first approach (with immediate cancellation) may be easier in practice because it does not need to reclaim units previously allocated. We consider an application to recharging plug-in hybrid electric vehicles (PHEVs). Using data from a real-world trial of PHEVs in the UK, we demonstrate higher system performance than a fixed price system, performance comparable with a standard, but non-truthful scheduling heuristic, and the ability to support 50% more vehicles at the same fuel cost than a simple randomized policy.
Lambertian Reflectance and Linear Subspaces We prove that the set of all Lambertian reflectance functions (the mapping from surface normals to intensities) obtained with arbitrary distant light sources lies close to a 9D linear subspace. This implies that, in general, the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace, explaining prior empirical results. We also provide a simple analytic characterization of this linear space. We obtain these results by representing lighting using spherical harmonics and describing the effects of Lambertian materials as the analog of a convolution. These results allow us to construct algorithms for object recognition based on linear methods as well as algorithms that use convex optimization to enforce nonnegative lighting functions. We also show a simple way to enforce nonnegative lighting when the images of an object lie near a 4D linear space. We apply these algorithms to perform face recognition by finding the 3D model that best matches a 2D query image.
A Machine Learning Approach to Ranging Error Mitigation for UWB Localization. Location-awareness is becoming increasingly important in wireless networks. Indoor localization can be enabled through wideband or ultra-wide bandwidth (UWB) transmission, due to its fine delay resolution and obstacle-penetration capabilities. A major hurdle is the presence of obstacles that block the line-of-sight (LOS) path between devices, affecting ranging performance and, in turn, localizatio...
Stochastic Power Adaptation with Multiagent Reinforcement Learning for Cognitive Wireless Mesh Networks As the scarce spectrum resource is becoming overcrowded, cognitive radio offers great flexibility to improve spectrum efficiency by opportunistically accessing the authorized frequency bands. One of the critical challenges for operating such radios in a network is how to efficiently allocate transmission powers and frequency resources among the secondary users (SUs) while satisfying the quality-of-service constraints of the primary users. In this paper, we focus on the noncooperative power allocation problem in cognitive wireless mesh networks formed by a number of clusters, with consideration of energy efficiency. Due to the SUs' dynamic and spontaneous properties, the problem is modeled as a stochastic learning process. We first extend single-agent Q-learning to a multiuser context, and then propose a conjecture-based multiagent Q-learning algorithm to achieve the optimal transmission strategies with only private and incomplete information. An intelligent SU performs Q-function updates based on a conjecture over the other SUs' stochastic behaviors. This learning algorithm provably converges given certain restrictions that arise during the learning procedure. Simulation experiments are used to verify the performance of our algorithm and demonstrate its effectiveness in improving energy efficiency.
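A minimal stateless Q-learning loop gives the flavor of the learning process described above. The reward model and power levels below are invented for illustration; the paper's algorithm is a conjecture-based multiagent extension of this basic update, not this single-agent sketch.

```python
import numpy as np

# Toy stateless Q-learning sketch of transmit-power selection (illustrative
# reward model, not the paper's conjecture-based multiagent algorithm).
rng = np.random.default_rng(1)
powers = [1.0, 2.0, 4.0]     # candidate transmit power levels (assumed)
Q = np.zeros(len(powers))    # one Q-value per action (stateless bandit view)
alpha, eps = 0.1, 0.2        # learning rate and exploration probability

def reward(p):
    # Energy-efficiency-style reward: throughput grows like log(1 + p),
    # but it is normalized by the power spent.
    return np.log1p(p) / p

for _ in range(2000):
    # epsilon-greedy action selection
    a = rng.integers(len(powers)) if rng.random() < eps else int(np.argmax(Q))
    r = reward(powers[a]) + rng.normal(0, 0.01)  # noisy observed reward
    Q[a] += alpha * (r - Q[a])                   # Q-value update

print(powers[int(np.argmax(Q))])
```

Under this efficiency metric the lowest power level has the highest reward, so the learned Q-values come to favor it; a multiagent version would additionally model how the other users' choices shift each agent's reward.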
Dynamic Fully Homomorphic encryption-based Merkle Tree for lightweight streaming authenticated data structures. Fully Homomorphic encryption-based Merkle Tree (FHMT) is a novel technique for streaming authenticated data structures (SADS) to achieve streaming verifiable computation. By leveraging the computing capability of fully homomorphic encryption, FHMT shifts almost all of the computation tasks to the server, leaving nearly no overhead for the client. Therefore, FHMT is an important technique for constructing a more efficient lightweight ADS for resource-limited clients. But the typical FHMT cannot support the dynamic scenario very well because it cannot expand freely, since its height is fixed. We now present our fully dynamic FHMT construction (DFHMT), which is able to authenticate an unbounded number of data elements and improves upon the state-of-the-art in terms of computational overhead. We divided the algorithms of the DFHMT into the following phases: initialization, insertion, tree expansion, query and verification. The DFHMT removes the drawbacks of the static FHMT. In the initialization phase, the scale of the tree is not required to be determined, and the scale of the tree can be adaptively expanded during the data-appending phase. This feature is more suitable for streaming data environments. We analyzed the security of the DFHMT, and point out that DFHMT has the same security as FHMT. The storage, communication and computation overhead of DFHMT is also analyzed; the results show that the client uses simple numerical multiplications and additions to replace hash operations, which reduces the computational burden of the client, and that the length of the authentication path in DFHMT is shorter than in FHMT, which reduces storage and communication overhead. The performance of DFHMT was compared with other construction techniques of SADS via some tests; the results show that DFHMT strikes a performance balance between the client and server, which gives it some performance advantage for lightweight devices.
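For intuition about the Merkle-tree machinery that FHMT builds on, here is a plain (non-homomorphic) Merkle tree sketch with authentication-path verification. It illustrates the base data structure only, not the FHE-based construction from the paper.

```python
import hashlib

# Plain Merkle tree sketch: the root commits to all leaves, and an
# authentication path (sibling hashes from leaf to root) proves membership.
def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    """Return all tree levels, leaf hashes first, root last (pads odd levels)."""
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:
            cur = cur + [cur[-1]]  # duplicate last node on odd-sized levels
        levels.append([h(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def auth_path(levels, index):
    """Sibling hashes from leaf to root for the leaf at `index`."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sib = index ^ 1
        path.append((level[sib], sib < index))  # (hash, sibling-is-left flag)
        index //= 2
    return path

def verify(leaf, path, root):
    node = h(leaf)
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

data = [b"a", b"b", b"c", b"d"]
levels = build_levels(data)
root = levels[-1][0]
print(verify(b"c", auth_path(levels, 2), root))  # True
```

The authentication path has one sibling per level, so its length grows logarithmically in the number of leaves; shortening this path is exactly the storage/communication saving DFHMT claims over static FHMT.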
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores: 1.11, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.025, 0, 0, 0, 0, 0, 0
Performance Evaluation of Vehicle-Based Mobile Sensor Networks for Traffic Monitoring Vehicle-based sensors can be used for traffic monitoring. These sensors are usually set with long sampling intervals to save communication costs and to avoid network congestion. In this paper, we are interested in understanding the traffic-monitoring performance that we can expect from such vehicle-based mobile sensor networks, despite the incomplete information provided. This is a fundamental pro...
Traveling Salesman Problems with Profits Traveling salesman problems with profits (TSPs with profits) are a generalization of the traveling salesman problem (TSP), where it is not necessary to visit all vertices. A profit is associated with each vertex. The overall goal is the simultaneous optimization of the collected profit and the travel costs. These two optimization criteria appear either in the objective function or as a constraint. In this paper, a classification of TSPs with profits is proposed, and the existing literature is surveyed. Different classes of applications, modeling approaches, and exact or heuristic solution techniques are identified and compared. Conclusions emphasize the interest of this class of problems, with respect to applications as well as theoretical results.
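The profit-versus-travel-cost trade-off surveyed above can be made concrete with a tiny brute-force instance of the budget-constrained ("orienteering"-style) variant: collect maximum profit while the tour cost stays within a budget. The graph, profits, and budget below are invented toy data, and exhaustive search is used only because the instance is tiny.

```python
import itertools

# Toy TSP-with-profits instance: vertex 0 is the depot; visiting vertex v
# earns profit[v]; the tour must start and end at the depot within a cost
# budget. Brute force over subsets and orders (fine for 3 vertices).
profit = {1: 4, 2: 6, 3: 5}
dist = {(0, 1): 2, (0, 2): 3, (0, 3): 4, (1, 2): 2, (1, 3): 3, (2, 3): 2}

def d(a, b):
    return 0 if a == b else dist[(min(a, b), max(a, b))]

def tour_cost(order):
    stops = [0] + list(order) + [0]  # start and end at the depot
    return sum(d(a, b) for a, b in zip(stops, stops[1:]))

def best_tour(budget):
    best = (0, ())  # (collected profit, visiting order)
    for r in range(len(profit) + 1):
        for subset in itertools.combinations(profit, r):
            for order in itertools.permutations(subset):
                if tour_cost(order) <= budget:
                    gain = sum(profit[v] for v in order)
                    best = max(best, (gain, order))
    return best

print(best_tour(budget=8))
```

With budget 8, visiting all three vertices is infeasible (cheapest full tour costs 10), so the optimum picks the profitable pair {1, 2} at cost 7; an exact or heuristic solver from the surveyed literature replaces the brute force for realistic sizes.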
Breath: An Adaptive Protocol for Industrial Control Applications Using Wireless Sensor Networks An energy-efficient, reliable and timely data transmission is essential for Wireless Sensor Networks (WSNs) employed in scenarios where plant information must be available for control applications. To reach a maximum efficiency, cross-layer interaction is a major design paradigm to exploit the complex interaction among the layers of the protocol stack. This is challenging because latency, reliability, and energy are at odds, and resource-constrained nodes support only simple algorithms. In this paper, the novel protocol Breath is proposed for control applications. Breath is designed for WSNs where nodes attached to plants must transmit information via multihop routing to a sink. Breath ensures a desired packet delivery and delay probabilities while minimizing the energy consumption of the network. The protocol is based on randomized routing, medium access control, and duty-cycling jointly optimized for energy efficiency. The design approach relies on a constrained optimization problem, whereby the objective function is the energy consumption and the constraints are the packet reliability and delay. The challenging part is the modeling of the interactions among the layers by simple expressions of adequate accuracy, which are then used for the optimization by in-network processing. The optimal working point of the protocol is achieved by a simple algorithm, which adapts to traffic variations and channel conditions with negligible overhead. The protocol has been implemented and experimentally evaluated on a testbed with off-the-shelf wireless sensor nodes, and it has been compared with a standard IEEE 802.15.4 solution. Analytical and experimental results show that Breath is tunable and meets reliability and delay requirements. Breath exhibits a good distribution of the working load, thus ensuring a long lifetime of the network. Therefore, Breath is a good candidate for efficient, reliable, and timely data gathering for control applications.
Wireless Charger Placement and Power Allocation for Maximizing Charging Quality. Wireless power transfer is a promising technology used to extend the lifetime of, and thus enhance the usability of, energy-hungry battery-powered devices. It enables energy to be wirelessly transmitted from power chargers to energy-receiving devices. Existing studies have mainly focused on maximizing network lifetime, optimizing charging efficiency, minimizing charging delay, etc. In this paper, we consider wireless charging service provision in a two-dimensional target area and focus on optimizing charging quality, where the power of each charger is adjustable. We first consider the charger Placement and Power allocation Problem with Stationary rechargeable devices (SP^3): Given a set of stationary devices and a set of candidate locations for placing chargers, find a charger placement and a corresponding power allocation to maximize the charging quality, subject to a power budget. We prove that SP^3 is NP-complete, and propose an approximation algorithm. We also show how to deal with mobile devices (MP^3), cost-constrained power reconfiguration (CRP), and optimization with more candidate locations. Extensive simulation results show that the proposed algorithms perform very closely to the optimum (the gap is no more than 4.5, 4.4, and 5.0 percent of OPT in SP^3, MP^3, and CRP, respectively), and outperform the baseline algorithms.
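A generic cost-benefit greedy heuristic conveys the flavor of budgeted charger placement. The device positions, candidate sites, costs, and the 1/(1+d) quality model below are illustrative assumptions, not the paper's actual algorithm or objective.

```python
import math

# Greedy sketch of budgeted charger placement: repeatedly place the charger
# with the largest added charging quality per unit cost until the budget runs
# out. All data below is an invented toy instance.
devices = [(0, 0), (4, 0), (2, 3)]
candidates = {"A": (1, 0), "B": (3, 1), "C": (2, 2)}
cost = {"A": 2, "B": 2, "C": 3}

def quality(placed):
    # Each device is served by its closest placed charger; received quality
    # decays with distance under a simple 1/(1+d) model (assumed).
    total = 0.0
    for dx, dy in devices:
        best = 0.0
        for name in placed:
            cx, cy = candidates[name]
            best = max(best, 1.0 / (1.0 + math.hypot(dx - cx, dy - cy)))
        total += best
    return total

budget, placed = 4, []
while True:
    options = [n for n in candidates if n not in placed and cost[n] <= budget]
    if not options:
        break
    # Pick the candidate with the best marginal-gain-to-cost ratio.
    best_site = max(options,
                    key=lambda n: (quality(placed + [n]) - quality(placed)) / cost[n])
    if quality(placed + [best_site]) <= quality(placed):
        break
    placed.append(best_site)
    budget -= cost[best_site]

print(placed, round(quality(placed), 3))
```

Because coverage-style quality functions like this one have diminishing returns, such greedy ratio rules are the standard starting point for budgeted placement; the paper's approximation algorithm addresses the joint placement-and-power version with guarantees.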
Near Optimal Charging Scheduling for 3-D Wireless Rechargeable Sensor Networks with Energy Constraints Wireless Rechargeable Sensor Network (WRSN) has become a hot research issue in recent years owing to the breakthrough of wireless power transfer technology. Most prior arts concentrate on developing scheduling schemes in 2-D networks where mobile chargers are placed on the ground. However, few of them are suitable for 3-D scenarios, making it difficult or even impossible to popularize them in practical applications. In this paper, we focus on the problem of charging a 3-D WRSN with an Unmanned Aerial Vehicle (UAV) to maximize charged energy within energy constraints. To deal with the problem, we propose a spatial discretization scheme to obtain a finite feasible charging spot set for the UAV in a 3-D environment and a temporal discretization scheme to determine the charging duration for each charging spot. Then, we transform the problem into a submodular maximization problem with routing constraints, and present a cost-efficient approximation algorithm with a provable approximation ratio of ((e-1)/(4e))(1-ε) to solve it. Lastly, extensive simulations and test-bed experiments show the superior performance of our algorithm.
Real-Time Signal Quality-Aware ECG Telemetry System for IoT-Based Health Care Monitoring. In this paper, we propose a novel signal quality-aware Internet of Things (IoT)-enabled electrocardiogram (ECG) telemetry system for continuous cardiac health monitoring applications. The proposed quality-aware ECG monitoring system consists of three modules: 1) ECG signal sensing module; 2) automated signal quality assessment (SQA) module; and 3) signal-quality aware (SQAw) ECG analysis and trans...
The Sky Is Not the Limit: LTE for Unmanned Aerial Vehicles. Many use cases of UAVs require beyond visual LOS communications. Mobile networks offer wide-area, high-speed, and secure wireless connectivity, which can enhance control and safety of UAV operations and enable beyond visual LOS use cases. In this article, we share some of our experience in LTE connectivity for low-altitude small UAVs. We first identify the typical airborne connectivity requirement...
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
Theory and Experiment on Formation-Containment Control of Multiple Multirotor Unmanned Aerial Vehicle Systems. Formation-containment control problems for multiple multirotor unmanned aerial vehicle (UAV) systems with directed topologies are studied, where the states of leaders form desired formation and the states of followers converge to the convex hull spanned by those of the leaders. First, formation-containment protocols are constructed based on the neighboring information of UAVs. Then, sufficient con...
Interpolating view and scene motion by dynamic view morphing We introduce the problem of view interpolation for dynamic scenes. Our solution to this problem extends the concept of view morphing and retains the practical advantages of that method. We are specifically concerned with interpolating between two reference views captured at different times, so that there is a missing interval of time between when the views were taken. The synthetic interpolations produced by our algorithm portray one possible physically-valid version of what transpired in the scene during the missing time. It is assumed that each object in the original scene underwent a series of rigid translations. Dynamic view morphing can work with widely-spaced reference views, sparse point correspondences, and uncalibrated cameras. When the camera-to-camera transformation can be determined, the synthetic interpolation will portray scene objects moving along straight-line, constant-velocity trajectories in world space
Secure and privacy preserving keyword searching for cloud storage services Cloud storage services enable users to remotely access data in a cloud anytime and anywhere, using any device, in a pay-as-you-go manner. Moving data into a cloud offers great convenience to users since they do not have to care about the large capital investment in both the deployment and management of the hardware infrastructures. However, allowing a cloud service provider (CSP), whose purpose is mainly for making a profit, to take the custody of sensitive data, raises underlying security and privacy issues. To keep user data confidential against an untrusted CSP, a natural way is to apply cryptographic approaches, by disclosing the data decryption key only to authorized users. However, when a user wants to retrieve files containing certain keywords using a thin client, the adopted encryption system should not only support keyword searching over encrypted data, but also provide high performance. In this paper, we investigate the characteristics of cloud storage services and propose a secure and privacy preserving keyword searching (SPKS) scheme, which allows the CSP to participate in the decipherment, and to return only files containing certain keywords specified by the users, so as to reduce both the computational and communication overhead in decryption for users, on the condition of preserving user data privacy and user querying privacy. Performance analysis shows that the SPKS scheme is applicable to a cloud environment.
Cost-Effective Authentic and Anonymous Data Sharing with Forward Security Data sharing has never been easier with the advances of cloud computing, and an accurate analysis on the shared data provides an array of benefits to both the society and individuals. Data sharing with a large number of participants must take into account several issues, including efficiency, data integrity and privacy of data owner. Ring signature is a promising candidate to construct an anonymous and authentic data sharing system. It allows a data owner to anonymously authenticate his data which can be put into the cloud for storage or analysis purpose. Yet the costly certificate verification in the traditional public key infrastructure (PKI) setting becomes a bottleneck for this solution to be scalable. Identity-based (ID-based) ring signature, which eliminates the process of certificate verification, can be used instead. In this paper, we further enhance the security of ID-based ring signature by providing forward security: If a secret key of any user has been compromised, all previous generated signatures that include this user still remain valid. This property is especially important to any large scale data sharing system, as it is impossible to ask all data owners to reauthenticate their data even if a secret key of one single user has been compromised. We provide a concrete and efficient instantiation of our scheme, prove its security and provide an implementation to show its practicality.
An evolutionary programming approach for securing medical images using watermarking scheme in invariant discrete wavelet transformation. • The proposed watermarking scheme utilized improved discrete wavelet transformation (IDWT) to retrieve the invariant wavelet domain. • The entropy mechanism is used to identify the suitable region for insertion of the watermark. This will improve the imperceptibility and robustness of the watermarking procedure. • The scaling factors such as PSNR and NC are considered for evaluation of the proposed method, and Particle Swarm Optimization is employed to optimize the scaling factors.
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies have focused on the structure design and assistance force optimization of soft LLEs, little work has been conducted on the hardware circuits design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and to introduce its hardware circuits design. A soft LLE for hip flexion assistance and a hardware circuits system with scalability were proposed. To assess the efficacy of the soft LLE, experimental tests evaluating sensor data acquisition, force tracking performance, lower limb muscle activity and metabolic cost were conducted. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
Scores: 1.102133, 0.1, 0.1, 0.1, 0.05, 0.002, 0.000151, 0, 0, 0, 0, 0, 0, 0
A secure and traceable E-DRM system based on mobile device In recent years, intellectual property violation events have led enterprises to take digital content protection seriously. Illegal copying and abuse of digital content have become a serious problem. Because mobile devices are more portable and individualized than personal computers, anyone can access network resources at any time from anywhere. However, valuable digital content without proper protection is vulnerable to unauthorized copying, modification and re-distribution, causing revenue losses to service providers. Thus, constructing an effective Digital Rights Management (DRM) system has become an important issue. On the basis of the mobile device, we propose an efficient digital rights management protocol. We apply symmetric cryptosystem, asymmetric cryptosystem, digital signature and one-way hash function mechanisms in our scheme. To overcome the weak computing resources of mobile devices, we also integrate digital certificate, hardware information and one-time password mechanisms so that security, persistent protection, integrity, authentication, usage tracking of DRM work, changeable access rights, integration and portability are assured. In this way, the mobile user can access the digital content securely in the enterprise via an authorization mechanism.
A practical secure and efficient enterprise digital rights management mechanism suitable for mobile environment. Digital rights management (DRM) is a term for access control technologies that are used by hardware manufacturers, publishers, copyright holders, and individuals to limit the use of digital content and devices. Enterprise digital rights management (E-DRM) is the application of DRM technology to prevent illegal users from accessing the confidential data of an enterprise. In 2010, Chang et al. proposed an efficient E-DRM scheme to solve the flaws of Chen's scheme. However, we still found some weaknesses in their scheme. In this article, we propose an improved secure and efficient E-DRM mechanism based on a one-way hash function and exclusive-or. Our mechanism overcomes the weaknesses in the scheme of Chang et al. and also reduces computation costs. In addition, we used BAN logic to show the correctness of our mechanism. Copyright (c) 2012 John Wiley & Sons, Ltd.
A Certificateless Authenticated Key Agreement Protocol for Digital Rights Management System.
A more secure digital rights management authentication scheme based on smart card Digital rights management (DRM) system is a technology-based mechanism to ensure only authorized access and legal distribution/consumption of the protected digital content. DRM system deals with the whole lifecycle of the digital content including production, management, distribution and consumption. DRM schemes are effective means for the transfer of digital content and safeguard the intellectual property. Recently, Yang et al. proposed a smart-card based DRM authentication scheme providing mutual authentication and session key establishment among all the participants of the DRM environment. We show that their scheme does not resist threats like the smart card attack; fails to provide a proper password update facility; and does not follow forward secrecy. To overcome these weaknesses, we propose an improvement of Yang et al.'s scheme. The security of our scheme remains intact even if the smart card of the user is lost. In our scheme, the user's smart card is capable of verifying the correctness of the inputted identity and password and hence contributes to an efficient and user-friendly password update phase. In addition, the session keys established between the participating entities are highly secure by virtue of the forward secrecy property. We conduct security analysis and comparison with related schemes to evaluate our improved scheme. During comparison, we also highlight the computational cost/time complexity at the user and the server side in terms of the execution time of various operations. The entire analysis shows that the design of the improved scheme is robust enough for the DRM environment.
Provably Secure Authenticated Content Key Distribution Framework For IoT-Enabled Enterprise Digital Rights Management Systems The internet of things (IoT) is one of the fastest growing technologies and is helping the enterprise industry. However, it brings greater challenges to privacy and security. Moreover, online piracy of content is an emerging issue; thus, authorised content access is required. Enterprise digital rights management (E-DRM) systems aim to manage access to electronic records, which necessitates establishing a standard set of access requirements, binding those access requirements to the electronic records, and computing the permission criteria. In this chain of control, efficient authenticated access to electronic documents is a crucial task. To address these issues, a protocol is designed. Security of the proposed scheme is proved in the random oracle model. Validation of security is done using 'Automated Validation of Internet Security Protocols and Applications (AVISPA)', which indicates that the protocol is safe. Moreover, the comparative study shows that it has the desirable attribute of efficiency.
An efficient and reliable E-DRM scheme for mobile environments Enterprise Digital Right Management (E-DRM) scheme is a mechanism that protects the confidential information of an enterprise from illegal accesses. In 2008, Chen proposed an E-DRM scheme for mobile devices, and Chen's scheme has low computation costs so it is suitable for mobile environments. However, we find that Chen's scheme is insecure because the symmetric key can be easily computed by an attacker. In addition, tampering with the user's password cannot be discovered by the mobile user. Moreover, there are some redundant computations for user authentication in Chen's scheme. To overcome the above-mentioned flaws, we propose an efficient and reliable E-DRM scheme for mobile environments in this paper. In the proposed scheme, the symmetric key is protected by a one-way hash function so it cannot be directly computed by an attacker. In addition, tampering with the transmitted message can be detected by the mobile users in the proposed scheme. Besides, the proposed scheme has no redundant computation for user authentication. Therefore, the proposed scheme is more efficient and reliable than Chen's scheme.
Distinctive Image Features from Scale-Invariant Keypoints This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
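The matching pipeline in the abstract above hinges on a fast nearest-neighbor search with a distance-ratio test that rejects ambiguous matches. A minimal sketch of that ratio test, assuming toy 2-D descriptors in place of real 128-D SIFT vectors and the commonly cited 0.8 threshold (illustrative, not the paper's implementation):

```python
# Nearest-neighbor descriptor matching with a distance-ratio test:
# accept a match only when the best database neighbor is clearly
# closer than the second best. Descriptors here are toy 2-D tuples.

def match(query, database, ratio=0.8):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    matches = []
    for qi, q in enumerate(query):
        d = sorted((dist2(q, db), di) for di, db in enumerate(database))
        best, second = d[0], d[1]
        # Squared distances, so compare against ratio**2.
        if best[0] < (ratio ** 2) * second[0]:
            matches.append((qi, best[1]))
    return matches

db = [(0.0, 0.0), (10.0, 10.0), (10.1, 10.0)]
print(match([(0.1, 0.0), (10.05, 10.0)], db))
# The second query is nearly equidistant from two database entries,
# so the ratio test discards it as ambiguous.
```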
An introduction to ROC analysis Receiver operating characteristics (ROC) graphs are useful for organizing classifiers and visualizing their performance. ROC graphs are commonly used in medical decision making, and in recent years have been used increasingly in machine learning and data mining research. Although ROC graphs are apparently simple, there are some common misconceptions and pitfalls when using them in practice. The purpose of this article is to serve as an introduction to ROC graphs and as a guide for using them in research.
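The core construction behind an ROC graph can be made concrete: rank instances by classifier score, sweep the decision threshold from high to low, and trace (false positive rate, true positive rate) points; the trapezoidal area under those points gives the AUC. A minimal sketch with illustrative toy data (ties between scores are handled naively):

```python
# Build an ROC curve by sweeping the score threshold, then compute
# AUC by the trapezoidal rule.

def roc_points(labels, scores):
    """Return (FPR, TPR) points, sweeping thresholds high-to-low."""
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Trapezoidal area under the ROC curve."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

pts = roc_points([1, 1, 0, 1, 0, 0], [0.9, 0.8, 0.7, 0.6, 0.5, 0.4])
print(auc(pts))
```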
A Comprehensive Survey on Internet of Things (IoT) Toward 5G Wireless Systems Recently, wireless technologies have been growing actively all around the world. In the context of wireless technology, fifth-generation (5G) technology has become a most challenging and interesting topic in wireless research. This article provides an overview of the Internet of Things (IoT) in 5G wireless systems. IoT in the 5G system will be a game changer for the future generation. It will open a door for new wireless architectures and smart services. The recent cellular network LTE (4G) will not be sufficient or efficient to meet the demands of multiple-device connectivity, high data rate, more bandwidth, low-latency quality of service (QoS), and low interference. To address these challenges, we consider 5G as the most promising technology. We provide a detailed overview of the challenges and vision of various communication industries in 5G IoT systems. The different layers in 5G IoT systems are discussed in detail. This article provides a comprehensive review of emerging and enabling technologies related to the 5G system that enables IoT. We consider the technology drivers for 5G wireless technology, such as 5G new radio (NR), multiple-input multiple-output (MIMO) antennas with beamforming technology, mm-wave communication technology, heterogeneous networks (HetNets), and the role of augmented reality (AR) in IoT, which are discussed in detail. We also provide a review of low-power wide-area networks (LPWANs), security challenges, and their control measures in the 5G IoT scenario. This article introduces the role of AR in the 5G IoT scenario. This article also discusses the research gaps and future directions. The focus is also on application areas of IoT in 5G systems. We, therefore, outline some of the important research directions in 5G IoT.
Priced Oblivious Transfer: How to Sell Digital Goods We consider the question of protecting the privacy of customers buying digital goods. More specifically, our goal is to allow a buyer to purchase digital goods from a vendor without letting the vendor learn what, and to the extent possible also when and how much, it is buying. We propose solutions which allow the buyer, after making an initial deposit, to engage in an unlimited number of priced oblivious-transfer protocols, satisfying the following requirements: As long as the buyer's balance contains sufficient funds, it will successfully retrieve the selected item and its balance will be debited by the item's price. However, the buyer should be unable to retrieve an item whose cost exceeds its remaining balance. The vendor should learn nothing except what must inevitably be learned, namely, the amount of interaction and the initial deposit amount (which imply upper bounds on the quantity and total price of all information obtained by the buyer). In particular, the vendor should be unable to learn what the buyer's current balance is or when it actually runs out of its funds. The technical tools we develop, in the process of solving this problem, seem to be of independent interest. In particular, we present the first one-round (two-pass) protocol for oblivious transfer that does not rely on the random oracle model (a very similar protocol was independently proposed by Naor and Pinkas [21]). This protocol is a special case of a more general "conditional disclosure" methodology, which extends a previous approach from [11] and adapts it to the 2-party setting.
Minimum acceleration criterion with constraints implies bang-bang control as an underlying principle for optimal trajectories of arm reaching movements. Rapid arm-reaching movements serve as an excellent test bed for any theory about trajectory formation. How are these movements planned? A minimum acceleration criterion has been examined in the past, and the solution obtained, based on the Euler-Poisson equation, failed to predict that the hand would begin and end the movement at rest (i.e., with zero acceleration). Therefore, this criterion was rejected in favor of the minimum jerk, which was proved to be successful in describing many features of human movements. This letter follows an alternative approach and solves the minimum acceleration problem with constraints using Pontryagin's minimum principle. We use the minimum principle to obtain minimum acceleration trajectories and use the jerk as a control signal. In order to find a solution that does not include nonphysiological impulse functions, constraints on the maximum and minimum jerk values are assumed. The analytical solution provides a three-phase piecewise constant jerk signal (bang-bang control) where the magnitude of the jerk and the two switching times depend on the magnitude of the maximum and minimum available jerk values. This result fits the observed trajectories of reaching movements and takes into account both the extrinsic coordinates and the muscle limitations in a single framework. The minimum acceleration with constraints principle is discussed as a unifying approach for many observations about the neural control of movements.
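The three-phase bang-bang jerk profile described above is easy to verify numerically. The following sketch (not the paper's derivation) integrates a +U, -U, +U jerk signal with symmetric switching times t1 = T/4 and T - t1, confirming that the movement starts and ends at rest with zero acceleration; U, T, and the step count are illustrative:

```python
# Integrate a three-phase piecewise-constant jerk (bang-bang) signal.
# With switches at T/4 and 3T/4, the terminal acceleration and
# velocity both return to zero while the hand still moves forward.

def bang_bang_trajectory(U=1.0, T=1.0, n=4000):
    dt = T / n
    a = v = x = 0.0
    for i in range(n):
        t = i * dt
        j = U if (t < T / 4 or t >= 3 * T / 4) else -U
        a_new = a + j * dt           # exact: jerk is constant per step
        v += (a + a_new) / 2.0 * dt  # trapezoid: exact for linear a
        x += v * dt
        a = a_new
    return a, v, x

a_T, v_T, x_T = bang_bang_trajectory()
print(a_T, v_T, x_T)  # terminal acceleration and velocity are ~0
```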
Wireless Networks with RF Energy Harvesting: A Contemporary Survey Radio frequency (RF) energy transfer and harvesting techniques have recently become alternative methods to power next-generation wireless networks. As this emerging technology enables proactive energy replenishment of wireless devices, it is advantageous in supporting applications with quality of service (QoS) requirements. In this paper, we present a comprehensive literature review on the research progress in wireless networks with RF energy harvesting capability, referred to as RF energy harvesting networks (RF-EHNs). First, we present an overview of the RF-EHNs, including system architecture, RF energy harvesting techniques and existing applications. Then, we present the background in circuit design as well as the state-of-the-art circuitry implementations, and review the communication protocols specially designed for RF-EHNs. We also explore various key design issues in the development of RF-EHNs according to the network types, i.e., single-hop networks, multi-antenna networks, relay networks, and cognitive radio networks. Finally, we envision some open research directions.
Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network. In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (including data from 63 patients with 34,281 events) and testing (including data from 19 patients with 8571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F-score of 0.99 were attained with the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
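The three layer types the model stacks (1-D convolution, ReLU activation, max pooling) can each be sketched in a few lines of dependency-free Python; the kernel and toy signal below are illustrative, not the trained network:

```python
# Minimal building blocks of a 1-D CNN: valid-mode convolution,
# ReLU activation, and non-overlapping max pooling.

def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def max_pool(xs, size=2):
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

sig = [0.0, 1.0, -1.0, 2.0, 0.5, -0.5]
out = max_pool(relu(conv1d(sig, [1.0, -1.0])))
print(out)  # -> [2.0, 1.5]
```

A real detector would learn the kernel weights and stack several such layers, but the data flow per layer is exactly this.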
A Muscle Synergy-Driven ANFIS Approach to Predict Continuous Knee Joint Movement Continuous motion prediction plays a significant role in realizing seamless control of robotic exoskeletons and orthoses. Explicitly modeling the relationship between coordinated muscle activations from surface electromyography (sEMG) and human limb movements provides a new path of sEMG-based human–machine interface. Instead of the numeric features from individual channels, we propose a muscle synergy-driven adaptive network-based fuzzy inference system (ANFIS) approach to predict continuous knee joint movements, in which muscle synergy reflects the motor control information to coordinate muscle activations for performing movements. Four human subjects participated in the experiment while walking at five types of speed: 2.0 km/h, 2.5 km/h, 3.0 km/h, 3.5 km/h, and 4.0 km/h. The study finds that the acquired muscle synergies associate the muscle activations with human joint movements in a low-dimensional space and have been further utilized for predicting knee joint angles. The proposed approach outperformed commonly used numeric features from individual sEMG channels with an average correlation coefficient of 0.92 ± 0.05. Results suggest that the correlation between muscle activations and knee joint movements is captured by the muscle synergy-driven ANFIS model and can be utilized for the estimation of continuous joint angles.
1.057129
0.055
0.05
0.05
0.05
0.045839
0
0
0
0
0
0
0
0
Distributed wireless power transfer in sensor networks with multiple Mobile Chargers. We investigate the problem of efficient wireless power transfer in wireless sensor networks. In our approach, special mobile entities (called the Mobile Chargers) traverse the network and wirelessly replenish the energy of sensor nodes. In contrast to most current approaches, we envision methods that are distributed and use limited network information. We propose four new protocols for efficient charging, addressing key issues which we identify, most notably (i) what are good coordination procedures for the Mobile Chargers and (ii) what are good trajectories for the Mobile Chargers. Two of our protocols (DC, DCLK) perform distributed, limited network knowledge coordination and charging, while two others (CC, CCGK) perform centralized, global network knowledge coordination and charging. As detailed simulations demonstrate, one of our distributed protocols outperforms a known state of the art method, while its performance gets quite close to the performance of the powerful centralized global knowledge method.
IoT Elements, Layered Architectures and Security Issues: A Comprehensive Survey. The use of the Internet is growing in this day and age, and a new area that uses the Internet has developed, called the Internet of Things (IoT). It enables machines and objects to communicate, compute and coordinate with each other. It is an enabler for the intelligence affixed to several essential features of the modern world, such as homes, hospitals, buildings, transports and cities. Security and privacy are some of the critical issues related to the wide application of IoT, and these issues prevent its wide adoption. In this paper, we present an overview of different layered architectures of IoT and the security attacks from the perspective of layers. In addition, a review of mechanisms that provide solutions to these issues is presented with their limitations. Furthermore, we suggest a new secure layered architecture of IoT to overcome these issues.
A Multicharger Cooperative Energy Provision Algorithm Based On Density Clustering In The Industrial Internet Of Things Wireless sensor networks (WSNs) are an important core of the Industrial Internet of Things (IIoT). Wireless rechargeable sensor networks (WRSNs) are sensor networks that are charged by mobile chargers (MCs), and can achieve self-sufficiency. Therefore, the development of WRSNs has begun to attract widespread attention in recent years. Most of the existing energy replenishment algorithms for MCs use one or more MCs to serve the whole network in WRSNs. However, a single MC is not suitable for large-scale network environments, and multiple MCs make the network cost too high. Thus, this paper proposes a collaborative charging algorithm based on network density clustering (CCA-NDC) in WRSNs. This algorithm uses the mean-shift algorithm based on density to cluster, and then the mother wireless charger vehicle (MWCV) carries multiple sub wireless charger vehicles (SWCVs) to charge the nodes in each cluster by using a gradient descent optimization algorithm. The experimental results confirm that the proposed algorithm can effectively replenish the energy of the network and make the network more stable.
Dynamic Charging Scheme Problem With Actor–Critic Reinforcement Learning The energy problem is one of the most important challenges in the application of sensor networks. With the development of wireless charging technology and intelligent mobile chargers (MCs), the energy problem can be solved by a wireless charging strategy. In the practical application of wireless rechargeable sensor networks (WRSNs), the energy consumption rate of nodes changes dynamically due to many uncertainties, such as the death and different transmission tasks of sensor nodes. However, existing works focus on on-demand schemes, which do not fully consider real-time global charging scheduling. In this article, a novel dynamic charging scheme (DCS) in WRSNs based on the actor-critic reinforcement learning (ACRL) algorithm is proposed. In the ACRL, we introduce gated recurrent units (GRUs) to capture the relationships of charging actions in time sequence. Using the actor network with one GRU layer, we can pick an optimal or near-optimal sensor node from candidates as the next charging target more quickly and speed up the training of the model. Meanwhile, we take the tour length and the number of dead nodes as the reward signal. The actor and critic networks are updated by the error criterion function of R and V. Compared with current on-demand charging scheduling algorithms, extensive simulations show that the proposed ACRL algorithm surpasses heuristic algorithms such as Greedy, DP, nearest-job-next with preemption, and TSCA in average lifetime and tour length, especially as the size and complexity of WRSNs increase.
Adaptive Wireless Power Transfer in Mobile Ad Hoc Networks. We investigate the interesting impact of mobility on the problem of efficient wireless power transfer in ad hoc networks. We consider a set of mobile agents (consuming energy to perform certain sensing and communication tasks), and a single static charger (with finite energy) which can recharge the agents when they get in its range. In particular, we focus on the problem of efficiently computing the appropriate range of the charger with the goal of prolonging the network lifetime. We first demonstrate (under the realistic assumption of fixed energy supplies) the limitations of any fixed charging range and, therefore, the need for (and power of) a dynamic selection of the charging range, by adapting to the behavior of the mobile agents which is revealed in an online manner. We investigate the complexity of optimizing the selection of such an adaptive charging range, by showing that two simplified offline optimization problems (closely related to the online one) are NP-hard. To effectively address the involved performance trade-offs, we finally present a variety of adaptive heuristics, assuming different levels of agent information regarding their mobility and energy.
A survey on cross-layer solutions for wireless sensor networks Ever since wireless sensor networks (WSNs) have emerged, different optimizations have been proposed to overcome their constraints. Furthermore, the proposal of new applications for WSNs have also created new challenges to be addressed. Cross-layer approaches have proven to be the most efficient optimization techniques for these problems, since they are able to take the behavior of the protocols at each layer into consideration. Thus, this survey proposes to identify the key problems of WSNs and gather available cross-layer solutions for them that have been proposed so far, in order to provide insights on the identification of open issues and provide guidelines for future proposals.
Research on Cost-Balanced Mobile Energy Replenishment Strategy for Wireless Rechargeable Sensor Networks In order to maximize the utilization rate of the Mobile Wireless Chargers (MWCs) and reduce the recharging delay in large-scale Wireless Rechargeable Sensor Networks (WRSNs), a Cost-Balanced Mobile Energy Replenishment Strategy (CBMERS) is proposed in this paper. Firstly, nodes are assigned into groups according to their remaining lifetime, which ensures that only the ones with lower residual energy are recharged in each time slot. Then, to balance energy consumption among multiple MWCs, the moving distance as well as the power cost of the MWC are taken as constraints to get the optimal trajectory allocation scheme. Moreover, by further adjusting the amount of energy being replenished to some sensor nodes, it is ensured that the MWC has enough energy to fulfill the recharging task and return to the base station. Experiment results show that, compared with the Periodic recharging strategy and the Cluster-based Multiple Charges Coordination algorithm (C-MCC), the proposed method can improve the recharging efficiency of MWCs by about 48.22% and 43.35%, and the average waiting time of nodes is also reduced by about 55.72% and 30.7%, respectively.
Network Under Limited Mobile Devices: A New Technique for Mobile Charging Scheduling With Multiple Sinks. Recently, many studies have investigated scheduling mobile devices to recharge and collect data from sensors in wireless rechargeable sensor networks (WRSNs) such that the network lifetime is prolonged. In reality, because mobile devices are more powerful and expensive than sensors, the cost of the mobile devices often consumes a high portion of the budget. Due to a limited budget, the number of m...
Charge selection algorithms for maximizing sensor network life with UAV-based limited wireless recharging Monitoring bridges with wireless sensor networks aids in detecting failures early, but faces power challenges in ensuring reasonable network lifetimes. Recharging select nodes with Unmanned Aerial Vehicles (UAVs) provides a solution that currently can recharge a single node. However, questions arise on the effectiveness of a limited recharging system, the appropriate node to recharge, and the best sink selection algorithm for improving network lifetime given a limited recharging system. This paper simulates such a network in order to answer those questions. It explores five different sink positioning algorithms to find which provides the longest network lifetime with the added capability of limited recharging. For a range of network sizes, our results show that network lifetime improves by over 350% when recharging a single node in the network, the best node to recharge is the one with the lowest power level, and that either the Greedy Heuristic or LP sink selection algorithms perform equally well.
Optimization Of Radio And Computational Resources For Energy Efficiency In Latency-Constrained Application Offloading Providing femto access points (FAPs) with computational capabilities will allow (either total or partial) offloading of highly demanding applications from smartphones to the so-called femto-cloud. Such offloading promises to be beneficial in terms of battery savings at the mobile terminal (MT) and/or in latency reduction in the execution of applications. However, for this promise to become a reality, the energy and/or the time required for the communication process must be compensated by the energy and/or the time savings that result from the remote computation at the FAPs. For this problem, we provide in this paper a framework for the joint optimization of the radio and computational resource usage exploiting the tradeoff between energy consumption and latency. Multiple antennas are assumed to be available at the MT and the serving FAP. As a result of the optimization, the optimal communication strategy (e.g., transmission power, rate, and precoder) is obtained, as well as the optimal distribution of the computational load between the handset and the serving FAP. This paper also establishes the conditions under which total or no offloading is optimal, determines which is the minimum affordable latency in the execution of the application, and analyzes, as a particular case, the minimization of the total consumed energy without latency constraints.
Unsupervised construction of large paraphrase corpora: exploiting massively parallel news sources We investigate unsupervised techniques for acquiring monolingual sentence-level paraphrases from a corpus of temporally and topically clustered news articles collected from thousands of web-based news sources. Two techniques are employed: (1) simple string edit distance, and (2) a heuristic strategy that pairs initial (presumably summary) sentences from different news stories in the same cluster. We evaluate both datasets using a word alignment algorithm and a metric borrowed from machine translation. Results show that edit distance data is cleaner and more easily aligned than the heuristic data, with an overall alignment error rate (AER) of 11.58% on a similarly extracted test set. On test data extracted by the heuristic strategy, however, performance of the two training sets is similar, with AERs of 13.2% and 14.7%, respectively. Analysis of 100 pairs of sentences from each set reveals that the edit distance data lacks many of the complex lexical and syntactic alternations that characterize monolingual paraphrase. The summary sentences, while less readily alignable, retain more of the non-trivial alternations that are of greatest interest in learning paraphrase relationships.
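The first acquisition technique, string edit distance, is the standard Levenshtein dynamic program; a word-level sketch (the sentence pair below is illustrative):

```python
# Word-level Levenshtein edit distance via dynamic programming:
# d[i][j] = minimum edits to turn a[:i] into b[:j].

def edit_distance(a, b):
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

s1 = "the stock market rose sharply today".split()
s2 = "the stock market climbed sharply".split()
print(edit_distance(s1, s2))  # -> 2 (one substitution, one deletion)
```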
Digital image splicing detection based on Markov features in DCT and DWT domain Image splicing is very common and fundamental in image tampering. To recover people's trust in digital images, the detection of image splicing is in great need. In this paper, a Markov based approach is proposed to detect this specific artifact. Firstly, the original Markov features generated from the transition probability matrices in DCT domain by Shi et al. is expanded to capture not only the intra-block but also the inter-block correlation between block DCT coefficients. Then, more features are constructed in DWT domain to characterize the three kinds of dependency among wavelet coefficients across positions, scales and orientations. After that, feature selection method SVM-RFE is used to fulfill the task of feature reduction, making the computational cost more manageable. Finally, support vector machine (SVM) is exploited to classify the authentic and spliced images using the final dimensionality-reduced feature vector. The experiment results demonstrate that the proposed approach can outperform some state-of-the-art methods.
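The Markov-feature idea can be sketched independently of the DCT/DWT details: clip a sequence of coefficient differences to [-T, T] and estimate the transition probability matrix whose entries serve as features. The toy sequence below stands in for rounded block-DCT coefficient differences (illustrative only):

```python
# Estimate a transition probability matrix over clipped coefficient
# values; its (2T+1)^2 entries are the Markov features.

T = 2  # threshold; values are clipped to [-T, T]

def transition_matrix(seq, t=T):
    clipped = [max(-t, min(t, v)) for v in seq]
    size = 2 * t + 1
    counts = [[0] * size for _ in range(size)]
    for cur, nxt in zip(clipped, clipped[1:]):
        counts[cur + t][nxt + t] += 1
    # Normalize each row into conditional probabilities P(next | cur).
    probs = []
    for row in counts:
        total = sum(row)
        probs.append([c / total if total else 0.0 for c in row])
    return probs

P = transition_matrix([0, 1, 1, -3, 0, 2, 2, 0])
print(P[1 + T][1 + T])  # P(next = 1 | current = 1)
```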
DREAMER: A Database for Emotion Recognition Through EEG and ECG Signals From Wireless Low-cost Off-the-Shelf Devices. In this paper, we present DREAMER, a multimodal database consisting of electroencephalogram (EEG) and electrocardiogram (ECG) signals recorded during affect elicitation by means of audio-visual stimuli. Signals from 23 participants were recorded along with the participants self-assessment of their affective state after each stimuli, in terms of valence, arousal, and dominance. All the signals were...
Inferring Latent Traffic Demand Offered To An Overloaded Link With Modeling Qos-Degradation Effect In this paper, we propose a CTRIL (Common Trend and Regression with Independent Loss) model to infer latent traffic demand in overloaded links as well as how much it is reduced due to QoS (Quality of Service) degradation. To appropriately provision link bandwidth for such overloaded links, we need to infer how much traffic will increase without QoS degradation. Because original latent traffic demand cannot be observed, we propose a method that compares the other traffic time series of an underloaded link, and by assuming that the latent traffic demands in both overloaded and underloaded are common, and actualized traffic demand in the overloaded link is decreased from common pattern due to the effect of QoS degradation. To realize the method, we developed a CTRIL model on the basis of a state-space model where observed traffic is generated from a latent trend but is decreased by the QoS degradation. By applying the CTRIL model to actual HTTP (Hypertext transfer protocol) traffic and QoS time series data, we reveal that 1% packet loss decreases traffic demand by 12.3%, and the estimated latent traffic demand is larger than the observed one by 23.0%.
1.02463
0.03
0.03
0.03
0.03
0.02
0.02
0.017791
0.007697
0.000001
0
0
0
0
Zero-Bias Deep-Learning-Enabled Quickest Abnormal Event Detection in IoT Abnormal event detection with the lowest latency is an indispensable function for safety-critical systems, such as cyber defense systems. However, as systems become increasingly complicated, conventional sequential event detection methods become less effective, especially when we need to define indicator metrics from complicated data manually. Although deep neural networks (DNNs) have been used to handle heterogeneous data, the theoretic assurability and explainability are still insufficient. This article provides a holistic framework for the quickest and sequential detection of abnormalities and time-dependent abnormal events. We explore the latent space characteristics of zero-bias neural networks considering the classification boundaries and abnormalities. We then provide a novel method to convert zero-bias DNN classifiers into performance-assured binary abnormality detectors. Finally, we provide a sequential quickest detection (QD) scheme that provides the theoretically assured lowest abnormal event detection delay under false alarm constraints using the converted abnormality detector. We verify the effectiveness of the framework using real massive signal records in aviation communication systems and simulation. Codes and data are available at https://github.com/pcwhy/AbnormalityDetectionInZbDNN.
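The sequential quickest-detection step can be illustrated with a classic CUSUM-style statistic, a standard QD tool rather than the paper's exact scheme: accumulate evidence of abnormality and raise an alarm once a threshold is crossed. The drift, threshold, and data below are all illustrative:

```python
# One-sided CUSUM-style quickest detection: accumulate (sample - drift),
# reset at zero, and alarm at the first threshold crossing.

def cusum_alarm(samples, drift=0.5, threshold=3.0):
    s = 0.0
    for t, x in enumerate(samples):
        s = max(0.0, s + x - drift)  # evidence of abnormality
        if s > threshold:
            return t                 # first alarm time
    return None                      # no alarm raised

# Nominal data near 0, then a mean shift to ~2 starting at index 5.
data = [0.1, -0.2, 0.0, 0.1, -0.1, 2.0, 2.1, 1.9, 2.2, 2.0]
print(cusum_alarm(data))  # -> 6 (alarm shortly after the shift)
```

The drift and threshold trade detection delay against false alarm rate, which is exactly the constraint the paper's scheme formalizes.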
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that cannot be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
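BLEU's core mechanics can be sketched for a single candidate/reference pair: clipped (modified) n-gram precisions combined by geometric mean, scaled by a brevity penalty. The real metric aggregates counts over a whole corpus and typically uses up to 4-grams; this minimal version uses bigrams and toy sentences:

```python
# Sentence-level BLEU sketch: clipped n-gram precision, geometric
# mean over n, and a brevity penalty for short candidates.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        clipped = sum(min(c, ref[g]) for g, c in cand.items())
        total = sum(cand.values())
        precisions.append(clipped / total if total else 0.0)
    if min(precisions) == 0.0:
        return 0.0  # no smoothing in this sketch
    log_avg = sum(math.log(p) for p in precisions) / max_n
    bp = 1.0 if len(candidate) > len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(log_avg)

cand = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
print(round(bleu(cand, ref), 4))
```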
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
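The bidirectional structure described above (one recurrent pass forward in time, one backward, with the two hidden states concatenated at each step) can be sketched in numpy. Weight shapes, the tanh cell, and the initialization here are illustrative choices, not taken from the paper.

```python
import numpy as np

def rnn_pass(xs, W, U, b, reverse=False):
    """Simple tanh RNN over a sequence; reverse=True runs it backward in time."""
    order = reversed(range(len(xs))) if reverse else range(len(xs))
    h = np.zeros(U.shape[0])
    hs = [None] * len(xs)
    for t in order:
        h = np.tanh(W @ xs[t] + U @ h + b)
        hs[t] = h
    return hs

def brnn_forward(xs, params_f, params_b):
    """Bidirectional RNN: concatenate forward and backward hidden states,
    so the state at step t sees both past and future inputs."""
    hf = rnn_pass(xs, *params_f)
    hb = rnn_pass(xs, *params_b, reverse=True)
    return [np.concatenate([f, b]) for f, b in zip(hf, hb)]

rng = np.random.default_rng(0)
d_in, d_h, T = 3, 4, 5
make = lambda: (rng.normal(size=(d_h, d_in)) * 0.1,   # input weights
                rng.normal(size=(d_h, d_h)) * 0.1,    # recurrent weights
                np.zeros(d_h))                        # bias
xs = [rng.normal(size=d_in) for _ in range(T)]
hs = brnn_forward(xs, make(), make())   # T states, each of size 2 * d_h
```

Because the backward pass consumes the whole sequence, a BRNN is only applicable when the full input is available before prediction, which is exactly the regression/classification setting the paper evaluates.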
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended to Bob, and non-repudiation of receipt evidences destined to Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with trusted third party (TTP). For the later ones we discuss the evolution of the TTP's involvement and, between others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probability. Because there are no crossover rate and mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than the conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results over the conventional one and other methods.
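The idea of replacing fixed crossover and mutation *rates* with conditions can be sketched on a toy one-max problem. The paper's exact conditions are not reproduced here; as an illustrative stand-in, crossover fires only when the two parents differ, and mutation only when the child duplicates a parent.

```python
import random

def conditional_ga(n_bits=20, pop_size=30, generations=60, seed=1):
    """Toy GA on one-max with conditional (rate-free) operators:
    no crossover rate or mutation rate needs to be tuned."""
    rng = random.Random(seed)
    fitness = sum  # one-max: count the 1-bits
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # binary tournament selection
            a, b = rng.sample(pop, 2)
            return max(a, b, key=fitness)
        children = []
        while len(children) < pop_size:
            p1, p2 = pick(), pick()
            if p1 != p2:                     # conditional crossover (stand-in condition)
                cut = rng.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            if child in (p1, p2):            # conditional mutation (stand-in condition)
                i = rng.randrange(n_bits)
                child[i] ^= 1
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = conditional_ga()
```

The practical appeal matches the abstract's claim: with conditions instead of probabilities, there are no operator rates left to select by trial and error.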
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal to develop interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which rise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in specific, in this paper, we enfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidate it based on our comprehensive empirical experience with research in kindergarten settings, both laboratory and real-world contexts, and (iii) provide a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Detecting Double JPEG Compressed Color Images With the Same Quantization Matrix in Spherical Coordinates Detection of double Joint Photographic Experts Group (JPEG) compression is an important part of image forensics. Although methods in past studies have been presented for detecting double JPEG compression with a different quantization matrix, the detection of double JPEG compression with the same quantization matrix is still a challenging problem. In this paper, an effective method to detect recompression in color images by using the conversion error, rounding error, and truncation error on the pixel in the spherical coordinate system is proposed. The randomness of truncation errors, rounding errors, and quantization errors results in random conversion errors. The pixel number of the conversion error is used to extract six-dimensional features. Truncation error and rounding error on the pixel in its three channels are mapped to the spherical coordinate system based on the relation of a color image to the pixel values in the three channels. The former is converted into amplitude and angles to extract 30-dimensional features, and 8-dimensional auxiliary features are extracted from the number of special points and special blocks. As a result, a total of 44-dimensional features have been used in the classification by using the support vector machine (SVM) method. Thereafter, the support vector machine recursive feature elimination (SVMRFE) method is used to improve the classification accuracy. The experimental results show that the performance of the proposed method is better than that of the existing methods.
An Effective Method for Detecting Double JPEG Compression With the Same Quantization Matrix Detection of double JPEG compression plays an important role in digital image forensics. Some successful approaches have been proposed to detect double JPEG compression when the primary and secondary compressions have different quantization matrices. However, detecting double JPEG compression with the same quantization matrix is still a challenging problem. In this paper, an effective error-based statistical feature extraction scheme is presented to solve this problem. First, a given JPEG file is decompressed to form a reconstructed image. An error image is obtained by computing the differences between the inverse discrete cosine transform coefficients and pixel values in the reconstructed image. Two classes of blocks in the error image, namely, rounding error block and truncation error block, are analyzed. Then, a set of features is proposed to characterize the statistical differences of the error blocks between single and double JPEG compressions. Finally, the support vector machine classifier is employed to identify whether a given JPEG image is doubly compressed or not. Experimental results on three image databases with various quality factors have demonstrated that the proposed method can significantly outperform the state-of-the-art method.
Combining spatial and DCT based Markov features for enhanced blind detection of image splicing. Nowadays, it is extremely simple to manipulate the content of digital images without leaving perceptual clues due to the availability of powerful image editing tools. Image tampering can easily devastate the credibility of images as a medium for personal authentication and a record of events. With the daily upload of millions of pictures to the Internet and the move towards paperless workplaces and e-government services, it becomes essential to develop automatic tampering detection techniques with reliable results. This paper proposes an enhanced technique for blind detection of image splicing. It extracts and combines Markov features in spatial and Discrete Cosine Transform domains to detect the artifacts introduced by the tampering operation. To reduce the computational complexity due to high dimensionality, Principal Component Analysis is used to select the most relevant features. Then, an optimized support vector machine with radial-basis function kernel is built to classify the image as being tampered or authentic. The proposed technique is evaluated on a publicly available image splicing dataset using cross validation. The results showed that the proposed technique outperforms the state-of-the-art splicing detection methods.
A Clustering-Based Framework for Improving the Performance of JPEG Quantization Step Estimation Quantization plays a pivotal role in JPEG compression with respect to the tradeoff between image fidelity and storage size, and the blind estimation of quantization parameters has attracted considerable interest in the fields of image steganalysis and forensics. Existing estimation methods have made great progress, but they usually suffer a sharp decline in accuracy when addressing small-size JPEG decompressed bitmaps due to the insufficiency of coefficients. Aiming to alleviate this issue, this paper proposes a generic clustering-based framework to improve the performance of the existing methods. The core idea is to gather as many coefficients as possible by clustering subbands before feeding them into a step estimator. The proposed framework is implemented using hierarchical clustering with two kinds of histogram-like features. Extensive experiments are conducted to validate the effectiveness of the proposed framework on a variety of images of different sizes and quality factors, and the results show that notable improvements can be achieved. In addition to quantization step estimation, we believe the idea behind the proposed framework might provide inspiration for other forensic tasks to alleviate their performance issues induced by sample insufficiency.
Image splicing detection based on convolutional neural network with weight combination strategy With the rapid spread of splicing manipulation, its negative effects have grown, and the demand for image splicing detection algorithms is rising dramatically. In this paper, a new image splicing detection method is proposed which is based on a convolutional neural network (CNN) with a weight combination strategy. In the proposed method, three types of features are selected to distinguish splicing manipulation: YCbCr features, edge features, and photo response non-uniformity (PRNU) features, which are combined by weight according to the combination strategy. Different from other methods, these weight parameters are automatically adjusted during the CNN training process until the best ratio is obtained. Experiments show that the proposed method has higher accuracy than other CNN-based methods, and the depth of the proposed CNN is much less than that of the compared methods.
Adversarial Learning for Constrained Image Splicing Detection and Localization based on Atrous Convolution Constrained image splicing detection and localization (CISDL), which investigates two input suspected images and identifies whether one image has suspected regions pasted from the other, is a newly proposed challenging task for image forensics. In this paper, we propose a novel adversarial learning framework to learn a deep matching network for CISDL. Our framework mainly consists of three building blocks. First, a deep matching network based on atrous convolution (DMAC) aims to generate two high-quality candidate masks, which indicate suspected regions of the two input images. In DMAC, atrous convolution is adopted to extract features with rich spatial information, a correlation layer based on a skip architecture is proposed to capture hierarchical features, and atrous spatial pyramid pooling is constructed to localize tampered regions at multiple scales. Second, a detection network is designed to rectify inconsistencies between the two corresponding candidate masks. Finally, a discriminative network drives the DMAC network to produce masks that are hard to distinguish from ground-truth ones. The detection network and the discriminative network collaboratively supervise the training of DMAC in an adversarial way. Besides, a sliding window-based matching strategy is investigated for high-resolution images matching. Extensive experiments, conducted on five groups of datasets, demonstrate the effectiveness of the proposed framework and the superior performance of DMAC.
Training Strategies and Data Augmentations in CNN-based DeepFake Video Detection The fast and continuous growth in number and quality of deepfake videos calls for the development of reliable detection systems capable of automatically warning users on social media and on the Internet about the potential untruthfulness of such contents. While algorithms, software, and smartphone apps are getting better every day in generating manipulated videos and swapping faces, the accuracy of automated systems for face forgery detection in videos is still quite limited and generally biased toward the dataset used to design and train a specific detection system. In this paper we analyze how different training strategies and data augmentation techniques affect CNN-based deepfake detectors when training and testing on the same dataset or across different datasets.
Probabilistic encryption A new probabilistic model of data encryption is introduced. For this model, under suitable complexity assumptions, it is proved that extracting any information about the cleartext from the cyphertext is hard on the average for an adversary with polynomially bounded computational resources. The proof holds for any message space with any probability distribution. The first implementation of this model is presented. The security of this implementation is proved under the intractability assumption of deciding Quadratic Residuosity modulo composite numbers whose factorization is unknown.
A Privacy-Preserving and Copy-Deterrence Content-Based Image Retrieval Scheme in Cloud Computing. With the increasing importance of images in people’s daily life, content-based image retrieval (CBIR) has been widely studied. Compared with text documents, images consume much more storage space. Hence, its maintenance is considered to be a typical example for cloud storage outsourcing. For privacy-preserving purposes, sensitive images, such as medical and personal images, need to be encrypted before outsourcing, which makes the CBIR technologies in plaintext domain to be unusable. In this paper, we propose a scheme that supports CBIR over encrypted images without leaking the sensitive information to the cloud server. First, feature vectors are extracted to represent the corresponding images. After that, the pre-filter tables are constructed by locality-sensitive hashing to increase search efficiency. Moreover, the feature vectors are protected by the secure kNN algorithm, and image pixels are encrypted by a standard stream cipher. In addition, considering the case that the authorized query users may illegally copy and distribute the retrieved images to someone unauthorized, we propose a watermark-based protocol to deter such illegal distributions. In our watermark-based protocol, a unique watermark is directly embedded into the encrypted images by the cloud server before images are sent to the query user. Hence, when image copy is found, the unlawful query user who distributed the image can be traced by the watermark extraction. The security analysis and the experiments show the security and efficiency of the proposed scheme.
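The pre-filter-table step mentioned above, locality-sensitive hashing over feature vectors, can be sketched generically with random hyperplanes. This omits the scheme's security machinery entirely (secure kNN protection of the features, stream-cipher encryption of pixels, watermarking); it only illustrates how LSH buckets cut the candidate set before any exact comparison.

```python
import numpy as np

def build_lsh_index(vectors, n_tables=4, n_bits=8, seed=0):
    """Random-hyperplane LSH pre-filter: each table hashes a vector to a
    tuple of sign bits; vectors sharing a bucket are candidate neighbours."""
    rng = np.random.default_rng(seed)
    dim = vectors.shape[1]
    planes = [rng.normal(size=(n_bits, dim)) for _ in range(n_tables)]

    def key(v, P):
        return tuple((P @ v > 0).astype(int))  # which side of each hyperplane

    tables = [dict() for _ in range(n_tables)]
    for idx, v in enumerate(vectors):
        for tab, P in zip(tables, planes):
            tab.setdefault(key(v, P), []).append(idx)

    def query(q):
        """Union of the query's buckets across all tables."""
        cands = set()
        for tab, P in zip(tables, planes):
            cands.update(tab.get(key(q, P), []))
        return cands

    return query

rng = np.random.default_rng(1)
vecs = rng.normal(size=(100, 16))
query = build_lsh_index(vecs)
hits = query(vecs[7])   # a stored vector always collides with itself
```

Only the (typically small) candidate set returned by `query` needs the expensive exact similarity computation, which is the efficiency gain the pre-filter tables provide.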
Reciprocal N-body Collision Avoidance In this paper, we present a formal approach to reciprocal n-body collision avoidance, where multiple mobile robots need to avoid collisions with each other while moving in a common workspace. In our formulation, each robot acts fully in- dependently, and does not communicate with other robots. Based on the definition of velocity obstacles (5), we derive sufficient conditions for collision-free motion by reducing the problem to solving a low-dimensional linear program. We test our approach on several dense and complex simulation scenarios involving thousands of robots and compute collision-free actions for all of them in only a few millisec- onds. To the best of our knowledge, this method is the first that can guarantee local collision-free motion for a large number of robots in a cluttered workspace.
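The velocity-obstacle definition underlying the paper reduces to a simple geometric test: two discs collide if and only if the relative position, carried by the relative velocity, enters a disc of radius equal to the sum of the radii. The sketch below implements only this collision-prediction test, not the paper's reciprocal low-dimensional linear program.

```python
import numpy as np

def time_to_collision(p_a, v_a, r_a, p_b, v_b, r_b):
    """First time t >= 0 at which discs a and b overlap, or None if they
    never do. Solves ||p - v t|| = r for p = p_b - p_a, v = v_a - v_b,
    r = r_a + r_b, i.e. (v.v) t^2 - 2 (p.v) t + (p.p - r^2) = 0."""
    p = np.asarray(p_b, float) - np.asarray(p_a, float)
    v = np.asarray(v_a, float) - np.asarray(v_b, float)
    r = r_a + r_b
    a, b, c = v @ v, -2 * (p @ v), p @ p - r * r
    if c <= 0:
        return 0.0                      # already overlapping
    disc = b * b - 4 * a * c
    if a == 0 or disc < 0:
        return None                     # parallel drift or a miss
    t = (-b - np.sqrt(disc)) / (2 * a)  # smaller root = first contact
    return t if t >= 0 else None

# Head-on approach: gap of 2 closes at speed 1, so contact at t = 2.
t_hit = time_to_collision((0, 0), (1, 0), 1, (4, 0), (0, 0), 1)
t_miss = time_to_collision((0, 0), (-1, 0), 1, (4, 0), (0, 0), 1)
```

A relative velocity for which this function returns a finite time lies inside the velocity obstacle; ORCA's contribution is choosing a new velocity outside it via a linear program, with each robot taking half the responsibility for avoidance.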
Secure and privacy preserving keyword searching for cloud storage services Cloud storage services enable users to remotely access data in a cloud anytime and anywhere, using any device, in a pay-as-you-go manner. Moving data into a cloud offers great convenience to users since they do not have to care about the large capital investment in both the deployment and management of the hardware infrastructures. However, allowing a cloud service provider (CSP), whose purpose is mainly for making a profit, to take the custody of sensitive data, raises underlying security and privacy issues. To keep user data confidential against an untrusted CSP, a natural way is to apply cryptographic approaches, by disclosing the data decryption key only to authorized users. However, when a user wants to retrieve files containing certain keywords using a thin client, the adopted encryption system should not only support keyword searching over encrypted data, but also provide high performance. In this paper, we investigate the characteristics of cloud storage services and propose a secure and privacy preserving keyword searching (SPKS) scheme, which allows the CSP to participate in the decipherment, and to return only files containing certain keywords specified by the users, so as to reduce both the computational and communication overhead in decryption for users, on the condition of preserving user data privacy and user querying privacy. Performance analysis shows that the SPKS scheme is applicable to a cloud environment.
Collaborative Mobile Charging The limited battery capacity of sensor nodes has become one of the most critical impediments that stunt the deployment of wireless sensor networks (WSNs). Recent breakthroughs in wireless energy transfer and rechargeable lithium batteries provide a promising alternative to power WSNs: mobile vehicles/robots carrying high volume batteries serve as mobile chargers to periodically deliver energy to sensor nodes. In this paper, we consider how to schedule multiple mobile chargers to optimize energy usage effectiveness, such that every sensor will not run out of energy. We introduce a novel charging paradigm, collaborative mobile charging, where mobile chargers are allowed to intentionally transfer energy between themselves. To provide some intuitive insights into the problem structure, we first consider a scenario that satisfies three conditions, and propose a scheduling algorithm, PushWait, which is proven to be optimal and can cover a one-dimensional WSN of infinite length. Then, we remove the conditions one by one, investigating chargers' scheduling in a series of scenarios ranging from the most restricted one to a general 2D WSN. Through theoretical analysis and simulations, we demonstrate the advantages of the proposed algorithms in energy usage effectiveness and charging coverage.
Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network. In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (including data from 63 patients with 34,281 events) and testing (including data from 19 patients with 8571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F-score of 0.99 were attained with the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
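The building block the abstract describes, 1D convolution followed by ReLU and max pooling, can be sketched as a single numpy forward pass. This is one toy layer with made-up kernel values, not the paper's six-layer model.

```python
import numpy as np

def conv1d_relu_maxpool(x, kernels, pool=2):
    """One forward block: valid 1-D convolution (cross-correlation, as in
    deep-learning usage), ReLU, then non-overlapping max pooling."""
    k = kernels.shape[1]
    out_len = len(x) - k + 1
    conv = np.array([[kernels[f] @ x[i:i + k] for i in range(out_len)]
                     for f in range(kernels.shape[0])])
    act = np.maximum(conv, 0.0)                         # ReLU
    trim = (act.shape[1] // pool) * pool                # drop ragged tail
    pooled = act[:, :trim].reshape(kernels.shape[0], -1, pool).max(axis=2)
    return pooled

# Toy "ECG" segment and two hand-picked filters: an edge detector
# and a local averager (illustrative values, not learned weights).
x = np.array([0., 1., 2., 3., 2., 1., 0., 1.])
kernels = np.array([[1., -1., 0.],
                    [0.5, 0.5, 0.5]])
out = conv1d_relu_maxpool(x, kernels)   # shape: (n_filters, out_len // pool)
```

Stacking several such blocks and ending with a small classifier head gives the general shape of the architecture described, with dropout layers interleaved during training.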
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education is highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets, to compensate for robot’s limitations, however, this study applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. Robot’s behavioral style (social or neutral) hardly differed overall, however, seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
1.24
0.24
0.24
0.24
0.24
0.08
0.02
0
0
0
0
0
0
0
What-and-Where to Match: Deep Spatially Multiplicative Integration Networks for Person Re-identification. Highlights: (1) a novel deep architecture to emphasize common local patterns is proposed to learn flexible joint representations for person re-identification; (2) the proposed method introduces a multiplicative integration gating function to embed two convolutional features into their joint representations, which are effective in discriminating positive pairs from negative pairs; (3) spatial dependencies are incorporated into feature learning to address cross-view misalignment; (4) extensive experiments and empirical analysis are provided in the experimental part.
Attention-based LSTM for Aspect-level Sentiment Classification.
Detection Of Obstructive Sleep Apnoea By Ecg Signals Using Deep Learning Architectures Obstructive Sleep Apnoea (OSA) is a breathing disorder that happens during sleep and general anaesthesia. This disorder can affect human life considerably. Early detection of OSA can protect human health from different diseases including cardiovascular diseases which may lead to sudden death. OSA is examined by physicians using Electrocardiography (ECG) signals, Electromyogram (EMG), Electroencephalogram (EEG), Electrooculography (EOG) and oxygen saturation. Previous studies of detecting OSA are focused on using feature engineering where a specific number of features from ECG signals are selected as an input to the machine learning model. In this study, we focus on detecting OSA from ECG signals where our proposed machine learning methods automatically extract the input as features from ECG signals. We proposed three architectures of deep learning approaches in this study: CNN, CNN with LSTM and CNN with GRU. These architectures utilized consecutive R interval and QRS complex amplitudes as inputs. Thirty-five recordings from PhysioNet Apnea-ECG database have been used to evaluate our models. Experimental results show that our architecture of CNN with LSTM performed best for OSA detection. The average classification accuracy, sensitivity and specificity achieved in this study are 89.11%, 89.91% and 87.78% respectively.
An Automatic Screening Approach for Obstructive Sleep Apnea Diagnosis Based on Single-Lead Electrocardiogram Traditional approaches for obstructive sleep apnea (OSA) diagnosis tend to use multiple channels of physiological signals to detect apnea events by dividing the signals into equal-length segments, which may lead to incorrect apnea event detection and weaken the performance of OSA diagnosis. This paper proposes an automatic-segmentation-based screening approach with a single channel of electrocardiogram (ECG) signal for OSA subject diagnosis, and the main work of the proposed approach lies in three aspects: (i) an automatic signal segmentation algorithm is adopted for signal segmentation instead of the equal-length segmentation rule; (ii) a local median filter is improved for reduction of the unexpected RR intervals before signal segmentation; (iii) the designed OSA severity index and additional admission information of OSA suspects are plugged into a support vector machine (SVM) for OSA subject diagnosis. A real clinical example from the PhysioNet database is provided to validate the proposed approach, and an average accuracy of 97.41% for subject diagnosis is obtained, which demonstrates the effectiveness for OSA diagnosis.
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on the learnable activation and advanced initialization, we achieve 4.94% top-5 test error on the ImageNet 2012 classification dataset. This is a 26% relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66% [33]). To our knowledge, our result is the first to surpass the reported human-level performance (5.1%, [26]) on this dataset.
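The two contributions described above are compact enough to state directly: PReLU is f(x) = x for x > 0 and f(x) = a·x otherwise, with a learnable slope a, and the initialization draws weights from N(0, 2/fan_in) to keep activation variance stable through rectified layers. The sketch below fixes a = 0.25 for illustration rather than learning it.

```python
import numpy as np

def prelu(x, a=0.25):
    """Parametric ReLU: identity on the positive side, slope `a` on the
    negative side. In the paper `a` is a learnable per-channel parameter;
    here it is fixed for illustration (a = 0 recovers plain ReLU)."""
    return np.where(x > 0, x, a * x)

def he_normal(fan_in, fan_out, rng):
    """He ('Kaiming') initialization derived for rectifier nonlinearities:
    weights ~ N(0, 2 / fan_in)."""
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))

rng = np.random.default_rng(0)
W = he_normal(256, 128, rng)            # one layer's weight matrix
y = prelu(np.array([-2.0, 0.0, 3.0]))   # -> [-0.5, 0.0, 3.0]
```

The factor 2 (versus 1 in Xavier/Glorot initialization) compensates for rectifiers zeroing out half of their inputs, which is what lets very deep rectified models train from scratch.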
Completely derandomized self-adaptation in evolution strategies. This paper puts forward two useful methods for self-adaptation of the mutation distribution - the concepts of derandomization and cumulation. Principal shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary, normal mutation distributions is equivalent to applying a general, linear problem encoding. The underlying objective of mutative strategy parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigorously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation - utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions usually a speed up factor of several orders of magnitude is observed. On moderately mis-scaled functions a speed up factor of three to ten can be expected.
Hiding Traces of Resampling in Digital Images Resampling detection has become a standard tool for forensic analyses of digital images. This paper presents new variants of image transformation operations which are undetectable by resampling detectors based on periodic variations in the residual signal of local linear predictors in the spatial domain. The effectiveness of the proposed method is supported with evidence from experiments on a large image database for various parameter settings. We benchmark detectability as well as the resulting image quality against conventional linear and bicubic interpolation and interpolation with a sinc kernel. These early findings on "counter-forensic" techniques put into question the reliability of known forensic tools against smart counterfeiters in general, and might serve as benchmarks and motivation for the development of much improved forensic techniques.
New approach using ant colony optimization with ant set partition for fuzzy control design applied to the ball and beam system. In this paper we describe the design of a fuzzy logic controller for the ball and beam system using a modified Ant Colony Optimization (ACO) method for optimizing the type of membership functions, the parameters of the membership functions and the fuzzy rules. This is achieved by applying a systematic and hierarchical optimization approach modifying the conventional ACO algorithm using an ant set partition strategy. The simulation results show that the proposed algorithm achieves better results than the classical ACO algorithm for the design of the fuzzy controller.
Image analogies This paper describes a new framework for processing images by example, called “image analogies.” The framework involves two stages: a design phase, in which a pair of images, with one image purported to be a “filtered” version of the other, is presented as “training data”; and an application phase, in which the learned filter is applied to some new target image in order to create an “analogous” filtered result. Image analogies are based on a simple multi-scale autoregression, inspired primarily by recent results in texture synthesis. By choosing different types of source image pairs as input, the framework supports a wide variety of “image filter” effects, including traditional image filters, such as blurring or embossing; improved texture synthesis, in which some textures are synthesized with higher quality than by previous approaches; super-resolution, in which a higher-resolution image is inferred from a low-resolution source; texture transfer, in which images are “texturized” with some arbitrary source texture; artistic filters, in which various drawing and painting styles are synthesized based on scanned real-world examples; and texture-by-numbers, in which realistic scenes, composed of a variety of textures, are created using a simple painting interface.
Communication in reactive multiagent robotic systems Multiple cooperating robots are able to complete many tasks more quickly and reliably than one robot alone. Communication between the robots can multiply their capabilities and effectiveness, but to what extent? In this research, the importance of communication in robotic societies is investigated through experiments on both simulated and real robots. Performance was measured for three different types of communication for three different tasks. The levels of communication are progressively more complex and potentially more expensive to implement. For some tasks, communication can significantly improve performance, but for others inter-agent communication is apparently unnecessary. In cases where communication helps, the lowest level of communication is almost as effective as the more complex type. The bulk of these results are derived from thousands of simulations run with randomly generated initial conditions. The simulation results help determine appropriate parameters for the reactive control system which was ported for tests on Denning mobile robots.
Gravity-Balancing Leg Orthosis and Its Performance Evaluation In this paper, we propose a device to assist persons with hemiparesis to walk by reducing or eliminating the effects of gravity. The design of the device includes the following features: 1) it is passive, i.e., it does not include motors or actuators, but is only composed of links and springs; 2) it is safe and has a simple patient-machine interface to accommodate variability in geometry and inertia of the subjects. A number of methods have been proposed in the literature to gravity-balance a machine. Here, we use a hybrid method to achieve gravity balancing of a human leg over its range of motion. In the hybrid method, a mechanism is used to first locate the center of mass of the human limb and the orthosis. Springs are then added so that the system is gravity-balanced in every configuration. For a quantitative evaluation of the performance of the device, electromyographic (EMG) data of the key muscles, involved in the motion of the leg, were collected and analyzed. Further experiments involving leg-raising and walking tasks were performed, where data from encoders and force-torque sensors were used to compute joint torques. These experiments were performed on five healthy subjects and a stroke patient. The results showed that the EMG activity from the rectus femoris and hamstring muscles with the device was reduced by 75%, during static hip and knee flexion, respectively. For leg-raising tasks, the average torque for static positioning was reduced by 66.8% at the hip joint and 47.3% at the knee joint; however, if we include the transient portion of the leg-raising task, the average torque at the hip was reduced by 61.3%, and at the knee was increased by 2.7% at the knee joints. In the walking experiment, there was a positive impact on the range of movement at the hip and knee joints, especially for the stroke patient: the range of movement increased by 45% at the hip joint and by 85% at the knee joint. We believe that this orthosis can be potentially used to design rehabilitation protocols for patients with stroke.
Internet of Things for Smart Cities The Internet of Things (IoT) shall be able to incorporate transparently and seamlessly a large number of different and heterogeneous end systems, while providing open access to selected subsets of data for the development of a plethora of digital services. Building a general architecture for the IoT is hence a very complex task, mainly because of the extremely large variety of devices, link layer technologies, and services that may be involved in such a system. In this paper, we focus specifically on urban IoT systems, which, while still quite a broad category, are characterized by their specific application domain. Urban IoTs, in fact, are designed to support the Smart City vision, which aims at exploiting the most advanced communication technologies to support added-value services for the administration of the city and for the citizens. This paper hence provides a comprehensive survey of the enabling technologies, protocols, and architecture for an urban IoT. Furthermore, the paper will present and discuss the technical solutions and best-practice guidelines adopted in the Padova Smart City project, a proof-of-concept deployment of an IoT island in the city of Padova, Italy, performed in collaboration with the city municipality.
Collective feature selection to identify crucial epistatic variants. In this study, we were able to show that selecting variables using a collective feature selection approach could help in selecting true positive epistatic variables more frequently than applying any single method for feature selection via simulation studies. We were able to demonstrate the effectiveness of collective feature selection along with a comparison of many methods in our simulation analysis. We also applied our method to identify non-linear networks associated with obesity.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores (score_0 to score_13): 1.24, 0.24, 0.24, 0.08, 0.016, 0, 0, 0, 0, 0, 0, 0, 0, 0
Traveling Salesman Problems with Profits Traveling salesman problems with profits (TSPs with profits) are a generalization of the traveling salesman problem (TSP), where it is not necessary to visit all vertices. A profit is associated with each vertex. The overall goal is the simultaneous optimization of the collected profit and the travel costs. These two optimization criteria appear either in the objective function or as a constraint. In this paper, a classification of TSPs with profits is proposed, and the existing literature is surveyed. Different classes of applications, modeling approaches, and exact or heuristic solution techniques are identified and compared. Conclusions emphasize the interest of this class of problems, with respect to applications as well as theoretical results.
Breath: An Adaptive Protocol for Industrial Control Applications Using Wireless Sensor Networks An energy-efficient, reliable and timely data transmission is essential for Wireless Sensor Networks (WSNs) employed in scenarios where plant information must be available for control applications. To reach a maximum efficiency, cross-layer interaction is a major design paradigm to exploit the complex interaction among the layers of the protocol stack. This is challenging because latency, reliability, and energy are at odds, and resource-constrained nodes support only simple algorithms. In this paper, the novel protocol Breath is proposed for control applications. Breath is designed for WSNs where nodes attached to plants must transmit information via multihop routing to a sink. Breath ensures a desired packet delivery and delay probabilities while minimizing the energy consumption of the network. The protocol is based on randomized routing, medium access control, and duty-cycling jointly optimized for energy efficiency. The design approach relies on a constrained optimization problem, whereby the objective function is the energy consumption and the constraints are the packet reliability and delay. The challenging part is the modeling of the interactions among the layers by simple expressions of adequate accuracy, which are then used for the optimization by in-network processing. The optimal working point of the protocol is achieved by a simple algorithm, which adapts to traffic variations and channel conditions with negligible overhead. The protocol has been implemented and experimentally evaluated on a testbed with off-the-shelf wireless sensor nodes, and it has been compared with a standard IEEE 802.15.4 solution. Analytical and experimental results show that Breath is tunable and meets reliability and delay requirements. Breath exhibits a good distribution of the working load, thus ensuring a long lifetime of the network. Therefore, Breath is a good candidate for efficient, reliable, and timely data gathering for control applications.
Mathematical Evaluation of Environmental Monitoring Estimation Error through Energy-Efficient Wireless Sensor Networks In this paper, the estimation of a scalar field over a bidimensional scenario (e.g., the atmospheric pressure in a wide area) through a self-organizing wireless sensor network (WSN) with energy constraints is investigated. The sensor devices (denoted as nodes) are randomly distributed; they transmit samples to a supervisor by using a clustered network. This paper provides a mathematical framework to analyze the interdependent aspects of WSN communication protocol and signal processing design. Channel modelling and connectivity issues, multiple access control and routing, and the role of distributed digital signal processing (DDSP) techniques are accounted for. The possibility that nodes perform DDSP is studied through a distributed compression technique based on signal resampling. The DDSP impact on network energy efficiency is compared through a novel mathematical approach to the case where the processing is performed entirely by the supervisor. The trade-off between energy conservation (i.e., network lifetime) and estimation error is discussed and a design criterion is proposed as well. Comparison to simulation outcomes validates the model. As an example result, the required node density is found as a trade-off between estimation quality and network lifetime for different system parameters and scalar field characteristics. It is shown that both the DDSP technique and the MAC protocol choice have a relevant impact on the performance of a WSN.
Geometric Analysis of Energy Saving for Directional Charging in WRSNs Wireless power transfer (WPT) enables a reliable and convenient charging paradigm. This article concerns the fundamental issue of energy saving in wireless rechargeable sensor networks (WRSNs), i.e., given a fixed number of rechargeable sensors (RSs) with their locations and charging demands, we focus on a minimal charging expenditure (MAP) problem with directional WPT to decrease the energy expenditure of the charger, on condition that the charging demands of all sensors are satisfied. In particular, we consider the anisotropic energy receiving property of RSs, which is closely related to the distance and the angle between the sensor and the charger antenna's orientation in directional WPT. We transform the MAP problem into an optimal function placement (OFA) problem, which can be geometrically analyzed in a rectangular coordinate system and is NP-hard. First, we study the OFA problem in the case of uniformly distributed sensors with identical charging demands, we develop the uniform charging strategy (UCS) to bound the total charging expenditure as Θ(1) for any number of sensors N. Based on the acquired insights, we further studied the OFA problem when the distribution of charging demands is Gaussian. We bound the total charging expenditure as Θ(1) for any number of sensors N, by developing the layered charging strategy (LCS). Extensive simulation results confirmed the performance of our design compared with two baseline algorithms. Both of the theoretical and simulations results reveal that the total energy expenditure of the charger is strongly related to the sensors' charging demands, however, is less affected by the number of sensors in the network.
Near Optimal Charging Scheduling for 3-D Wireless Rechargeable Sensor Networks with Energy Constraints Wireless Rechargeable Sensor Network (WRSN) becomes a hot research issue in recent years owing to the breakthrough of wireless power transfer technology. Most prior arts concentrate on developing scheduling schemes in 2-D networks where mobile chargers are placed on the ground. However, few of them are suitable for 3-D scenarios, making it difficult or even impossible to popularize in practical applications. In this paper, we focus on the problem of charging a 3-D WRSN with an Unmanned Aerial Vehicle (UAV) to maximize charged energy within energy constraints. To deal with the problem, we propose a spatial discretization scheme to obtain a finite feasible charging spot set for UAV in 3-D environment and a temporal discretization scheme to determine charging duration for each charging spot. Then, we transform the problem into a submodular maximization problem with routing constraints, and present a cost-efficient approximation algorithm with a provable approximation ratio of ((e-1)/4e)(1-ε) to solve it. Lastly, extensive simulations and test-bed experiments show the superior performance of our algorithm.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil attacks are always possible except under extreme and unrealistic assumptions of resource parity and coordination among entities.
A fast and elitist multiobjective genetic algorithm: NSGA-II Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their O(MN3) computational complexity (where M is the number of objectives and N is the population size); (2) their non-elitism approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all of the above three difficulties. Specifically, a fast non-dominated sorting approach with O(MN2) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best N solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm - two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed
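The O(MN^2) fast non-dominated sorting at the core of NSGA-II can be sketched as follows (minimization assumed; variable names are illustrative, not taken from the paper):

```python
def dominates(p, q):
    """p dominates q if p is no worse in every objective and strictly
    better in at least one (minimization)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def fast_nondominated_sort(pop):
    """Return the successive Pareto fronts (as index lists into pop)."""
    S = [[] for _ in pop]   # solutions each point dominates
    n = [0] * len(pop)      # how many points dominate each point
    fronts = [[]]
    for i, p in enumerate(pop):
        for j, q in enumerate(pop):
            if dominates(p, q):
                S[i].append(j)
            elif dominates(q, p):
                n[i] += 1
        if n[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:          # peeling off a front exposes the next one
            for j in S[i]:
                n[j] -= 1
                if n[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

print(fast_nondominated_sort([(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]))
```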
Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task.
Latent dirichlet allocation We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
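The three-level generative process behind LDA can be sketched directly (a toy forward sampler, not the variational inference described in the abstract; the function name and tiny vocabulary are illustrative):

```python
import numpy as np

def lda_generate(n_docs, n_words, vocab, alpha, beta, rng):
    """Sample documents from the LDA generative process: each topic draws a
    word distribution phi ~ Dir(beta); each document draws a topic mixture
    theta ~ Dir(alpha); each word draws a topic z ~ theta, then a word
    from phi[z]."""
    K = len(alpha)
    phi = rng.dirichlet(beta, size=K)      # K topic-word distributions
    docs = []
    for _ in range(n_docs):
        theta = rng.dirichlet(alpha)       # per-document topic mixture
        words = []
        for _ in range(n_words):
            z = rng.choice(K, p=theta)     # topic assignment for this word
            words.append(vocab[rng.choice(len(vocab), p=phi[z])])
        docs.append(words)
    return docs

rng = np.random.default_rng(0)
print(lda_generate(2, 5, ["cat", "dog", "fish"], [1.0, 1.0], [1.0, 1.0, 1.0], rng))
```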
A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization In recent years, there has been a growing interest for the experimental analysis in the field of evolutionary algorithms. It is noticeable due to the existence of numerous papers which analyze and propose different types of problems, such as the basis for experimental comparisons of algorithms, proposals of different methodologies in comparison or proposals of use of different statistical techniques in algorithms' comparison. In this paper, we focus our study on the use of statistical techniques in the analysis of evolutionary algorithms' behaviour over optimization problems. A study about the required conditions for statistical analysis of the results is presented by using some models of evolutionary algorithms for real-coding optimization. This study is conducted in two ways: single-problem analysis and multiple-problem analysis. The results obtained state that a parametric statistical analysis could not be appropriate specially when we deal with multiple-problem results. In multiple-problem analysis, we propose the use of non-parametric statistical tests given that they are less restrictive than parametric ones and they can be used over small size samples of results. As a case study, we analyze the published results for the algorithms presented in the CEC'2005 Special Session on Real Parameter Optimization by using non-parametric test procedures.
A Web-Based Tool For Control Engineering Teaching In this article a new tool for control engineering teaching is presented. The tool was implemented using Java applets and is freely accessible through Web. It allows the analysis and simulation of linear control systems and was created to complement the theoretical lectures in basic control engineering courses. The article is not only centered in the description of the tool but also in the methodology to use it and its evaluation in an electrical engineering degree. Two practical problems are included in the manuscript to illustrate the use of the main functions implemented. The developed web-based tool can be accessed through the link http://www.controlweb.cyc.ull.es. (C) 2006 Wiley Periodicals, Inc.
Beamforming for MISO Interference Channels with QoS and RF Energy Transfer We consider a multiuser multiple-input single-output interference channel where the receivers are characterized by both quality-of-service (QoS) and radio-frequency (RF) energy harvesting (EH) constraints. We consider the power splitting RF-EH technique where each receiver divides the received signal into two parts a) for information decoding and b) for battery charging. The minimum required power that supports both the QoS and the RF-EH constraints is formulated as an optimization problem that incorporates the transmitted power and the beamforming design at each transmitter as well as the power splitting ratio at each receiver. We consider both the cases of fixed beamforming and when the beamforming design is incorporated into the optimization problem. For fixed beamforming we study three standard beamforming schemes, the zero-forcing (ZF), the regularized zero-forcing (RZF) and the maximum ratio transmission (MRT); a hybrid scheme, MRT-ZF, comprised of a linear combination of MRT and ZF beamforming is also examined. The optimal solution for ZF beamforming is derived in closed-form, while optimization algorithms based on second-order cone programming are developed for MRT, RZF and MRT-ZF beamforming to solve the problem. In addition, the joint-optimization of beamforming and power allocation is studied using semidefinite programming (SDP) with the aid of rank relaxation.
An evolutionary programming approach for securing medical images using watermarking scheme in invariant discrete wavelet transformation. •The proposed watermarking scheme utilized improved discrete wavelet transformation (IDWT) to retrieve the invariant wavelet domain.•The entropy mechanism is used to identify the suitable region for insertion of watermark. This will improve the imperceptibility and robustness of the watermarking procedure.•The scaling factors such as PSNR and NC are considered for evaluation of the proposed method and the Particle Swarm Optimization is employed to optimize the scaling factors.
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies focused on the structure design and assistance force optimization of the soft LLEs, rarely work has been conducted on the hardware circuits design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and introduce its hardware circuits design. A soft LLE for hip flexion assistance and a hardware circuits system with scalability were proposed. To assess the efficacy of the soft LLE, the experimental tests that evaluate the sensor data acquisition, force tracking performance, lower limb muscle activity and metabolic cost were conducted. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in exoskeleton on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and the soft LLE is able to improve walking efficiency of wearers.
Scores (score_0 to score_13): 1.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0, 0
Diegetic Representations for Seamless Cross-Reality Interruptions The closed design of virtual reality (VR) head-mounted displays substantially limits users’ awareness of their real-world surroundings. This presents challenges when another person in the same physical space needs to interrupt the VR user for a brief conversation. Such interruptions, e.g., tapping a VR user on the shoulder, can cause a disruptive break in presence (BIP), which affects their place ...
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil attacks are always possible except under extreme and unrealistic assumptions of resource parity and coordination among entities.
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
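The core of BLEU is modified (clipped) n-gram precision; a minimal sketch of that single component (not the full metric, which also combines n = 1..4 and applies a brevity penalty):

```python
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def modified_precision(candidate, references, n):
    """Candidate n-gram counts are clipped by the maximum count of that
    n-gram in any single reference, then divided by the total number of
    candidate n-grams."""
    cand = ngrams(candidate, n)
    max_ref = Counter()
    for ref in references:
        for g, c in ngrams(ref, n).items():
            max_ref[g] = max(max_ref[g], c)
    clipped = sum(min(c, max_ref[g]) for g, c in cand.items())
    return clipped / max(sum(cand.values()), 1)

# A degenerate candidate is clipped: "the" occurs at most twice in the reference.
cand = "the the the the the the the".split()
refs = [["the", "cat", "is", "on", "the", "mat"]]
print(modified_precision(cand, refs, 1))   # 2/7
```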
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
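The bidirectional structure above can be sketched with two plain tanh RNNs, one run over the reversed sequence, with their states concatenated per step (a minimal NumPy sketch; dimensions and names are illustrative, not from the paper):

```python
import numpy as np

def rnn_pass(xs, Wx, Wh, b):
    """One directional tanh RNN; returns the hidden state at every step."""
    h = np.zeros(Wh.shape[0])
    states = []
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h + b)
        states.append(h)
    return states

def brnn_pass(xs, fwd_params, bwd_params):
    """Bidirectional pass: forward states and (re-reversed) backward states
    are concatenated, so each step sees both past and future context."""
    hf = rnn_pass(xs, *fwd_params)
    hb = rnn_pass(xs[::-1], *bwd_params)[::-1]
    return [np.concatenate([f, b]) for f, b in zip(hf, hb)]

rng = np.random.default_rng(0)
make = lambda: (rng.normal(size=(4, 3)), rng.normal(size=(4, 4)), np.zeros(4))
xs = [rng.normal(size=3) for _ in range(5)]
out = brnn_pass(xs, make(), make())
print(len(out), out[0].shape)   # 5 steps, each with an 8-dim combined state
```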
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended to Bob, and non-repudiation of receipt evidences destined to Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with trusted third party (TTP). For the later ones we discuss the evolution of the TTP's involvement and, between others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probability. Because there are no crossover rate and mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than the conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results over the conventional one and other methods.
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
score_0–score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Energy-efficient scheduling of small cells in 5G: A meta-heuristic approach Scheduling of small cells in the Fifth-Generation (5G) mobile network is highly important for achieving energy efficiency and providing Quality of Service (QoS) to application users. Minimization of energy consumption hampers QoS. This problem is further complicated by the exponential increase of mobile application users demanding high data rates. The performance of energy-saving approaches in the literature is limited by the fact that they exploit merely historical, data-driven two-state operation modes of small cells. This paper formulates the problem of scheduling small cells as a non-linear optimization problem. It then offers a meta-heuristic evolutionary algorithm to solve the problem in polynomial time. The proposed algorithm takes into account four operation states of small cells to minimize the energy consumption while satisfying the users' QoS. The results of our performance analysis show that the proposed algorithm outperforms the state-of-the-art works in terms of energy saving, switching delay, etc.
Tabu search based multi-watermarks embedding algorithm with multiple description coding Digital watermarking is a useful solution for digital rights management systems, and it has been a popular research topic in the last decade. Most watermarking related literature focuses on how to resist deliberate attacks by applying benchmarks to watermarked media that assess the effectiveness of the watermarking algorithm. Only a few papers have concentrated on the error-resilient transmission of watermarked media. In this paper, we propose an innovative algorithm for vector quantization (VQ) based image watermarking, which is suitable for error-resilient transmission over noisy channels. By incorporating watermarking with multiple description coding (MDC), the scheme we propose to embed multiple watermarks can effectively overcome channel impairments while retaining the capability for copyright and ownership protection. In addition, we employ an optimization technique, called tabu search, to optimize both the watermarked image quality and the robustness of the extracted watermarks. We have obtained promising simulation results that demonstrate the utility and practicality of our algorithm.
On Deployment of Wireless Sensors on 3-D Terrains to Maximize Sensing Coverage by Utilizing Cat Swarm Optimization With Wavelet Transform. In this paper, a deterministic sensor deployment method based on the wavelet transform (WT) is proposed. It aims to maximize the quality of coverage of a wireless sensor network while deploying a minimum number of sensors on a 3-D surface. For this purpose, a probabilistic sensing model and Bresenham's line-of-sight algorithm are utilized. The WT is realized by an adaptive thresholding approach for the generation of the initial population. Another novel aspect of the paper is its use of a Cat Swarm Optimization (CSO) algorithm, which mimics the behavior of cats. We have modified the CSO algorithm so that it can be used for sensor deployment problems on 3-D terrains. The performance of the proposed algorithm is compared with Delaunay Triangulation- and Genetic Algorithm-based methods. The results reveal that CSO-based sensor deployment utilizing the wavelet transform is a powerful and successful method for sensor deployment on 3-D terrains.
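The line-of-sight check mentioned in the abstract above can be sketched with Bresenham's classic grid-traversal algorithm. This is not the paper's implementation: the height map, the `eye` offset, and the comparison of each traversed cell against the interpolated sight line are illustrative assumptions; only the Bresenham traversal itself is the standard algorithm.

```python
def bresenham(x0, y0, x1, y1):
    """Integer grid cells visited by Bresenham's line from (x0, y0) to (x1, y1)."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return cells

def line_of_sight(heightmap, a, b, eye=1.0):
    """True if no intermediate terrain cell rises above the straight line
    between the two (slightly elevated) endpoints a and b."""
    (x0, y0), (x1, y1) = a, b
    cells = bresenham(x0, y0, x1, y1)
    z0 = heightmap[y0][x0] + eye
    z1 = heightmap[y1][x1] + eye
    n = len(cells) - 1
    for i, (x, y) in enumerate(cells[1:-1], start=1):
        z_line = z0 + (z1 - z0) * i / n     # height of the sight line here
        if heightmap[y][x] > z_line:
            return False
    return True
```

A real 3-D coverage model would combine such visibility tests with the probabilistic sensing model; this sketch only shows the visibility part.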
FPGA-Based Parallel Metaheuristic PSO Algorithm and Its Application to Global Path Planning for Autonomous Robot Navigation This paper presents a field-programmable gate array (FPGA)-based parallel metaheuristic particle swarm optimization algorithm (PPSO) and its application to global path planning for autonomous robot navigating in structured environments with obstacles. This PPSO consists of three parallel PSOs along with a communication operator in one FPGA chip. The parallel computing architecture takes advantages of maintaining better population diversity and inhibiting premature convergence in comparison with conventional PSOs. The collision-free discontinuous path generated from the PPSO planner is then smoothed using the cubic B-spline and system-on-a-programmable-chip (SoPC) technology. Experimental results are conducted to show the merit of the proposed FPGA-based PPSO path planner and smoother for global path planning of autonomous mobile robot navigation.
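As a rough illustration of the PSO core that the FPGA design above parallelizes, here is a minimal single-swarm global-best PSO in plain Python. The paper's triple-swarm architecture, communication operator, and B-spline path smoothing are not reproduced, and all parameter values and the toy objective are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal global-best PSO minimizing f over the box [lo, hi]^dim."""
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # personal best positions
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()      # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # inertia + cognitive pull toward pbest + social pull toward gbest
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# minimize the 2-D sphere function as a stand-in for a path-cost objective
best, val = pso(lambda p: float(np.sum(p ** 2)), dim=2)
```

In a path-planning setting, `f` would instead score a candidate path (length plus collision penalties), which is where running several such swarms in parallel on one chip pays off.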
QUasi-Affine TRansformation Evolution with External ARchive (QUATRE-EAR): An enhanced structure for Differential Evolution Optimization demands are ubiquitous in science and engineering. The key point is that the approach to tackle a complex optimization problem should not itself be difficult. Differential Evolution (DE) is such a simple method, and it is arguably a very powerful stochastic real-parameter algorithm for single-objective optimization. However, the performance of DE is highly dependent on control parameters and mutation strategies. Both tuning the control parameters and selecting the proper mutation strategy are still tedious but important tasks for users. In this paper, we proposed an enhanced structure for DE algorithm with less control parameters to be tuned. The crossover rate control parameter Cr is replaced by an automatically generated evolution matrix and the control parameter F can be renewed in an adaptive manner during the whole evolution. Moreover, an enhanced mutation strategy with time stamp mechanism is advanced as well in this paper. CEC2013 test suite for real-parameter single objective optimization is employed in the verification of the proposed algorithm. Experiment results show that our proposed algorithm is competitive with several well-known DE variants.
Hybrid Bird Swarm Optimized Quasi Affine Algorithm Based Node Location in Wireless Sensor Networks Wireless sensor networks (WSN) with the Internet of Things (IoT) play a vital role in the information transmission process. WSN with IoT has been effectively utilized in different research areas such as network protocol selection, topology control, node deployment, location technology, and network security. Among these, node location is one of the crucial problems that needs to be resolved to improve communication. Node location directly influences network performance, lifetime, and data sensing. Therefore, this paper introduces the Bird Swarm Optimized Quasi-Affine Evolutionary Algorithm (BSOQAEA) to fix the node location problem in sensor networks. The proposed algorithm analyzes the node location and incorporates a dynamic shrinking-space process to save time. The introduced evolutionary algorithm optimizes the node centroid location according to the received signal strength indications (RSSI). The efficiency of the system is evaluated in terms of high node location accuracy, minimum distance error, and location error.
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Adam: A Method for Stochastic Optimization. We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.
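The update rule summarized in the abstract above can be sketched directly from its description: exponential moving averages of the gradient and its square, bias correction, then a rescaled step. The default hyper-parameters below are the commonly stated ones; the toy objective and the learning rate of 0.05 are illustrative choices, not from the abstract.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update from adaptive estimates of the first two moments."""
    m = beta1 * m + (1 - beta1) * grad        # biased first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # biased second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)              # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# minimize f(x) = x^2 starting from x = 5
theta = np.array([5.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 2001):                      # t starts at 1 for bias correction
    grad = 2.0 * theta                        # gradient of x^2
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
```

Note the invariance to gradient scale the abstract mentions: multiplying `grad` by a constant rescales both `m_hat` and `sqrt(v_hat)` equally, leaving the step size essentially unchanged.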
Untangling Blockchain: A Data Processing View of Blockchain Systems. Blockchain technologies are gaining massive momentum in the last few years. Blockchains are distributed ledgers that enable parties who do not fully trust each other to maintain a set of global states. The parties agree on the existence, values, and histories of the states. As the technology landscape is expanding rapidly, it is both important and challenging to have a firm grasp of what the core ...
Multivariate Short-Term Traffic Flow Forecasting Using Time-Series Analysis Existing time-series models that are used for short-term traffic condition forecasting are mostly univariate in nature. Generally, the extension of existing univariate time-series models to a multivariate regime involves huge computational complexities. A different class of time-series models called structural time-series model (STM) (in its multivariate form) has been introduced in this paper to develop a parsimonious and computationally simple multivariate short-term traffic condition forecasting algorithm. The different components of a time-series data set such as trend, seasonal, cyclical, and calendar variations can separately be modeled in STM methodology. A case study at the Dublin, Ireland, city center with serious traffic congestion is performed to illustrate the forecasting strategy. The results indicate that the proposed forecasting algorithm is an effective approach in predicting real-time traffic flow at multiple junctions within an urban transport network.
A novel full structure optimization algorithm for radial basis probabilistic neural networks. In this paper, a novel full structure optimization algorithm for radial basis probabilistic neural networks (RBPNN) is proposed. Firstly, a minimum volume covering hyperspheres (MVCH) algorithm is proposed to heuristically select the initial hidden layer centers of the RBPNN, and then the recursive orthogonal least square (ROLS) algorithm combined with the particle swarm optimization (PSO) algorithm is adopted to further optimize the initial structure of the RBPNN. Finally, the effectiveness and efficiency of our proposed algorithm are evaluated through a plant species identification task involving 50 plant species.
G2-type SRMPC scheme for synchronous manipulation of two redundant robot arms. In this paper, to remedy the joint-angle drift phenomenon in the manipulation of two redundant robot arms, a novel scheme for simultaneous repetitive motion planning and control (SRMPC) at the joint-acceleration level is proposed, which consists of two subschemes. To do so, the performance index of each SRMPC subscheme is derived and designed by employing the gradient dynamics twice, for which a convergence theorem and its proof are presented. In addition, to improve the accuracy of the motion planning and control, position-error and velocity-error feedbacks are incorporated into the forward kinematics equation and analyzed via the Zhang neural-dynamics method. Then the two subschemes are simultaneously reformulated as two quadratic programs (QPs), which are finally unified into one QP problem. Furthermore, a piecewise-linear projection equation-based neural network (PLPENN) is used to solve the unified QP problem, which can handle the strictly convex QP problem in an inverse-free manner. More importantly, via such a unified QP formulation and the corresponding PLPENN solver, the synchronism of the two redundant robot arms is guaranteed. Finally, two given tasks are fulfilled by 2 three-link and 2 five-link planar robot arms, respectively. Computer-simulation results validate the efficacy and accuracy of the SRMPC scheme and the corresponding PLPENN solver for synchronous manipulation of two redundant robot arms.
Adaptive Fuzzy Control With Prescribed Performance for Block-Triangular-Structured Nonlinear Systems. In this paper, an adaptive fuzzy control method with prescribed performance is proposed for multi-input and multioutput block-triangular-structured nonlinear systems with immeasurable states. Fuzzy logic systems are adopted to identify the unknown nonlinear system functions. Adaptive fuzzy state observers are designed to solve the problem of unmeasured states, and a new observer-based output-feedb...
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
score_0–score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0, 0, 0, 0, 0, 0, 0, 0
Nearly optimal control laws for nonlinear systems with saturating actuators using a neural network HJB approach The Hamilton-Jacobi-Bellman (HJB) equation corresponding to constrained control is formulated using a suitable nonquadratic functional. It is shown that the constrained optimal control law has the largest region of asymptotic stability (RAS). The value function of this HJB equation is solved for by solving for a sequence of cost functions satisfying a sequence of Lyapunov equations (LE). A neural network is used to approximate the cost function associated with each LE using the method of least-squares on a well-defined region of attraction of an initial stabilizing controller. As the order of the neural network is increased, the least-squares solution of the HJB equation converges uniformly to the exact solution of the inherently nonlinear HJB equation associated with the saturating control inputs. The result is a nearly optimal constrained state feedback controller that has been tuned a priori off-line.
Asymptotically Stable Adaptive-Optimal Control Algorithm With Saturating Actuators and Relaxed Persistence of Excitation. This paper proposes a control algorithm based on adaptive dynamic programming to solve the infinite-horizon optimal control problem for known deterministic nonlinear systems with saturating actuators and nonquadratic cost functionals. The algorithm is based on an actor/critic framework, where a critic neural network (NN) is used to learn the optimal cost, and an actor NN is used to learn the optim...
Error bounds of adaptive dynamic programming algorithms for solving undiscounted optimal control problems. In this paper, we establish error bounds of adaptive dynamic programming algorithms for solving undiscounted infinite-horizon optimal control problems of discrete-time deterministic nonlinear systems. We consider approximation errors in the update equations of both value function and control policy. We utilize a new assumption instead of the contraction assumption in discounted optimal control problems. We establish the error bounds for approximate value iteration based on a new error condition. Furthermore, we also establish the error bounds for approximate policy iteration and approximate optimistic policy iteration algorithms. It is shown that the iterative approximate value function can converge to a finite neighborhood of the optimal value function under some conditions. To implement the developed algorithms, critic and action neural networks are used to approximate the value function and control policy, respectively. Finally, a simulation example is given to demonstrate the effectiveness of the developed algorithms.
Galerkin approximations of the generalized Hamilton-Jacobi-Bellman equation In this paper we study the convergence of the Galerkin approximation method applied to the generalized Hamilton-Jacobi-Bellman (GHJB) equation over a compact set containing the origin. The GHJB equation gives the cost of an arbitrary control law and can be used to improve the performance of this control. The GHJB equation can also be used to successively approximate the Hamilton-Jacobi-Bellman equation. We state sufficient conditions that guarantee that the Galerkin approximation converges to the solution of the GHJB equation and that the resulting approximate control is stabilizing on the same region as the initial control. The method is demonstrated on a simple nonlinear system and is compared to a result obtained by using exact feedback linearization in conjunction with the LQR design method.
Hamiltonian-Driven Adaptive Dynamic Programming for Continuous Nonlinear Dynamical Systems. This paper presents a Hamiltonian-driven framework of adaptive dynamic programming (ADP) for continuous time nonlinear systems, which consists of evaluation of an admissible control, comparison between two different admissible policies with respect to the corresponding the performance function, and the performance improvement of an admissible control. It is showed that the Hamiltonian can serve as...
Optimal And Event-Based Networked Control Of Physically Interconnected Systems And Multi-Agent Systems Many interconnected systems like vehicle platoons or energy networks consist of similar or identical subsystems. The subsystem interconnections are either caused by the physical relations among the subsystems or have to be introduced by the controller to cope with cooperative control goals. This paper proposes strategies to reduce the complexity of the controller design problem (offline information reduction) and to reduce the amount of system information necessary for the implementation of the designed controller (online information reduction). It consists of two parts. The first part deals with the linear quadratic regulator (LQR) design problem for interconnected systems. A decomposition based on a state transformation is introduced, which allows the optimal controller for the interconnected system to be designed by considering modified subsystems separately. The proposed decomposition approach can be uniformly applied to multi-agent systems and physically interconnected systems. The second part of the paper introduces an event-based control strategy for multi-agent systems. The event-based control is a means to reduce the communication effort by invoking an information exchange among the subsystems only when the deviation between the estimated and current subsystem state exceeds an event threshold. An event-based controller is proposed, which mimics the continuous state-feedback controller with a desired precision. The relation between the event threshold and the approximation error is analysed.
Policy Iteration Q-Learning for Data-Based Two-Player Zero-Sum Game of Linear Discrete-Time Systems. In this article, the data-based two-player zero-sum game problem is considered for linear discrete-time systems. This problem theoretically depends on solving the discrete-time game algebraic Riccati equation (DTGARE), while it requires complete system dynamics. To avoid solving the DTGARE, the $Q$ -function is introduced and a data-based policy iteration $Q$ -learning (PIQL) algorithm is develo...
Adaptive Neural Tracking Control for Switched High-Order Stochastic Nonlinear Systems. This paper deals with adaptive neural tracking control design for a class of switched high-order stochastic nonlinear systems with unknown uncertainties and arbitrary deterministic switching. The considered issues are: 1) completely unknown uncertainties; 2) stochastic disturbances; and 3) high-order nonstrict-feedback system structure. The considered mathematical models can represent many practic...
Distributed Event-Triggered Control for Multi-Agent Systems Event-driven strategies for multi-agent systems are motivated by the future use of embedded microprocessors with limited resources that will gather information and actuate the individual agent controller updates. The controller updates considered here are event-driven, depending on the ratio of a certain measurement error with respect to the norm of a function of the state, and are applied to a first order agreement problem. A centralized formulation is considered first and then its distributed counterpart, in which agents require knowledge only of their neighbors' states for the controller implementation. The results are then extended to a self-triggered setup, where each agent computes its next update time at the previous one, without having to keep track of the state error that triggers the actuation between two consecutive update instants. The results are illustrated through simulation examples.
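The triggering idea described above — act on last-broadcast states and update only when a measurement error grows large relative to a function of the state — can be sketched for a first-order agreement problem. Everything concrete here is an assumption for illustration: the 3-agent path graph, the per-component threshold `sigma * |L @ xhat|`, and the forward-Euler discretization are one plausible instantiation, not the paper's exact formulation.

```python
import numpy as np

# Path graph on 3 agents (assumed topology): graph Laplacian L
L = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])

x = np.array([1.0, 5.0, 9.0])   # agent states
xhat = x.copy()                 # last broadcast states (piecewise constant)
dt, sigma = 0.001, 0.2          # step size and trigger constant (assumptions)
events = 0

for _ in range(20000):          # simulate 20 s with forward Euler
    u = -L @ xhat               # control uses only broadcast information
    x = x + dt * u
    e = xhat - x                # measurement error since the last broadcast
    # event condition: rebroadcast when the error is large relative to the
    # disagreement measured through the last broadcast states
    trigger = np.abs(e) > sigma * np.abs(L @ xhat)
    xhat[trigger] = x[trigger]
    events += int(trigger.sum())
```

Because `L` has zero row sums, the state average is preserved while the agents converge toward it, and the event counter shows that far fewer than 20000 broadcasts per agent are needed.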
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Laminar: practical fine-grained decentralized information flow control This paper describes Laminar, the first system to implement decentralized information flow control (DIFC) using a single set of abstractions for OS resources and heap-allocated objects. Programmers express security policies by labeling data with secrecy and integrity labels, and then accessing this labeled data in lexically scoped security regions. Laminar enforces the security policies specified by the labels at run time. Laminar is implemented using a modified Java virtual machine and a new Linux security module. This paper shows that security regions ease incremental deployment and limit dynamic security checks, allowing us to retrofit DIFC policies on four application case studies. Replacing the applications' ad-hoc policies changes less than 10% of the code, and incurs performance overheads from 4% to 84%. Whereas prior systems only supported limited types of multithreaded programs, Laminar supports a more general class of multithreaded DIFC programs that can access heterogeneously labeled data.
Adding Force Feedback to Mixed Reality Experiences and Games using Electrical Muscle Stimulation. We present a mobile system that enhances mixed reality experiences and games with force feedback by means of electrical muscle stimulation (EMS). The benefit of our approach is that it adds physical forces while keeping the users' hands free to interact unencumbered-not only with virtual objects, but also with physical objects, such as props and appliances. We demonstrate how this supports three classes of applications along the mixed-reality continuum: (1) entirely virtual objects, such as furniture with EMS friction when pushed or an EMS-based catapult game. (2) Virtual objects augmented via passive props with EMS-constraints, such as a light control panel made tangible by means of a physical cup or a balance-the-marble game with an actuated tray. (3) Augmented appliances with virtual behaviors, such as a physical thermostat dial with EMS-detents or an escape-room that repurposes lamps as levers with detents. We present a user-study in which participants rated the EMS-feedback as significantly more realistic than a no-EMS baseline.
Deep Learning in Mobile and Wireless Networking: A Survey. The rapid uptake of mobile devices and the rising popularity of mobile applications and services pose unprecedented demands on mobile and wireless networking infrastructure. Upcoming 5G systems are evolving to support exploding mobile traffic volumes, real-time extraction of fine-grained analytics, and agile management of network resources, so as to maximize user experience. Fulfilling these tasks is challenging, as mobile environments are increasingly complex, heterogeneous, and evolving. One potential solution is to resort to advanced machine learning techniques, in order to help manage the rise in data volumes and algorithm-driven applications. The recent success of deep learning underpins new and powerful tools that tackle problems in this space. In this paper we bridge the gap between deep learning and mobile and wireless networking research, by presenting a comprehensive survey of the crossovers between the two areas. We first briefly introduce essential background and state-of-the-art in deep learning techniques with potential applications to networking. We then discuss several techniques and platforms that facilitate the efficient deployment of deep learning onto mobile systems. Subsequently, we provide an encyclopedic review of mobile and wireless networking research based on deep learning, which we categorize by different domains. Drawing from our experience, we discuss how to tailor deep learning to mobile environments. We complete this survey by pinpointing current challenges and open future directions for research.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
score_0–score_13: 1.01339, 0.014921, 0.013095, 0.012449, 0.010558, 0.010159, 0.008889, 0.00256, 0.000026, 0, 0, 0, 0, 0
Optical Fiber Sensor Performance Evaluation in Soft Polyimide Film with Different Thickness Ratios. To meet the application requirements of curvature measurement for soft biomedical robotics and the flexible morphing wings of aircraft, an optical fiber Bragg grating (FBG) shape sensor for soft robots and flexible morphing wings was implemented. The optical FBG is embedded in polyimide film and then fixed in the body of a soft robot and morphing wing. However, a lack of analysis of the embedded depth of FBG sensors in polyimide film and its effect on sensitivity greatly limits their application potential. Herein, the relationships between the embedded depth of the FBG sensor in polyimide film and its sensitivity and stability are investigated. The sensing principle and structural design of the FBG sensor embedded in polyimide film are introduced; the bending curvatures of the FBG sensor and its wavelength shift in polyimide film are studied; and the relationships between the sensitivity, stability, and embedded depth of these sensors are verified experimentally. The results showed that wavelength shift and curvature have a linear relationship. With the sensor's curvature ranging from 0 m(-1) to 30 m(-1), the maximum sensitivity is 50.65 pm/m(-1), and the minimum sensitivity is 1.96 pm/m(-1). The designed FBG sensor embedded in polyimide films shows good consistency in repeated experiments for soft actuator and morphing wing measurement; the FBG sensing method therefore has potential for real applications in shape monitoring in the fields of soft robotics and the flexible morphing wings of aircraft.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended for Bob, and non-repudiation of receipt evidences destined for Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with trusted third party (TTP). For the latter ones we discuss the evolution of the TTP's involvement and, among others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probabilistically. Because there are no crossover rate and mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than the conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results over the conventional one and other methods.
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores (score_0–score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
An anonymous and secure biometric‐based enterprise digital rights management system for mobile environment Internet-based content distribution facilitates an efficient platform to sell the digital content to the remote users. However, the digital content can be easily copied and redistributed over the network, which causes huge loss to the right holders. On the contrary, the digital rights management (DRM) systems have been introduced in order to regulate authorized content distribution. Enterprise DRM (E-DRM) system is an application of DRM technology, which aims to prevent illegal access of data in an enterprise. Earlier works on E-DRM do not address anonymity, which may lead to identity theft. Recently, Chang et al. proposed an efficient E-DRM mechanism. Their scheme provides greater efficiency and protects anonymity. Unfortunately, we identify that their scheme does not resist the insider attack and password-guessing attack. In addition, Chang et al.'s scheme has some design flaws in the authorization phase. We then point out the requirements of E-DRM system and present the cryptanalysis of Chang et al.'s scheme. In order to remedy the security weaknesses found in Chang et al.'s scheme, we aim to present a secure and efficient E-DRM scheme. The proposed scheme supports the authorized content key distribution and satisfies the desirable security attributes. Additionally, our scheme offers low communication and computation overheads and user's anonymity as well. Through the rigorous formal and informal security analyses, we show that our scheme is secure against possible known attacks. Furthermore, the simulation results for the formal security analysis using the widely accepted Automated Validation of Internet Security Protocols and Applications tool ensure that our scheme is also secure. Copyright © 2015 John Wiley & Sons, Ltd.
A Certificateless Authenticated Key Agreement Protocol for Digital Rights Management System.
An Improved RSA Based User Authentication and Session Key Agreement Protocol Usable in TMIS. Recently, Giri et al. proposed an RSA cryptosystem based remote user authentication scheme for telecare medical information system and claimed that the protocol is secure against all the relevant security attacks. However, we have scrutinized Giri et al.'s protocol and pointed out that the protocol is not secure against off-line password guessing attack, privileged insider attack and also suffers from anonymity problem. Moreover, the extension of password guessing attack leads to more security weaknesses. Therefore, this protocol needs improvement in terms of security before implementing in real-life application. To fix the mentioned security pitfalls, this paper proposes an improved scheme over Giri et al.'s scheme, which preserves user anonymity property. We have then simulated the proposed protocol using widely-accepted AVISPA tool which ensures that the protocol is SAFE under OFMC and CL-AtSe models, that means the same protocol is secure against active and passive attacks including replay and man-in-the-middle attacks. The informal cryptanalysis has been also presented, which confirmed that the proposed protocol provides well security protection on the relevant security attacks. The performance analysis section compares the proposed protocol with other existing protocols in terms of security and it has been observed that the protocol provides more security and achieves additional functionalities such as user anonymity and session key verification.
A privacy enabling content distribution framework for digital rights management
Privacy Preserving Location-based Content Distribution Framework for Digital Rights Management Systems Advancement in network technology provides an opportunity for e-commerce industries to sell digital content. However, multimedia content has the drawback of easy copy and redistribution, which causes rampant piracy. Digital rights management (DRM) systems are developed to address content piracy. Basically, DRM focuses to control content consumption and distribution. In general, to provide copyrigh...
A robust and flexible digital rights management system for home networks A robust and flexible Digital Rights Management system for home networks is presented. In the proposed system, the central authority delegates its authorization right to the local manager in a home network by issuing a proxy certificate, and the local manager flexibly controls the access rights of home devices on digital contents with its proxy certificate. Furthermore, the proposed system provides a temporary accessing facility for external devices and achieves strong privacy for home devices. For the validation of delegated rights and the revocation of compromised local managers, a hybrid mechanism combining OCSP validation and periodic renewal of proxy certificates is also presented.
Fuzzy logic in control systems: fuzzy logic controller. I.
A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm Swarm intelligence is a research branch that models the population of interacting agents or swarms that are able to self-organize. An ant colony, a flock of birds or an immune system is a typical example of a swarm system. Bees' swarming around their hive is another example of swarm intelligence. Artificial Bee Colony (ABC) Algorithm is an optimization algorithm based on the intelligent behaviour of honey bee swarm. In this work, ABC algorithm is used for optimizing multivariable functions and the results produced by ABC, Genetic Algorithm (GA), Particle Swarm Algorithm (PSO) and Particle Swarm Inspired Evolutionary Algorithm (PS-EA) have been compared. The results showed that ABC outperforms the other algorithms.
Toward Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions The ever-increasing number of resource-constrained machine-type communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as enhanced mobile broadband (eMBB), massive machine type communications (mMTCs), and ultra-reliable and low latency communications (URLLCs), the mMTC brings the unique technical challenge of supporting a huge number of MTC devices in cellular networks, which is the main focus of this paper. The related challenges include quality of service (QoS) provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead, and radio access network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. Along with the highlights on the inefficiency of the legacy random access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms in the emerging cellular IoT standards, namely, LTE-M and narrowband IoT (NB-IoT). Subsequently, we present a framework for the performance analysis of transmission scheduling with the QoS support along with the issues involved in short data packet transmission. Next, we provide a detailed overview of the existing and emerging solutions toward addressing RAN congestion problem, and then identify potential advantages, challenges, and use cases for the applications of emerging machine learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of low-complexity Q-learning approach in the mMTC scenario along with the recent advances toward enhancing its learning performance and convergence. Finally, we discuss some open research challenges and promising future research directions.
Priced Oblivious Transfer: How to Sell Digital Goods We consider the question of protecting the privacy of customers buying digital goods. More specifically, our goal is to allow a buyer to purchase digital goods from a vendor without letting the vendor learn what, and to the extent possible also when and how much, it is buying. We propose solutions which allow the buyer, after making an initial deposit, to engage in an unlimited number of priced oblivious-transfer protocols, satisfying the following requirements: As long as the buyer's balance contains sufficient funds, it will successfully retrieve the selected item and its balance will be debited by the item's price. However, the buyer should be unable to retrieve an item whose cost exceeds its remaining balance. The vendor should learn nothing except what must inevitably be learned, namely, the amount of interaction and the initial deposit amount (which imply upper bounds on the quantity and total price of all information obtained by the buyer). In particular, the vendor should be unable to learn what the buyer's current balance is or when it actually runs out of its funds. The technical tools we develop, in the process of solving this problem, seem to be of independent interest. In particular, we present the first one-round (two-pass) protocol for oblivious transfer that does not rely on the random oracle model (a very similar protocol was independently proposed by Naor and Pinkas [21]). This protocol is a special case of a more general "conditional disclosure" methodology, which extends a previous approach from [11] and adapts it to the 2-party setting.
Minimum acceleration criterion with constraints implies bang-bang control as an underlying principle for optimal trajectories of arm reaching movements. Rapid arm-reaching movements serve as an excellent test bed for any theory about trajectory formation. How are these movements planned? A minimum acceleration criterion has been examined in the past, and the solution obtained, based on the Euler-Poisson equation, failed to predict that the hand would begin and end the movement at rest (i.e., with zero acceleration). Therefore, this criterion was rejected in favor of the minimum jerk, which was proved to be successful in describing many features of human movements. This letter follows an alternative approach and solves the minimum acceleration problem with constraints using Pontryagin's minimum principle. We use the minimum principle to obtain minimum acceleration trajectories and use the jerk as a control signal. In order to find a solution that does not include nonphysiological impulse functions, constraints on the maximum and minimum jerk values are assumed. The analytical solution provides a three-phase piecewise constant jerk signal (bang-bang control) where the magnitude of the jerk and the two switching times depend on the magnitude of the maximum and minimum available jerk values. This result fits the observed trajectories of reaching movements and takes into account both the extrinsic coordinates and the muscle limitations in a single framework. The minimum acceleration with constraints principle is discussed as a unifying approach for many observations about the neural control of movements.
Wireless Networks with RF Energy Harvesting: A Contemporary Survey Radio frequency (RF) energy transfer and harvesting techniques have recently become alternative methods to power the next generation wireless networks. As this emerging technology enables proactive energy replenishment of wireless devices, it is advantageous in supporting applications with quality of service (QoS) requirements. In this paper, we present a comprehensive literature review on the research progresses in wireless networks with RF energy harvesting capability, referred to as RF energy harvesting networks (RF-EHNs). First, we present an overview of the RF-EHNs including system architecture, RF energy harvesting techniques and existing applications. Then, we present the background in circuit design as well as the state-of-the-art circuitry implementations, and review the communication protocols specially designed for RF-EHNs. We also explore various key design issues in the development of RFEHNs according to the network types, i.e., single-hop networks, multi-antenna networks, relay networks, and cognitive radio networks. Finally, we envision some open research directions.
Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network. In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (including data from 63 patients with 34,281 events) and testing (including data from 19 patients with 8571 events) datasets. Using this CNN model, a precision of 0.99%, a recall of 0.99%, and an F-score of 0.99% were attained with the training dataset; these values were all 0.96% when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
A Muscle Synergy-Driven ANFIS Approach to Predict Continuous Knee Joint Movement Continuous motion prediction plays a significant role in realizing seamless control of robotic exoskeletons and orthoses. Explicitly modeling the relationship between coordinated muscle activations from surface electromyography (sEMG) and human limb movements provides a new path of sEMG-based human–machine interface. Instead of the numeric features from individual channels, we propose a muscle synergy-driven adaptive network-based fuzzy inference system (ANFIS) approach to predict continuous knee joint movements, in which muscle synergy reflects the motor control information to coordinate muscle activations for performing movements. Four human subjects participated in the experiment while walking at five types of speed: 2.0 km/h, 2.5 km/h, 3.0 km/h, 3.5 km/h, and 4.0 km/h. The study finds that the acquired muscle synergies associate the muscle activations with human joint movements in a low-dimensional space and have been further utilized for predicting knee joint angles. The proposed approach outperformed commonly used numeric features from individual sEMG channels with an average correlation coefficient of 0.92 ± 0.05. Results suggest that the correlation between muscle activations and knee joint movements is captured by the muscle synergy-driven ANFIS model and can be utilized for the estimation of continuous joint angles.
Scores (score_0–score_13): 1.11, 0.1, 0.1, 0.1, 0.1, 0.076667, 0, 0, 0, 0, 0, 0, 0, 0
Adaptive Event-Triggered Control of Nonlinear Systems With Controller and Parameter Estimator Triggering In this note, the event-triggered adaptive control for a class of uncertain nonlinear systems is considered. Unlike the traditional adaptive event-triggered control, the controller and parameter estimator are event-triggered simultaneously. Asymptotical convergence of stabilization error is guaranteed. To solve this problem, we design a set of event-triggering conditions, which is updated for each triggering. At the same time, the input-to-state stable assumption is not needed. It is shown that the proposed control schemes guarantee that all the closed-loop signals are globally bounded and the stabilization error converges to the origin asymptotically. Simulation results illustrate the effectiveness of our scheme.
A Stability Guaranteed Robust Fault Tolerant Control Design for Vehicle Suspension Systems Subject to Actuator Faults and Disturbances A fault tolerant control approach based on a novel sliding mode method is proposed in this brief for a full vehicle suspension system. The proposed approach aims at retaining system stability in the presence of model uncertainties, actuator faults, parameter variations, and neglected nonlinear effects. The design is based on a realistic model that includes road uncertainties, disturbances, and faults. The design begins by dividing the system into two subsystems: a first subsystem with 3 degrees-of-freedom (DoF) representing the chassis and a second subsystem with 4 DoF representing the wheels, electrohydraulic actuators, and effect of road disturbances and actuator faults. Based on the analysis of the system performance, the first subsystem is considered as the internal dynamic of the whole system for control design purposes. The proposed algorithm is implemented in two stages to provide a stability guaranteed approach. A robust optimal sliding mode controller is designed first for the uncertain internal dynamics of the system to mitigate the effect of road disturbances. Then, a robust sliding mode controller is proposed to handle actuator faults and ensure overall stability of the whole system. The proposed approach has been tested on a 7-DoF full car model subject to uncertainties and actuator faults. The results are compared with the ones obtained using approach. The proposed approach optimizes riding comfort and road holding ability even in the presence of actuator faults and parameter variations.
Neural Learning Control of Strict-Feedback Systems Using Disturbance Observer. This paper studies the compound learning control of disturbed uncertain strict-feedback systems. The design is using the dynamic surface control equipped with a novel learning scheme. This paper integrates the recently developed online recorded data-based neural learning with the nonlinear disturbance observer (DOB) to achieve good "understanding" of the system uncertainty including unknown dynamics and time-varying disturbance. With the proposed method to show how the neural networks and DOB are cooperating with each other, one indicator is constructed and included into the update law. The closed-loop system stability analysis is rigorously presented. Different kinds of disturbances are considered in a third-order system as simulation examples and the results confirm that the proposed method achieves higher tracking accuracy while the compound estimation is much more precise. The design is applied to the flexible hypersonic flight dynamics and a better tracking performance is obtained.
Distributed Model-Based Event-Triggered Leader–Follower Consensus Control for Linear Continuous-Time Multiagent Systems This article investigates the event-triggered leader–follower consensus control problem for linear continuous-time multiagent systems (MASs). A new consensus protocol and an event-triggered communication (ETC) strategy based on a closed-loop state estimator are designed. The closed-looped state estimator renders us more accurate state estimations, therefore the triggering times can be decreased wh...
Adaptive Fuzzy Backstepping-Based Formation Control of Unmanned Surface Vehicles With Unknown Model Nonlinearity and Actuator Saturation In this article, the formation control of unmanned surface vehicles (USVs) is addressed considering actuator saturation and unknown nonlinear items. The algorithm can be divided into two parts, steering the leader USV to trace along the desired path and steering the follower USV to follow the leader in the desired formation. In the proposed formation control framework, a virtual USV is first constructed so that the leader USV can be guided to the desired path. To solve the input constraint problem, an auxiliary is introduced, and the adaptive fuzzy method is used to estimate unknown nonlinear items in the USV. To maintain the desired formation, the desired velocities of follower USVs are deduced using geometry and Lyapunov stability theories; the stability of the closed-loop system is also proved. Finally, the effectiveness of the proposed approach is demonstrated by the simulation and experimental results.
Point-to-point navigation of underactuated ships This paper considers point-to-point navigation of underactuated ships where only surge force and yaw moment are available. In general, a ship’s sway motion satisfies a passive-boundedness property which is expressed in terms of a Lyapunov function. Under this kind of consideration, a certain concise nonlinear scheme is proposed to guarantee the closed-loop system to be uniformly ultimately bounded (UUB). A numerical simulation study is also performed to illustrate the effectiveness of the proposed scheme.
Output-feedback stochastic nonlinear stabilization The authors present the first result on global output-feedback stabilization (in probability) for stochastic nonlinear continuous-time systems. The class of systems that they consider is a stochastic counterpart of the broadest class of deterministic systems for which globally stabilizing controllers are currently available. Their controllers are “inverse optimal” and possess an infinite gain margin. A reader of the paper needs no prior familiarity with techniques of stochastic control
Observer-Based Adaptive Backstepping Consensus Tracking Control for High-Order Nonlinear Semi-Strict-Feedback Multiagent Systems. Combined with backstepping techniques, an observer-based adaptive consensus tracking control strategy is developed for a class of high-order nonlinear multiagent systems, of which each follower agent is modeled in a semi-strict-feedback form. By constructing the neural network-based state observer for each follower, the proposed consensus control method solves the unmeasurable state problem of hig...
Hiding Traces of Resampling in Digital Images Resampling detection has become a standard tool for forensic analyses of digital images. This paper presents new variants of image transformation operations which are undetectable by resampling detectors based on periodic variations in the residual signal of local linear predictors in the spatial domain. The effectiveness of the proposed method is supported with evidence from experiments on a large image database for various parameter settings. We benchmark detectability as well as the resulting image quality against conventional linear and bicubic interpolation and interpolation with a sinc kernel. These early findings on “counter-forensic” techniques put into question the reliability of known forensic tools against smart counterfeiters in general, and might serve as benchmarks and motivation for the development of much improved forensic techniques.
Fault Injection Techniques and Tools Dependability evaluation involves the study of failures and errors. The destructive nature of a crash and long error latency make it difficult to identify the causes of failures in the operational environment. It is particularly hard to recreate a failure scenario for a large, complex system. To identify and understand potential failures, the authors use an experiment-based approach for studying system dependability. This approach is applied during the conception, design, prototype, and operational phases. To take an experiment-based approach, you must first understand a system's architecture, structure, and behavior. You need to know its tolerance for faults and failures, including its built-in detection and recovery mechanisms, and you need specific instruments and tools to inject faults, create failures or errors, and monitor their effects. Engineers most often use low-cost, simulation-based fault injection to evaluate the dependability of a system that is in the conceptual and design phases. At this point, the system under study is only a series of high-level abstractions; implementation details have yet to be determined. Thus the system is simulated on the basis of simplified assumptions. Simulation-based fault injection, which assumes that errors or failures occur according to a predetermined distribution, is useful for evaluating the effectiveness of fault-tolerant mechanisms and a system's dependability; it does provide timely feedback to system engineers. However, it requires accurate input parameters, which are difficult to supply: design and technology changes often complicate the use of past measurements. Testing a prototype, on the other hand, allows you to evaluate the system without any assumptions about system design. Instead of injecting faults, engineers can directly measure operational systems as they handle real workloads. Measurement-based analysis uses actual data, which contains much information about naturally occurring errors and failures and sometimes about recovery attempts. Although these three experimental methods have limitations, their unique values complement one another and allow for a wide spectrum of dependability studies.
Data Collection in Wireless Sensor Networks with Mobile Elements: A Survey Wireless sensor networks (WSNs) have emerged as an effective solution for a wide range of applications. Most of the traditional WSN architectures consist of static nodes which are densely deployed over a sensing area. Recently, several WSN architectures based on mobile elements (MEs) have been proposed. Most of them exploit mobility to address the problem of data collection in WSNs. In this article we first define WSNs with MEs and provide a comprehensive taxonomy of their architectures, based on the role of the MEs. Then we present an overview of the data collection process in such a scenario, and identify the corresponding issues and challenges. On the basis of these issues, we provide an extensive survey of the related literature. Finally, we compare the underlying approaches and solutions, with hints to open problems and future research directions.
Human visual system based data embedding method using quadtree partitioning In this paper, we proposed a data embedding method based on human visual system (HVS) and quadtree partitioning. For most HVS-based methods, the amount of embedded data is based on the measurement of differences of pixel pairs or the standard deviation of image blocks. However, these methods often result in larger image distortion and are vulnerable to statistical attacks. The proposed method employs a specially designed function to measure the complexity of image blocks, and uses quadtree partitioning to partition images into blocks with different sizes. Larger blocks are associated with smooth regions in images whereas smaller blocks are associated with complex regions. Therefore, we embed less data into larger blocks to preserve the image quality and embed more data into smaller blocks to increase the payload. Data embedment is done by using the diamond encoding technique. Experimental results revealed that the proposed method provides better image quality and offers higher payload compared to other HVS-based methods.
A Covert Channel Over VoLTE via Adjusting Silence Periods. Covert channels represent unforeseen communication methods that exploit authorized overt communication as the carrier medium for covert messages. Covert channels can be a secure and effective means of transmitting confidential information hidden in overt traffic. For a covert timing channel, the covert message is usually modulated into inter-packet delays (IPDs) of legitimate traffic, which is not suitable for voice over LTE (VoLTE) since the IPDs of VoLTE traffic are fixed and thus cannot be modulated. For this reason, we propose a covert channel via adjusting silence periods, which modulates the covert message by postponing or extending silence periods in VoLTE traffic. To maintain robustness, we employ the Gray code to encode the covert message to reduce the impact of packet loss. Moreover, the proposed covert channel enables a tradeoff between robustness and voice quality, which is an important performance indicator for VoLTE. The experiment results show that the proposed covert channel is undetectable by statistical tests and outperforms the other covert channels based on IPDs in terms of robustness.
Learning Feature Recovery Transformer for Occluded Person Re-Identification One major issue that challenges person re-identification (Re-ID) is the ubiquitous occlusion over the captured persons. There are two main challenges for the occluded person Re-ID problem, i.e., the interference of noise during feature matching and the loss of pedestrian information brought by the occlusions. In this paper, we propose a new approach called Feature Recovery Transformer (FRT) to address the two challenges simultaneously, which mainly consists of visibility graph matching and feature recovery transformer. To reduce the interference of the noise during feature matching, we mainly focus on visible regions that appear in both images and develop a visibility graph to calculate the similarity. In terms of the second challenge, based on the developed graph similarity, for each query image, we propose a recovery transformer that exploits the feature sets of its k-nearest neighbors in the gallery to recover the complete features. Extensive experiments across different person Re-ID datasets, including occluded, partial and holistic datasets, demonstrate the effectiveness of FRT. Specifically, FRT significantly outperforms state-of-the-art results by at least 6.2% Rank-1 accuracy and 7.2% mAP scores on the challenging Occluded-Duke dataset.
1.073333
0.066667
0.066667
0.066667
0.066667
0.033333
0.004444
0.002121
0
0
0
0
0
0
Adaptive Event-Triggered Control of Uncertain Nonlinear Systems Using Intermittent Output Only Although a rich collection of research results on event-triggered control exists, no effort has ever been made in integrating state/output triggering and controller triggering simultaneously with backstepping control design. The primary objective of this article is, by using intermittent output signal only, to build a backstepping adaptive event-triggered feedback control for a class of uncertain nonlinear systems. To do so, we need to tackle three technical obstacles. First, the nature of the event triggering makes the transmitted output signal discontinuous, rendering the regular recursive backstepping design method inapplicable as the repetitive differentiation of virtual control signals is literally undefined. Second, the effects arisen from the event-triggering action must be properly accommodated, but the current compensating method only works for systems in normal form, thus a new method needs to be developed in order to handle nonnormal form systems. Third, as only intermittent output signal is available, and at the same time, the impacts of certain terms containing unknown parameters (arising from event triggering) need to be compensated, it is rather challenging to design a suitable state observer. To circumvent these difficulties, we employ the dynamic filtering technique to avoid the differentiation of virtual control signals in the backstepping design, construct a new compensation scheme to deal with the effects of output triggering, and build a new form of state observer to facilitate the development of output feedback control. It is shown that, with the derived adaptive backstepping output-triggered control, all the closed-loop signals are ensured bounded and the transient system performance in the mean square error sense can be adjusted by appropriately adjusting design parameters. The benefits and effectiveness of the proposed scheme are also validated by numerical simulation.
Fuzzy Adaptive Tracking Control of Wheeled Mobile Robots With State-Dependent Kinematic and Dynamic Disturbances Unlike most works based on pure nonholonomic constraint, this paper proposes a fuzzy adaptive tracking control method for wheeled mobile robots, where unknown slippage occurs and violates the nonholonomic constraint in the form of state-dependent kinematic and dynamic disturbances. These disturbances degrade tracking performance significantly and, therefore, should be compensated. To this end, the kinematics with state-dependent disturbances are rigorously derived based on the general form of slippage in the mobile robots, and fuzzy adaptive observers together with parameter adaptation laws are designed to estimate the state-dependent disturbances in both kinematics and dynamics. Because of the modular structure of the proposed method, it can be easily combined with the previous controllers based on the model with the pure nonholonomic constraint, such that the combination of the fuzzy adaptive observers with the previously proposed backstepping-like feedback linearization controller can guarantee the trajectory tracking errors to be globally ultimately bounded, even when the nonholonomic constraint is violated, and their ultimate bounds can be adjusted appropriately for various types of trajectories in the presence of large initial tracking errors and disturbances. Both the stability analysis and simulation results are provided to validate the proposed controller.
Leader-following consensus in second-order multi-agent systems with input time delay: An event-triggered sampling approach. This paper analytically investigates event-triggered leader-following consensus in second-order multi-agent systems with time delay in the control input. Each agent's update of the control input is driven by a properly defined event, which depends on the measurement error, the states of its neighboring agents at their individual time instants, and an exponential decay function. Necessary and sufficient conditions are presented to ensure leader-following consensus. Moreover, the control is updated only when the event-triggered condition is satisfied, which significantly decreases the number of communications among nodes, effectively avoids continuous communication over the information channel among agents, and excludes Zeno behavior of the triggering time sequences. A numerical simulation example is given to illustrate the theoretical results.
Adaptive neural control for a class of stochastic nonlinear systems by backstepping approach. This paper addresses adaptive neural control for a class of stochastic nonlinear systems which are not in strict-feedback form. Based on the structural characteristics of radial basis function (RBF) neural networks (NNs), a backstepping design approach is extended from stochastic strict-feedback systems to a class of more general stochastic nonlinear systems. In the control design procedure, RBF NNs are used to approximate unknown nonlinear functions and the backstepping technique is utilized to construct the desired controller. The proposed adaptive neural controller guarantees that all the closed-loop signals are bounded and the tracking error converges to a sufficiently small neighborhood of the origin. Two simulation examples are used to illustrate the effectiveness of the proposed approach.
Prescribed Performance Adaptive Fuzzy Containment Control for Nonlinear Multiagent Systems Using Disturbance Observer This article focuses on the containment control problem for nonlinear multiagent systems (MASs) with unknown disturbance and prescribed performance in the presence of dead-zone output. The fuzzy-logic systems (FLSs) are used to approximate the unknown nonlinear function, and a nonlinear disturbance observer is used to estimate unknown external disturbances. Meanwhile, a new distributed containment control scheme is developed by utilizing the adaptive compensation technique without assumption of the boundary value of unknown disturbance. Furthermore, a Nussbaum function is utilized to cope with the unknown control coefficient, which is caused by the nonlinearity in the output mechanism. Moreover, a second-order tracking differentiator (TD) is introduced to avoid the repeated differentiation of the virtual controller. The outputs of the followers converge to the convex hull spanned by the multiple dynamic leaders. It is shown that all the signals are semiglobally uniformly ultimately bounded (SGUUB), and the local neighborhood containment errors can converge into the prescribed boundary. Finally, the effectiveness of the approach proposed in this article is illustrated by simulation results.
Distributed Observer-Based Cooperative Control Approach for Uncertain Nonlinear MASs Under Event-Triggered Communication The distributed tracking problem for uncertain nonlinear multiagent systems (MASs) under event-triggered communication is an important issue. However, existing results provide solutions that can only ensure stability with bounded tracking errors, as asymptotic tracking is difficult to be achieved mainly due to the errors caused by event-triggering mechanisms and system uncertainties. In this artic...
Model-Based Adaptive Event-Triggered Control of Strict-Feedback Nonlinear Systems This paper is concerned with the adaptive event-triggered control problem of nonlinear continuous-time systems in strict-feedback form. By using the event-sampled neural network (NN) to approximate the unknown nonlinear function, an adaptive model and an associated event-triggered controller are designed by exploiting the backstepping method. In the proposed method, the feedback signals and the NN...
Massive MIMO for next generation wireless systems Multi-user MIMO offers big advantages over conventional point-to-point MIMO: it works with cheap single-antenna terminals, a rich scattering environment is not required, and resource allocation is simplified because every active terminal utilizes all of the time-frequency bins. However, multi-user MIMO, as originally envisioned, with roughly equal numbers of service antennas and terminals and frequency-division duplex operation, is not a scalable technology. Massive MIMO (also known as large-scale antenna systems, very large MIMO, hyper MIMO, full-dimension MIMO, and ARGOS) makes a clean break with current practice through the use of a large excess of service antennas over active terminals and time-division duplex operation. Extra antennas help by focusing energy into ever smaller regions of space to bring huge improvements in throughput and radiated energy efficiency. Other benefits of massive MIMO include extensive use of inexpensive low-power components, reduced latency, simplification of the MAC layer, and robustness against intentional jamming. The anticipated throughput depends on the propagation environment providing asymptotically orthogonal channels to the terminals, but so far experiments have not disclosed any limitations in this regard. While massive MIMO renders many traditional research problems irrelevant, it uncovers entirely new problems that urgently need attention: the challenge of making many low-cost low-precision components that work effectively together, acquisition and synchronization for newly joined terminals, the exploitation of extra degrees of freedom provided by the excess of service antennas, reducing internal power consumption to achieve total energy efficiency reductions, and finding new deployment scenarios. This article presents an overview of the massive MIMO concept and contemporary research on the topic.
Adaptive Federated Learning in Resource Constrained Edge Computing Systems Emerging technologies and applications including Internet of Things, social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data, to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a cen...
A new optimization method: big bang-big crunch Nature is the principal source for proposing new optimization methods such as genetic algorithms (GA) and simulated annealing (SA) methods. All traditional evolutionary algorithms are heuristic population-based search procedures that incorporate random variation and selection. The main contribution of this study is that it proposes a novel optimization method that relies on one of the theories of the evolution of the universe; namely, the Big Bang and Big Crunch Theory. In the Big Bang phase, energy dissipation produces disorder and randomness is the main feature of this phase; whereas, in the Big Crunch phase, randomly distributed particles are drawn into an order. Inspired by this theory, an optimization algorithm is constructed, which will be called the Big Bang-Big Crunch (BB-BC) method that generates random points in the Big Bang phase and shrinks those points to a single representative point via a center of mass or minimal cost approach in the Big Crunch phase. It is shown that the performance of the new (BB-BC) method demonstrates superiority over an improved and enhanced genetic search algorithm also developed by the authors of this study, and outperforms the classical genetic algorithm (GA) for many benchmark test functions.
Secure and privacy preserving keyword searching for cloud storage services Cloud storage services enable users to remotely access data in a cloud anytime and anywhere, using any device, in a pay-as-you-go manner. Moving data into a cloud offers great convenience to users since they do not have to care about the large capital investment in both the deployment and management of the hardware infrastructures. However, allowing a cloud service provider (CSP), whose purpose is mainly for making a profit, to take the custody of sensitive data, raises underlying security and privacy issues. To keep user data confidential against an untrusted CSP, a natural way is to apply cryptographic approaches, by disclosing the data decryption key only to authorized users. However, when a user wants to retrieve files containing certain keywords using a thin client, the adopted encryption system should not only support keyword searching over encrypted data, but also provide high performance. In this paper, we investigate the characteristics of cloud storage services and propose a secure and privacy preserving keyword searching (SPKS) scheme, which allows the CSP to participate in the decipherment, and to return only files containing certain keywords specified by the users, so as to reduce both the computational and communication overhead in decryption for users, on the condition of preserving user data privacy and user querying privacy. Performance analysis shows that the SPKS scheme is applicable to a cloud environment.
A review on interval type-2 fuzzy logic applications in intelligent control. A review of the applications of interval type-2 fuzzy logic in intelligent control has been considered in this paper. The fundamental focus of the paper is based on the basic reasons for using type-2 fuzzy controllers for different areas of application. Recently, bio-inspired methods have emerged as powerful optimization algorithms for solving complex problems. In the case of designing type-2 fuzzy controllers for particular applications, the use of bio-inspired optimization methods have helped in the complex task of finding the appropriate parameter values and structure of the fuzzy systems. In this review, we consider the application of genetic algorithms, particle swarm optimization and ant colony optimization as three different paradigms that help in the design of optimal type-2 fuzzy controllers. We also mention alternative approaches to designing type-2 fuzzy controllers without optimization techniques.
Design of robust fuzzy fault detection filter for polynomial fuzzy systems with new finite frequency specifications This paper investigates the problem of fault detection filter design for discrete-time polynomial fuzzy systems with faults and unknown disturbances. The frequency ranges of the faults and the disturbances are assumed to be known beforehand and to reside in low, middle or high frequency ranges. Thus, the proposed filter is designed in the finite frequency range to overcome the conservatism generated by those designed in the full frequency domain. Being of polynomial fuzzy structure, the proposed filter combines the H−/H∞ performances in order to ensure the best robustness to the disturbance and the best sensitivity to the fault. Design conditions are derived in Sum Of Squares formulations that can be easily solved via available software tools. Two illustrative examples are introduced to demonstrate the effectiveness of the proposed method and a comparative study with LMI method is also provided.
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies focused on the structure design and assistance force optimization of the soft LLEs, little work has been conducted on the hardware circuits design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and introduce its hardware circuits design. A soft LLE for hip flexion assistance and a hardware circuits system with scalability were proposed. To assess the efficacy of the soft LLE, experimental tests that evaluate the sensor data acquisition, force tracking performance, lower limb muscle activity, and metabolic cost were conducted. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
1.2
0.2
0.2
0.2
0.2
0.2
0.05
0
0
0
0
0
0
0
Stability Analysis for Delayed Neural Networks via Some Switching Methods In this paper, the stability problem of delayed neural networks is investigated by adopting some switching methods. First, the delay interval is divided into many smaller variable intervals, and when the smaller variable interval is regarded as a mode of the delay, delayed neural networks are modeled as switched systems. Then, by using some switching methods, less conservative stability criteria are derived to ensure the stability of delayed neural networks. Finally, one example is provided to show the effectiveness of the obtained criteria.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers, all of them capable of stabilizing a specific LTI process, in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended to Bob, and non-repudiation of receipt evidences destined to Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with trusted third party (TTP). For the later ones we discuss the evolution of the TTP's involvement and, between others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb for designing the GA operators and selecting GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probabilistically. Because there are no crossover rate and mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than the conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional one and other methods.
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Contact personalization using a score understanding method This paper presents a method to interpret the output of a classification (or regression) model. The interpretation is based on two concepts: the variable importance and the value importance of the variable. Unlike most of the state of art interpretation methods, our approach allows the interpretation of the model output for every instance. Understanding the score given by a model for one instance can for example lead to an immediate decision in a customer relational management (CRM) system. Moreover the proposed method does not depend on a particular model and is therefore usable for any model or software used to produce the scores.
Image saliency: From intrinsic to extrinsic context We propose a novel framework for automatic saliency estimation in natural images. We consider saliency to be an anomaly with respect to a given context that can be global or local. In the case of global context, we estimate saliency in the whole image relative to a large dictionary of images. Unlike in some prior methods, this dictionary is not annotated, i.e., saliency is assumed unknown. In the case of local context, we partition the image into patches and estimate saliency in each patch relative to a large dictionary of un-annotated patches from the rest of the image. We propose a unified framework that applies to both cases in three steps. First, given an input (image or patch) we extract k nearest neighbors from the dictionary. Then, we geometrically warp each neighbor to match the input. Finally, we derive the saliency map from the mean absolute error between the input and all its warped neighbors. This algorithm is not only easy to implement but also outperforms state-of-the-art methods.
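The local-context case above reduces to a simple computation: a patch's saliency is its mean absolute error against its nearest dictionary patches. A sketch with the geometric warping step omitted and synthetic patch vectors standing in for image data:

```python
import numpy as np

def patch_saliency(patches, dictionary, k=3):
    # Saliency of each patch = mean absolute error (MAE) to its k nearest
    # dictionary patches. (The paper additionally warps each neighbor to
    # match the input before comparing; that step is skipped here.)
    sal = np.empty(len(patches))
    for i, p in enumerate(patches):
        d = np.abs(dictionary - p).mean(axis=1)   # MAE to every dictionary entry
        sal[i] = np.sort(d)[:k].mean()            # average over k nearest
    return sal

rng = np.random.default_rng(0)
dictionary = rng.normal(0.0, 1.0, (500, 16))      # un-annotated "common" patches
common = rng.normal(0.0, 1.0, (5, 16))            # resembles the dictionary
odd = rng.normal(5.0, 1.0, (5, 16))               # anomalous, hence salient
sal = patch_saliency(np.vstack([common, odd]), dictionary)
```

Patches well represented in the dictionary get low scores; anomalies get high ones, matching the "saliency as anomaly" view.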
Grad-CAM++: Generalized Gradient-Based Visual Explanations for Deep Convolutional Networks Over the last decade, Convolutional Neural Network (CNN) models have been highly successful in solving complex vision based problems. However, deep models are perceived as "black box" methods considering the lack of understanding of their internal functioning. There has been a significant recent interest to develop explainable deep learning models, and this paper is an effort in this direction. Building on a recently proposed method called Grad-CAM, we propose Grad-CAM++ to provide better visual explanations of CNN model predictions (when compared to Grad-CAM), in terms of better localization of objects as well as explaining occurrences of multiple objects of a class in a single image. We provide a mathematical explanation for the proposed method, Grad-CAM++, which uses a weighted combination of the positive partial derivatives of the last convolutional layer feature maps with respect to a specific class score as weights to generate a visual explanation for the class label under consideration. Our extensive experiments and evaluations, both subjective and objective, on standard datasets showed that Grad-CAM++ indeed provides better visual explanations for a given CNN architecture when compared to Grad-CAM.
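The weighted combination described above can be sketched directly from feature maps and their gradients. A minimal numpy version of the Grad-CAM++ map, using the closed-form pixel-wise weights in which higher-order derivatives are replaced by powers of the first gradient (random arrays stand in for a real CNN's activations):

```python
import numpy as np

def gradcam_pp_map(acts, grads):
    # acts, grads: (C, H, W) feature maps and d(score)/d(acts).
    # Pixel-wise weights alpha from the Grad-CAM++ closed form, then a
    # ReLU on the channel-weighted sum of activations.
    g2, g3 = grads ** 2, grads ** 3
    denom = 2.0 * g2 + acts.sum(axis=(1, 2), keepdims=True) * g3
    alpha = np.where(denom != 0, g2 / np.where(denom != 0, denom, 1.0), 0.0)
    weights = (alpha * np.maximum(grads, 0)).sum(axis=(1, 2))   # one per channel
    cam = np.maximum((weights[:, None, None] * acts).sum(0), 0) # ReLU over sum
    return cam / cam.max() if cam.max() > 0 else cam            # normalize to [0, 1]

rng = np.random.default_rng(0)
cam = gradcam_pp_map(rng.random((8, 7, 7)), rng.normal(size=(8, 7, 7)))
```

In practice the resulting map is upsampled to the input resolution and overlaid on the image as a heatmap.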
Diversity in Machine Learning. Machine learning methods have achieved good performance and been widely applied in various real-world applications. They can learn models adaptively and be better fitted to the special requirements of different tasks. Generally, a good machine learning system is composed of plentiful training data, a good model training process, and an accurate inference. Many factors can affect the performance of the machine learning process, among which diversity is an important one. Diversity can help each stage contribute to a well-performing machine learning system: diversity of the training data ensures that the training data can provide more discriminative information for the model, diversity of the learned model (diversity in parameters of each model or diversity among different base models) makes each parameter/model capture unique or complementary information, and diversity in inference can provide multiple choices, each of which corresponds to a specific plausible local optimum. Even though diversity plays an important role in the machine learning process, there is no systematic analysis of diversification in machine learning systems. In this paper, we systematically summarize the methods for data diversification, model diversification, and inference diversification in the machine learning process. In addition, we survey the typical applications where diversity technology has improved machine learning performance, including remote sensing imaging tasks, machine translation, camera relocalization, image segmentation, object detection, topic modeling, and others. Finally, we discuss some challenges of diversity technology in machine learning and point out some directions for future work. Our analysis provides a deeper understanding of diversity technology in machine learning tasks and hence can help design and learn more effective models for real-world applications.
Long short-term memory. Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade-correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
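The constant error carousel and multiplicative gates described above can be sketched as one forward step of a (modern-convention) LSTM cell; weights here are random placeholders rather than trained parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    # One LSTM step: input (i), forget (f), output (o) gates and the
    # candidate update (g) are computed jointly; the cell state c is the
    # additive "constant error carousel" that preserves long-range signal.
    z = W @ x + U @ h + b                      # shape (4*n,)
    n = h.shape[0]
    i = sigmoid(z[0 * n:1 * n])                # input gate
    f = sigmoid(z[1 * n:2 * n])                # forget gate
    o = sigmoid(z[2 * n:3 * n])                # output gate
    g = np.tanh(z[3 * n:4 * n])                # candidate cell update
    c_new = f * c + i * g                      # gated additive memory
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
n, d = 4, 3                                    # hidden size, input size
W = rng.normal(size=(4 * n, d))
U = rng.normal(size=(4 * n, n))
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for t in range(5):                             # run a short random sequence
    h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
```

Because `c` is updated additively through the forget gate rather than squashed at every step, gradients through it neither vanish nor explode the way they do in a plain recurrent unit.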
Fuzzy logic in control systems: fuzzy logic controller. I.
Robust Indoor Positioning Provided by Real-Time RSSI Values in Unmodified WLAN Networks The positioning methods based on received signal strength (RSS) measurements, link the RSS values to the position of the mobile station(MS) to be located. Their accuracy depends on the suitability of the propagation models used for the actual propagation conditions. In indoor wireless networks, these propagation conditions are very difficult to predict due to the unwieldy and dynamic nature of the RSS. In this paper, we present a novel method which dynamically estimates the propagation models that best fit the propagation environments, by using only RSS measurements obtained in real time. This method is based on maximizing compatibility of the MS to access points (AP) distance estimates. Once the propagation models are estimated in real time, it is possible to accurately determine the distance between the MS and each AP. By means of these distance estimates, the location of the MS can be obtained by trilateration. The method proposed coupled with simulations and measurements in a real indoor environment, demonstrates its feasibility and suitability, since it outperforms conventional RSS-based indoor location methods without using any radio map information nor a calibration stage.
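Once a propagation model is available, the pipeline above is: invert RSS to distances, then trilaterate. A sketch with a fixed log-distance path-loss model (`p0` and the exponent `n` are assumed constants here, whereas the paper estimates them online from real-time RSS):

```python
import numpy as np

def rss_to_distance(rss, p0=-40.0, n=2.5):
    # Log-distance path-loss model: RSS = p0 - 10*n*log10(d),
    # with p0 the RSS at 1 m and n the path-loss exponent.
    return 10.0 ** ((p0 - rss) / (10.0 * n))

def trilaterate(aps, dists):
    # Linearize |x - ap_i|^2 = d_i^2 against the last AP and solve the
    # over-determined linear system by least squares.
    A = 2.0 * (aps[:-1] - aps[-1])
    b = (dists[-1] ** 2 - dists[:-1] ** 2
         + (aps[:-1] ** 2).sum(1) - (aps[-1] ** 2).sum())
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true = np.array([3.0, 4.0])                    # hypothetical MS position
rss = -40.0 - 25.0 * np.log10(np.linalg.norm(aps - true, axis=1))
est = trilaterate(aps, rss_to_distance(rss))   # recovers `true` (noise-free)
```

With noise-free RSS generated from the same model, the estimate matches the true position exactly; the paper's contribution is making this work when the model parameters are unknown and dynamic.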
Optimization Of Radio And Computational Resources For Energy Efficiency In Latency-Constrained Application Offloading Providing femto access points (FAPs) with computational capabilities will allow (either total or partial) offloading of highly demanding applications from smartphones to the so-called femto-cloud. Such offloading promises to be beneficial in terms of battery savings at the mobile terminal (MT) and/or in latency reduction in the execution of applications. However, for this promise to become a reality, the energy and/or the time required for the communication process must be compensated by the energy and/or the time savings that result from the remote computation at the FAPs. For this problem, we provide in this paper a framework for the joint optimization of the radio and computational resource usage exploiting the tradeoff between energy consumption and latency. Multiple antennas are assumed to be available at the MT and the serving FAP. As a result of the optimization, the optimal communication strategy (e.g., transmission power, rate, and precoder) is obtained, as well as the optimal distribution of the computational load between the handset and the serving FAP. This paper also establishes the conditions under which total or no offloading is optimal, determines which is the minimum affordable latency in the execution of the application, and analyzes, as a particular case, the minimization of the total consumed energy without latency constraints.
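The core tradeoff above, offload only if communication cost is repaid by remote computation, can be illustrated with a simplified total-offloading decision rule (all numbers are invented; the paper additionally optimizes partial offloading and the MIMO transmit strategy):

```python
def should_offload(cycles, bits, f_local, p_local, rate, p_tx, f_fap, t_max):
    # Compare local execution against full offloading under a latency
    # budget t_max. Terminal energy for offloading counts only the
    # transmission phase; the FAP computes remotely.
    t_local = cycles / f_local
    e_local = p_local * t_local
    t_off = bits / rate + cycles / f_fap      # transmit, then remote compute
    e_off = p_tx * (bits / rate)
    feasible_local = t_local <= t_max
    feasible_off = t_off <= t_max
    if feasible_off and (not feasible_local or e_off < e_local):
        return "offload", e_off, t_off
    return "local", e_local, t_local

# Hypothetical app: 2e9 CPU cycles, 4 Mb to upload, 1.5 s latency budget.
choice, energy, latency = should_offload(
    cycles=2e9, bits=4e6, f_local=1e9, p_local=0.9,
    rate=20e6, p_tx=1.3, f_fap=8e9, t_max=1.5)
```

Here local execution (2 s) violates the latency budget while offloading finishes in 0.45 s at 0.26 J of terminal energy, so the rule offloads.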
Integrating structured biological data by Kernel Maximum Mean Discrepancy Motivation: Many problems in data integration in bioinformatics can be posed as one common question: Are two sets of observations generated by the same distribution? We propose a kernel-based statistical test for this problem, based on the fact that two distributions are different if and only if there exists at least one function having different expectation on the two distributions. Consequently we use the maximum discrepancy between function means as the basis of a test statistic. The Maximum Mean Discrepancy (MMD) can take advantage of the kernel trick, which allows us to apply it not only to vectors, but strings, sequences, graphs, and other common structured data types arising in molecular biology. Results: We study the practical feasibility of an MMD-based test on three central data integration tasks: Testing cross-platform comparability of microarray data, cancer diagnosis, and data-content based schema matching for two different protein function classification schemas. In all of these experiments, including high-dimensional ones, MMD is very accurate in finding samples that were generated from the same distribution, and outperforms its best competitors. Conclusions: We have defined a novel statistical test of whether two samples are from the same distribution, compatible with both multivariate and structured data, that is fast, easy to implement, and works well, as confirmed by our experiments. Availability: Contact: kb@dbs.ifi.lmu.de
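The test statistic above is easy to compute: the (biased) squared MMD is the difference of within-sample and cross-sample mean kernel values. A sketch with an RBF kernel on vectors (for strings or graphs one would substitute a structured kernel; `gamma` is an assumed bandwidth):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances, then RBF kernel values.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2_biased(X, Y, gamma=1.0):
    # Biased estimate of squared Maximum Mean Discrepancy:
    # mean k(x,x') + mean k(y,y') - 2 * mean k(x,y).
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

rng = np.random.default_rng(0)
same = mmd2_biased(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
diff = mmd2_biased(rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2)))
```

Samples from the same distribution give a near-zero statistic, while a mean shift produces a clearly larger one; the full test calibrates a rejection threshold for this statistic.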
Noninterference for a Practical DIFC-Based Operating System The Flume system is an implementation of decentralized information flow control (DIFC) at the operating system level. Prior work has shown Flume can be implemented as a practical extension to the Linux operating system, allowing real Web applications to achieve useful security guarantees. However, the question remains if the Flume system is actually secure. This paper compares Flume with other recent DIFC systems like Asbestos, arguing that the latter is inherently susceptible to certain wide-bandwidth covert channels, and proving their absence in Flume by means of a noninterference proof in the communicating sequential processes formalism.
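The safety property at the heart of DIFC systems like Flume is a label-subset check on every communication. A toy sketch of the secrecy direction (tag names are invented; Flume's real rules also cover integrity labels and capability-based declassification):

```python
def can_flow(src_secrecy, dst_secrecy):
    # Core DIFC secrecy check: data may flow from src to dst only if
    # every secrecy tag on the source is also carried by the destination,
    # so tainted data never reaches a less-tainted endpoint.
    return src_secrecy <= dst_secrecy          # set containment
```

For example, a process labeled `{"alice"}` may send to one labeled `{"alice", "bob"}`, but not the reverse; noninterference proofs show that composing such checks leaks nothing through permitted channels.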
Large System Analysis of Cooperative Multi-Cell Downlink Transmission via Regularized Channel Inversion with Imperfect CSIT In this paper, we analyze the ergodic sum-rate of a multi-cell downlink system with base station (BS) cooperation using regularized zero-forcing (RZF) precoding. Our model assumes that the channels between BSs and users have independent spatial correlations and imperfect channel state information at the transmitter (CSIT) is available. Our derivations are based on large dimensional random matrix theory (RMT) under the assumption that the numbers of antennas at the BS and users approach to infinity with some fixed ratios. In particular, a deterministic equivalent expression of the ergodic sum-rate is obtained and is instrumental in getting insight about the joint operations of BSs, which leads to an efficient method to find the asymptotic-optimal regularization parameter for the RZF. In another application, we use the deterministic channel rate to study the optimal feedback bit allocation among the BSs for maximizing the ergodic sum-rate, subject to a total number of feedback bits constraint. By inspecting the properties of the allocation, we further propose a scheme to greatly reduce the search space for optimization. Simulation results demonstrate that the ergodic sum-rates achievable by a subspace search provides comparable results to those by an exhaustive search under various typical settings.
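The RZF precoder analyzed above has a simple closed form. A small-scale sketch (random i.i.d. channels, fixed regularization `alpha`, per-user column normalization; the paper's point is choosing `alpha` and feedback bits optimally in the large-system limit):

```python
import numpy as np

def rzf_precoder(H, alpha):
    # Regularized zero-forcing: W = H^H (H H^H + alpha I)^{-1},
    # columns normalized to unit power per user.
    K = H.shape[0]
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
    return W / np.linalg.norm(W, axis=0, keepdims=True)

rng = np.random.default_rng(1)
K, M = 4, 8                                    # users, BS antennas
H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)
W = rzf_precoder(H, alpha=0.1)
G = H @ W                                      # effective gains: users x streams
```

With small `alpha` the off-diagonal entries of `G` (inter-user interference) are much weaker than the diagonal (desired) gains; larger `alpha` trades interference suppression for robustness to imperfect CSIT.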
Global Adaptive Dynamic Programming for Continuous-Time Nonlinear Systems This paper presents a novel method of global adaptive dynamic programming (ADP) for the adaptive optimal control of nonlinear polynomial systems. The strategy consists of relaxing the problem of solving the Hamilton-Jacobi-Bellman (HJB) equation to an optimization problem, which is solved via a new policy iteration method. The proposed method differs from previously known nonlinear ADP methods in that the neural network approximation is avoided, giving rise to significant computational improvement. Instead of semiglobally or locally stabilizing, the resultant control policy is globally stabilizing for a general class of nonlinear polynomial systems. Furthermore, in the absence of a priori knowledge of the system dynamics, an online learning method is devised to implement the proposed policy iteration technique by generalizing the current ADP theory. Finally, three numerical examples are provided to validate the effectiveness of the proposed method.
Quaternion polar harmonic Fourier moments for color images. •Quaternion polar harmonic Fourier moments (QPHFM) is proposed.•Complex Chebyshev-Fourier moments (CHFM) is extended to quaternion QCHFM.•Comparison experiments between QPHFM and QZM, QPZM, QOFMM, QCHFM and QRHFM are conducted.•QPHFM performs superbly in image reconstruction and invariant object recognition.•The importance of phase information of QPHFM in image reconstruction are discussed.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores: 1.2, 0.2, 0.2, 0.1, 0.002597, 0, 0, 0, 0, 0, 0, 0, 0, 0
Using Battery Storage for Peak Shaving and Frequency Regulation: Joint Optimization for Superlinear Gains We consider using a battery storage system simultaneously for peak shaving and frequency regulation through a joint optimization framework, which captures battery degradation, operational constraints, and uncertainties in customer load and regulation signals. Under this framework, using real data we show the electricity bill of users can be reduced by up to 12%. Furthermore, we demonstrate that th...
Local Load Redistribution Attacks in Power Systems With Incomplete Network Information Power grid is one of the most critical infrastructures in a nation and could suffer a variety of cyber attacks. Recent studies have shown that an attacker can inject pre-determined false data into smart meters such that it can pass the residue test of conventional state estimator. However, the calculation of the false data vector relies on the network (topology and parameter) information of the entire grid. In practice, it is impossible for an attacker to obtain all network information of a power grid. Unfortunately, this does not make power systems immune to false data injection attacks. In this paper, we propose a local load redistribution attacking model based on incomplete network information and show that an attacker only needs to obtain the network information of the local attacking region to inject false data into smart meters in the local region without being detected by the state estimator. Simulations on the modified IEEE 14-bus system demonstrate the correctness and effectiveness of the proposed model. The results of this paper reveal the mechanism of local false data injection attacks and highlight the importance and complexity of defending power systems against false data injection attacks.
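The attack model above builds on the basic undetectability property of false data injection against a DC state estimator: an attack vector of the form a = Hc leaves the measurement residual unchanged. A sketch of that full-information property (random Jacobian and states; the paper's contribution is achieving this locally with incomplete network information):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 3))                    # hypothetical measurement Jacobian
x = rng.normal(size=3)                         # true state
z = H @ x + 0.01 * rng.normal(size=8)          # noisy measurements

def residual(z, H):
    # Least-squares state estimate, then the residue tested by bad-data
    # detection.
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

c = np.array([0.5, -0.2, 0.1])                 # attacker's intended state shift
r_clean = residual(z, H)
r_attack = residual(z + H @ c, H)              # a = H c: residual is unchanged
```

The attacked measurements shift the estimate by exactly `c` while the residual, and hence the conventional bad-data test, is identical to the clean case.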
Demand Response for Residential Appliances via Customer Reward Scheme. This paper proposes a reward based demand response algorithm for residential customers to shave network peaks. Customer survey information is used to calculate various criteria indices reflecting their priority and flexibility. Criteria indices and sensitivity based house ranking is used for appropriate load selection in the feeder for demand response. Customer Rewards (CR) are paid based on load shift and voltage improvement due to load adjustment. The proposed algorithm can be deployed in residential distribution networks using a two-level hierarchical control scheme. Realistic residential load model consisting of non-controllable and controllable appliances is considered in this study. The effectiveness of the proposed demand response scheme on the annual load growth of the feeder is also investigated. Simulation results show that reduced peak demand, improved network voltage performance, and customer satisfaction can be achieved.
Blockchain and Computational Intelligence Inspired Incentive-Compatible Demand Response in Internet of Electric Vehicles. By leveraging the charging and discharging capabilities of Internet of electric vehicles (IoEV), demand response (DR) can be implemented in smart cities to enable intelligent energy scheduling and trading. However, IoEV-based DR confronts many challenges, such as a lack of incentive mechanism, privacy leakage, and security threats. This motivates us to develop a distributed, privacy-preserved, and...
Deep Reinforcement Learning-based Capacity Scheduling for PV-Battery Storage System Investor-owned photovoltaic-battery storage systems (PV-BSS) can gain revenue by providing stacked services, including PV charging and frequency regulation, and by performing energy arbitrage. Capacity scheduling (CS) is a crucial component of PV-BSS energy management, aiming to ensure the secure and economic operation of the PV-BSS. This article proposes a Proximal Policy Optimization (PPO)-based...
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
JPEG Error Analysis and Its Applications to Digital Image Forensics JPEG is one of the most extensively used image formats. Understanding the inherent characteristics of JPEG may play a useful role in digital image forensics. In this paper, we introduce JPEG error analysis to the study of image forensics. The main errors of JPEG include quantization, rounding, and truncation errors. Through theoretically analyzing the effects of these errors on single and double JPEG compression, we have developed three novel schemes for image forensics including identifying whether a bitmap image has previously been JPEG compressed, estimating the quantization steps of a JPEG image, and detecting the quantization table of a JPEG image. Extensive experimental results show that our new methods significantly outperform existing techniques especially for the images of small sizes. We also show that the new method can reliably detect JPEG image blocks which are as small as 8 × 8 pixels and compressed with quality factors as high as 98. This performance is important for analyzing and locating small tampered regions within a composite image.
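One consequence of quantization error exploited above is that previously compressed DCT coefficients cluster at multiples of the quantization step, which lets the step be estimated from a coefficient histogram. A simplified sketch on synthetic coefficients (the threshold factor 1.5 and the noise model are assumptions, not the paper's exact scheme):

```python
import numpy as np

def estimate_quant_step(coeffs, q_max=16):
    # Score each candidate step q by the mean distance of the observed
    # coefficients to their nearest multiple of q.
    errs = np.array([np.abs(coeffs - q * np.round(coeffs / q)).mean()
                     for q in range(1, q_max + 1)])
    # Every divisor of the true step fits about equally well, so return
    # the largest step whose fit is close to the best one.
    good = np.flatnonzero(errs <= 1.5 * errs.min() + 1e-9)
    return int(good.max()) + 1

rng = np.random.default_rng(0)
dct = rng.normal(0.0, 20.0, 2000)              # synthetic DCT coefficients
observed = 6.0 * np.round(dct / 6.0) + rng.normal(0.0, 0.3, 2000)  # q=6 + rounding noise
q_hat = estimate_quant_step(observed)
```

The small additive noise stands in for the rounding/truncation errors of decompression; despite it, the clustering at multiples of 6 is strong enough to recover the step.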
Highly dynamic Destination-Sequenced Distance-Vector routing (DSDV) for mobile computers An ad-hoc network is the cooperative engagement of a collection of Mobile Hosts without the required intervention of any centralized Access Point. In this paper we present an innovative design for the operation of such ad-hoc networks. The basic idea of the design is to operate each Mobile Host as a specialized router, which periodically advertises its view of the interconnection topology with other Mobile Hosts within the network. This amounts to a new sort of routing protocol. We have investigated modifications to the basic Bellman-Ford routing mechanisms, as specified by RIP [5], to make it suitable for a dynamic and self-starting network mechanism as is required by users wishing to utilize ad hoc networks. Our modifications address some of the previous objections to the use of Bellman-Ford, related to the poor looping properties of such algorithms in the face of broken links and the resulting time dependent nature of the interconnection topology describing the links between the Mobile Hosts. Finally, we describe the ways in which the basic network-layer routing can be modified to provide MAC-layer support for ad-hoc networks.
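The sequence-number mechanism that gives DSDV its loop freedom can be sketched as a single table-update rule: prefer a newer destination sequence number, and break ties by metric (node names and numbers below are illustrative):

```python
def dsdv_update(table, neighbor, advert):
    # DSDV rule: accept an advertised route when its destination sequence
    # number is newer, or equally new with a shorter metric. Routes go
    # one hop through the advertising neighbor, so the metric grows by 1.
    for dest, (seq, metric) in advert.items():
        cur = table.get(dest)
        if cur is None or seq > cur[0] or (seq == cur[0] and metric + 1 < cur[1]):
            table[dest] = (seq, metric + 1, neighbor)
    return table

# Node A hears periodic advertisements from neighbors B and C about D.
table = {}
dsdv_update(table, "B", {"D": (100, 2)})       # D via B: seq 100, 3 hops
dsdv_update(table, "C", {"D": (100, 1)})       # same seq, shorter: switch to C
dsdv_update(table, "B", {"D": (98, 0)})        # stale sequence number: ignored
```

Because a route can only be replaced by one with a fresher sequence number (or an equally fresh, strictly better metric), stale information from broken links cannot re-form routing loops.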
The FERET Evaluation Methodology for Face-Recognition Algorithms Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems. The Face Recognition Technology (FERET) program has addressed both issues through the FERET database of facial images and the establishment of the FERET tests. To date, 14,126 images from 1,199 individuals are included in the FERET database, which is divided into development and sequestered portions of the database. In September 1996, the FERET program administered the third in a series of FERET face-recognition tests. The primary objectives of the third test were to 1) assess the state of the art, 2) identify future areas of research, and 3) measure algorithm performance.
Neural fitted q iteration – first experiences with a data efficient neural reinforcement learning method This paper introduces NFQ, an algorithm for efficient and effective training of a Q-value function represented by a multi-layer perceptron. Based on the principle of storing and reusing transition experiences, a model-free, neural network based Reinforcement Learning algorithm is proposed. The method is evaluated on three benchmark problems. It is shown empirically, that reasonably few interactions with the plant are needed to generate control policies of high quality.
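The store-and-reuse principle above can be sketched with the batch fitted-Q loop, here with a lookup table standing in for NFQ's multi-layer perceptron and a toy 3-state chain where each step costs 1 (so the agent minimizes cost-to-go):

```python
import numpy as np

def fitted_q_iteration(transitions, n_states, n_actions, gamma=0.95, sweeps=50):
    # NFQ's batch principle: repeatedly (re)fit Q to targets
    # r + gamma * min_a' Q(s', a') computed over the WHOLE stored
    # transition set, instead of single online updates.
    Q = np.zeros((n_states, n_actions))
    for _ in range(sweeps):
        targets = {(s, a): r + gamma * Q[s2].min() * (not done)
                   for (s, a, r, s2, done) in transitions}
        for (s, a), t in targets.items():
            Q[s, a] = t                        # "training" = exact fit here
    return Q

# 3-state chain; action 1 moves right toward the goal (state 2), action 0
# falls back to state 0; every transition costs 1.
trans = [(0, 1, 1.0, 1, False), (1, 1, 1.0, 2, True),
         (0, 0, 1.0, 0, False), (1, 0, 1.0, 0, False)]
Q = fitted_q_iteration(trans, n_states=3, n_actions=2)
```

The greedy (cost-minimizing) policy moves right from both non-goal states, and the fixed point is reached after a handful of sweeps over the same stored transitions.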
Labels and event processes in the Asbestos operating system Asbestos, a new operating system, provides novel labeling and isolation mechanisms that help contain the effects of exploitable software flaws. Applications can express a wide range of policies with Asbestos's kernel-enforced labels, including controls on interprocess communication and system-wide information flow. A new event process abstraction defines lightweight, isolated contexts within a single process, allowing one process to act on behalf of multiple users while preventing it from leaking any single user's data to others. A Web server demonstration application uses these primitives to isolate private user data. Since the untrusted workers that respond to client requests are constrained by labels, exploited workers cannot directly expose user data except as allowed by application policy. The server application requires 1.4 memory pages per user for up to 145,000 users and achieves connection rates similar to Apache, demonstrating that additional security can come at an acceptable cost.
Switching Stabilization for a Class of Slowly Switched Systems In this technical note, the problem of switching stabilization for slowly switched linear systems is investigated. In particular, the considered systems can be composed of all unstable subsystems. Based on the invariant subspace theory, the switching signal with mode-dependent average dwell time (MDADT) property is designed to exponentially stabilize the underlying system. Furthermore, sufficient condition of stabilization for switched systems with all stable subsystems under MDADT switching is also given. The correctness and effectiveness of the proposed approaches are illustrated by a numerical example.
An evolutionary programming approach for securing medical images using watermarking scheme in invariant discrete wavelet transformation. •The proposed watermarking scheme utilized improved discrete wavelet transformation (IDWT) to retrieve the invariant wavelet domain.•The entropy mechanism is used to identify the suitable region for insertion of watermark. This will improve the imperceptibility and robustness of the watermarking procedure.•The scaling factors such as PSNR and NC are considered for evaluation of the proposed method and the Particle Swarm Optimization is employed to optimize the scaling factors.
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education is highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets, to compensate for robot’s limitations, however, this study applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. Robot’s behavioral style (social or neutral) hardly differed overall, however, seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
Scores: 1.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0, 0
Plan-And-Write: Towards Better Automatic Storytelling. Automatic storytelling is challenging since it requires generating long, coherent natural language to describe a sensible sequence of events. Despite considerable efforts on automatic story generation in the past, prior work is either restricted in plot planning, or can only generate stories in a narrow domain. In this paper, we explore open-domain story generation that writes stories given a title (topic) as input. We propose a plan-and-write hierarchical generation framework that first plans a storyline, and then generates a story based on the storyline. We compare two planning strategies. The dynamic schema interweaves story planning and its surface realization in text, while the static schema plans out the entire storyline before generating stories. Experiments show that with explicit storyline planning, the generated stories are more diverse, coherent, and on topic than those generated without creating a full plan, according to both automatic and human evaluations.
NLTK: the natural language toolkit The Natural Language Toolkit is a suite of program modules, data sets, tutorials and exercises, covering symbolic and statistical natural language processing. NLTK is written in Python and distributed under the GPL open source license. Over the past three years, NLTK has become popular in teaching and research. We describe the toolkit and report on its current state of development.
WP:clubhouse?: an exploration of Wikipedia's gender imbalance Wikipedia has rapidly become an invaluable destination for millions of information-seeking users. However, media reports suggest an important challenge: only a small fraction of Wikipedia's legion of volunteer editors are female. In the current work, we present a scientific exploration of the gender imbalance in the English Wikipedia's population of editors. We look at the nature of the imbalance itself, its effects on the quality of the encyclopedia, and several conflict-related factors that may be contributing to the gender gap. Our findings confirm the presence of a large gender gap among editors and a corresponding gender-oriented disparity in the content of Wikipedia's articles. Further, we find evidence hinting at a culture that may be resistant to female participation.
Understanding Back-Translation at Scale. An effective method to improve neural machine translation with monolingual data is to augment the parallel training corpus with back-translations of target language sentences. This work broadens the understanding of back-translation and investigates a number of methods to generate synthetic source sentences. We find that in all but resource-poor settings back-translations obtained via sampling or noised beam outputs are most effective. Our analysis shows that sampling or noisy synthetic data gives a much stronger training signal than data generated by beam or greedy search. We also examine how synthetic data compares to genuine bitext and study various domain effects. Finally, we scale to hundreds of millions of monolingual sentences and achieve a new state of the art of 35 BLEU on the WMT'14 English-German test set.
Amazon Mechanical Turk: Gold mine or coal mine?
Gender and Dialect Bias in YouTube's Automatic Captions.
A standalone RFID Indoor Positioning System Using Passive Tags Indoor positioning systems (IPSs) locate objects in closed structures such as office buildings, hospitals, stores, factories, and warehouses, where Global Positioning System devices generally do not work. Most available systems apply wireless concepts, optical tracking, and/or ultrasound. This paper presents a standalone IPS using radio frequency identification (RFID) technology. The concept is ba...
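The reference-tag idea behind passive-RFID positioning can be sketched in a few lines. This is a LANDMARC-style nearest-neighbour illustration, not the system described above; the reader count, RSSI values, tag positions, and k are invented for the example.

```python
import math

# Reference tags at known positions with example RSSI readings from 4 readers;
# the target's position is estimated as the centroid of the k reference tags
# whose RSSI vectors are closest to the target's.
ref_tags = {
    (0.0, 0.0): [-40, -55, -60, -70],
    (0.0, 4.0): [-55, -40, -70, -60],
    (4.0, 0.0): [-60, -70, -40, -55],
    (4.0, 4.0): [-70, -60, -55, -40],
}

def estimate_position(target_rssi, ref_tags, k=2):
    """Centroid of the k reference tags nearest to target_rssi in signal space."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(ref_tags, key=lambda pos: dist(ref_tags[pos], target_rssi))[:k]
    return tuple(sum(c) / k for c in zip(*nearest))

pos = estimate_position([-42, -53, -62, -71], ref_tags, k=2)
```

With k = 2 the two best-matching reference tags are averaged, so the estimate falls between them rather than snapping to a single tag position.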
Model-based periodic event-triggered control for linear systems Periodic event-triggered control (PETC) is a control strategy that combines ideas from conventional periodic sampled-data control and event-triggered control. By communicating periodically sampled sensor and controller data only when needed to guarantee stability or performance properties, PETC is capable of reducing the number of transmissions significantly, while still retaining a satisfactory closed-loop behavior. In this paper, we will study observer-based controllers for linear systems and propose advanced event-triggering mechanisms (ETMs) that will reduce communication in both the sensor-to-controller channels and the controller-to-actuator channels. By exploiting model-based computations, the new classes of ETMs will outperform existing ETMs in the literature. To model and analyze the proposed classes of ETMs, we present two frameworks based on perturbed linear and piecewise linear systems, leading to conditions for global exponential stability and L2-gain performance of the resulting closed-loop systems in terms of linear matrix inequalities. The proposed analysis frameworks can be used to make tradeoffs between the network utilization on the one hand and the performance in terms of L2-gains on the other. In addition, we will show that the closed-loop performance realized by an observer-based controller, implemented in a conventional periodic time-triggered fashion, can be recovered arbitrarily closely by a PETC implementation. This provides a justification for emulation-based design. Next to centralized model-based ETMs, we will also provide a decentralized setup suitable for large-scale systems, where sensors and actuators are physically distributed over a wide area. The improvements realized by the proposed model-based ETMs will be demonstrated using numerical examples.
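The transmission-saving mechanism can be illustrated with a toy scalar loop using a common relative-threshold triggering rule, not the paper's observer-based LMI design; the plant, gains, threshold, and horizon below are all invented for the sketch.

```python
# Scalar plant x+ = a_p*x + b*u. At every periodic sampling instant the sensor
# transmits a fresh state to the controller only if the controller's held copy
# deviates from the true state by more than sigma * |x| (a relative threshold).
a_p, b, K, sigma = 1.02, 1.0, 0.04, 0.3
x, x_held = 5.0, 5.0
transmissions = 0
for _ in range(300):
    if abs(x - x_held) > sigma * abs(x):   # event condition, checked periodically
        x_held = x                          # transmit: controller gets the true state
        transmissions += 1
    u = -K * x_held                         # control uses the last transmitted state
    x = a_p * x + b * u
```

The state still decays toward zero, but only a small fraction of the 300 sampling instants trigger a transmission.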
Hierarchical mesh segmentation based on fitting primitives In this paper, we describe a hierarchical face clustering algorithm for triangle meshes based on fitting primitives belonging to an arbitrary set. The method proposed is completely automatic, and generates a binary tree of clusters, each of which is fitted by one of the primitives employed. Initially, each triangle represents a single cluster; at every iteration, all the pairs of adjacent clusters are considered, and the one that can be better approximated by one of the primitives forms a new single cluster. The approximation error is evaluated using the same metric for all the primitives, so that it makes sense to choose which is the most suitable primitive to approximate the set of triangles in a cluster. Based on this approach, we have implemented a prototype that uses planes, spheres and cylinders, and found experimentally that for meshes made of 100K faces, the whole binary tree of clusters can be built in about 8 s on a standard PC. The framework described here has natural application in reverse engineering processes, but it has also been tested for surface denoising, feature recovery and character skinning.
Two Algorithms for Constructing a Delaunay Triangulation This paper provides a unified discussion of the Delaunay triangulation. Its geometric properties are reviewed and several applications are discussed. Two algorithms are presented for constructing the triangulation over a planar set of N points. The first algorithm uses a divide-and-conquer approach. It runs in O(N log N) time, which is asymptotically optimal. The second algorithm is iterative and requires O(N^2) time in the worst case. However, its average case performance is comparable to that of the first algorithm.
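The empty-circumcircle property that characterizes the Delaunay triangulation can be checked by brute force. The sketch below is an O(N^4) illustration of that property, not either of the two algorithms the paper presents, and the four test points are arbitrary.

```python
from itertools import combinations

def circumcircle(a, b, c):
    """Circumcenter and squared radius of triangle abc (None if degenerate)."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), (ax - ux) ** 2 + (ay - uy) ** 2

def naive_delaunay(points):
    """Keep every triangle whose circumcircle contains no other input point."""
    triangles = []
    for tri in combinations(range(len(points)), 3):
        cc = circumcircle(*(points[i] for i in tri))
        if cc is None:
            continue
        (ux, uy), r2 = cc
        if all((px - ux) ** 2 + (py - uy) ** 2 > r2 - 1e-9
               for k, (px, py) in enumerate(points) if k not in tri):
            triangles.append(tri)
    return triangles

# One point inside the hull of the other three: it splits them into 3 triangles.
pts = [(0.0, 0.0), (3.0, 0.0), (1.5, 3.0), (1.5, 1.0)]
tris = naive_delaunay(pts)
```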
Design, Implementation, and Experimental Results of a Quaternion-Based Kalman Filter for Human Body Motion Tracking Real-time tracking of human body motion is an important technology in synthetic environments, robotics, and other human-computer interaction applications. This paper presents an extended Kalman filter designed for real-time estimation of the orientation of human limb segments. The filter processes data from small inertial/magnetic sensor modules containing triaxial angular rate sensors, accelerometers, and magnetometers. The filter represents rotation using quaternions rather than Euler angles or axis/angle pairs. Preprocessing of the acceleration and magnetometer measurements using the Quest algorithm produces a computed quaternion input for the filter. This preprocessing reduces the dimension of the state vector and makes the measurement equations linear. Real-time implementation and testing results of the quaternion-based Kalman filter are presented. Experimental results validate the filter design, and show the feasibility of using inertial/magnetic sensor modules for real-time human body motion tracking
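The preprocessing step, turning a pair of vector observations into an orientation estimate, can be illustrated with the simpler TRIAD construction (not the QUEST algorithm the paper uses); the reference directions below are idealized gravity and magnetic-north vectors chosen for the example.

```python
import numpy as np

def triad(b1, b2, r1, r2):
    """TRIAD attitude estimate: a rotation R with R @ b_i ~= r_i for both pairs.

    b1, b2: observations in the body frame (e.g. accelerometer, magnetometer);
    r1, r2: the same physical directions expressed in the reference frame.
    """
    def frame(v, w):
        t1 = v / np.linalg.norm(v)
        t2 = np.cross(v, w)
        t2 /= np.linalg.norm(t2)
        return np.column_stack([t1, t2, np.cross(t1, t2)])
    return frame(r1, r2) @ frame(b1, b2).T

# Body frame rotated 90 degrees about z relative to the reference frame:
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
grav_ref = np.array([0.0, 0.0, 1.0])
mag_ref = np.array([1.0, 0.0, 0.0])
# Sensors observe the reference directions rotated into the body frame (R^T r):
R_est = triad(Rz.T @ grav_ref, Rz.T @ mag_ref, grav_ref, mag_ref)
```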
Reinforcement Q-learning for optimal tracking control of linear discrete-time systems with unknown dynamics. In this paper, a novel approach based on the Q-learning algorithm is proposed to solve the infinite-horizon linear quadratic tracker (LQT) for unknown discrete-time systems in a causal manner. It is assumed that the reference trajectory is generated by a linear command generator system. An augmented system composed of the original system and the command generator is constructed and it is shown that the value function for the LQT is quadratic in terms of the state of the augmented system. Using the quadratic structure of the value function, a Bellman equation and an augmented algebraic Riccati equation (ARE) for solving the LQT are derived. In contrast to the standard solution of the LQT, which requires the solution of an ARE and a noncausal difference equation simultaneously, in the proposed method the optimal control input is obtained by only solving an augmented ARE. A Q-learning algorithm is developed to solve online the augmented ARE without any knowledge about the system dynamics or the command generator. Convergence to the optimal solution is shown. A simulation example is used to verify the effectiveness of the proposed control scheme.
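With the system matrices known, the augmented Riccati equation the paper derives can be solved by plain value iteration; the sketch below does that for an illustrative scalar plant tracking a constant reference. The paper's contribution is reaching the same solution model-free via Q-learning, which this sketch does not attempt; a discount factor is included because the command generator is only marginally stable, and all numbers are invented.

```python
import numpy as np

# Augmented state: (plant state x, reference-generator state r).
A = np.array([[0.8, 0.0],    # plant pole 0.8
              [0.0, 1.0]])   # command generator: r_{k+1} = r_k (constant reference)
B = np.array([[1.0],
              [0.0]])
Q = np.array([[1.0, -1.0],   # penalises the tracking error (x - r)^2
              [-1.0, 1.0]])
R = np.array([[0.1]])
gamma = 0.9                  # discount keeps the cost finite for the marginal mode

P = np.zeros((2, 2))
for _ in range(1000):        # value iteration on the discounted augmented ARE
    K = np.linalg.solve(R + gamma * B.T @ P @ B, gamma * B.T @ P @ A)
    P = Q + gamma * A.T @ P @ (A - B @ K)
```

The converged gain K contains a feedback term on the plant state and a feedforward term on the reference state, which is exactly the structure the augmented formulation buys.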
A multi-objective and PSO based energy efficient path design for mobile sink in wireless sensor networks. Data collection through mobile sink (MS) in wireless sensor networks (WSNs) is an effective solution to the hot-spot or sink-hole problem caused by multi-hop routing using the static sink. Rendezvous point (RP) based MS path design is a common and popular technique used in this regard. However, design of the optimal path is a well-known NP-hard problem. Therefore, an evolutionary approach like multi-objective particle swarm optimization (MOPSO) can prove to be a very promising and reasonable approach to solve the same. In this paper, we first present a Linear Programming formulation for the stated problem and then, propose an MOPSO-based algorithm to design an energy efficient trajectory for the MS. The algorithm is presented with an efficient particle encoding scheme and derivation of a proficient multi-objective fitness function. We use Pareto dominance in MOPSO for obtaining both local and global best guides for each particle. We carry out rigorous simulation experiments on the proposed algorithm and compare the results with two existing algorithms namely, tree cluster based data gathering algorithm (TCBDGA) and energy aware sink relocation (EASR). The results demonstrate that the proposed algorithm performs better than both of them in terms of various performance metrics. The results are also validated through the statistical test, analysis of variance (ANOVA) and its least significant difference (LSD) post hoc analysis.
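Pareto dominance, used above to pick the local and global best guides, reduces to a short predicate; minimisation is assumed and the objective vectors are made up for the example.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(front):
    """Filter a list of objective vectors down to its Pareto front."""
    return [p for p in front if not any(dominates(q, p) for q in front)]

points = [(1, 5), (2, 3), (3, 4), (4, 1), (2, 2)]
front = non_dominated(points)   # (2, 3) and (3, 4) are dominated by (2, 2)
```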
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores: 1.24, 0.24, 0.24, 0.24, 0.12, 0.01, 0, 0, 0, 0, 0, 0, 0, 0
Multiuser Joint Task Offloading and Resource Optimization in Proximate Clouds Proximate cloud computing enables computationally intensive applications on mobile devices, providing a rich user experience. However, remote resource bottlenecks limit the scalability of offloading, requiring optimization of the offloading decision and resource utilization. To this end, in this paper, we leverage the variability in capabilities of mobile devices and user preferences. Our system u...
BeCome: Blockchain-Enabled Computation Offloading for IoT in Mobile Edge Computing Benefiting from the real-time processing ability of edge computing, computing tasks requested by smart devices in the Internet of Things are offloaded to edge computing devices (ECDs) for implementation. However, ECDs are often overloaded or underloaded with disproportionate resource requests. In addition, during the process of task offloading, the transmitted information is vulnerable, which can result in data incompleteness. In view of this challenge, a blockchain-enabled computation offloading method, named BeCome, is proposed in this article. Blockchain technology is employed in edge computing to ensure data integrity. Then, the nondominated sorting genetic algorithm III is adopted to generate strategies for balanced resource allocation. Furthermore, simple additive weighting and multicriteria decision making are utilized to identify the optimal offloading strategy. Finally, performance evaluations of BeCome are given through simulation experiments.
Mobile cloud computing [Guest Editorial] Mobile Cloud Computing refers to an infrastructure where both the data storage and the data processing occur outside of the mobile device. Mobile cloud applications move the computing power and data storage away from mobile devices and into the cloud, bringing applications and mobile computing not only to smartphone users but also to a much broader range of mobile subscribers.
Optimization of Radio and Computational Resources for Energy Efficiency in Latency-Constrained Application Offloading Providing femto-access points (FAPs) with computational capabilities will allow (either total or partial) offloading of highly demanding applications from smart-phones to the so called femto-cloud. Such offloading promises to be beneficial in terms of battery saving at the mobile terminal (MT) and/or in latency reduction in the execution of applications. However, for this promise to become a reality, the energy and/or the time required for the communication process must be compensated by the energy and/or the time savings that result from the remote computation at the FAPs. For this problem, we provide in this paper a framework for the joint optimization of the radio and computational resource usage exploiting the tradeoff between energy consumption and latency. Multiple antennas are assumed to be available at the MT and the serving FAP. As a result of the optimization, the optimal communication strategy (e.g., transmission power, rate, precoder) is obtained, as well as the optimal distribution of the computational load between the handset and the serving FAP. This paper also establishes the conditions under which total or no offloading are optimal, determines which is the minimum affordable latency in the execution of the application, and analyzes as a particular case the minimization of the total consumed energy without latency constraints.
Joint Computation Offloading and Resource Allocation for MEC-Enabled IoT Systems With Imperfect CSI Mobile-edge computing (MEC) is considered as a promising technology to reduce the energy consumption (EC) and task accomplishment latency of smart mobile user equipments (UEs) by offloading computation-intensive tasks to the nearby MEC servers. However, the Quality of Experience (QoE) for computation highly depends on the wireless channel conditions when computation tasks are offloaded to MEC servers. In this article, by considering the imperfect channel-state information (CSI), we study the joint offloading decision, transmit power, and computation resources to minimize the weighted sum of EC of all UEs while guaranteeing the probabilistic constraint in multiuser MEC-enabled Internet-of-Things (IoT) networks. This formulated optimization problem is a stochastic mixed-integer nonconvex problem and challenging to solve. To deal with it, we develop a low-complexity two-stage algorithm. In the first stage, we solve the relaxed version of the original problem to obtain offloading priorities of all UEs. In the second stage, we solve an iterative optimization problem to obtain a suboptimal offloading decision. As both stages include solving a series of nonconvex stochastic problems, we present a constrained stochastic successive convex approximation-based algorithm to obtain a near-optimal solution with low complexity. The numerical results demonstrate that the proposed algorithm provides comparable performance to existing approaches.
Joint Task Offloading and Resource Allocation for Multi-Server Mobile-Edge Computing Networks Mobile-edge computing (MEC) is an emerging paradigm that provides a capillary distribution of cloud computing capabilities to the edge of the wireless access network, enabling rich services and applications in close proximity to the end users. In this paper, an MEC enabled multi-cell wireless network is considered where each base station (BS) is equipped with a MEC server that assists mobile users...
A survey on ear biometrics Recognizing people by their ear has recently received significant attention in the literature. Several reasons account for this trend: first, ear recognition does not suffer from some problems associated with other non-contact biometrics, such as face recognition; second, it is the most promising candidate for combination with the face in the context of multi-pose face recognition; and third, the ear can be used for human recognition in surveillance videos where the face may be occluded completely or in part. Further, the ear appears to degrade little with age. Even though current ear detection and recognition systems have reached a certain level of maturity, their success is limited to controlled indoor conditions. In addition to variation in illumination, other open research problems include hair occlusion, earprint forensics, ear symmetry, ear classification, and ear individuality. This article provides a detailed survey of research conducted in ear detection and recognition. It provides an up-to-date review of the existing literature revealing the current state-of-art for not only those who are working in this area but also for those who might exploit this new approach. Furthermore, it offers insights into some unsolved ear recognition problems as well as ear databases available for researchers.
DeepFace: Closing the Gap to Human-Level Performance in Face Verification In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to date, an identity labeled dataset of four million facial images belonging to more than 4,000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35% on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27%, closely approaching human-level performance.
Communication theory of secrecy systems The problems of cryptography and secrecy systems furnish an interesting application of communication theory. In this paper a theory of secrecy systems is developed. The approach is on a theoretical level and is intended to complement the treatment found in standard works on cryptography. There, a detailed study is made of the many standard types of codes and ciphers, and of the ways of breaking them. We will be more concerned with the general mathematical structure and properties of secrecy systems.
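The canonical example of a perfectly secret system in Shannon's sense is the one-time pad: a uniformly random key as long as the message makes the ciphertext statistically independent of the plaintext. A minimal sketch (the message is arbitrary; `secrets` supplies the uniform key):

```python
import secrets

msg = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(msg))             # uniform key, same length as msg
cipher = bytes(m ^ k for m, k in zip(msg, key))  # encrypt: XOR with the key
plain = bytes(c ^ k for c, k in zip(cipher, key))  # decrypt: XOR again
```

Perfect secrecy holds only if the key is truly uniform, as long as the message, and never reused.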
A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization In recent years, there has been a growing interest in experimental analysis in the field of evolutionary algorithms. This is noticeable in the numerous papers that analyze and propose different types of problems, such as the basis for experimental comparisons of algorithms, proposals of different methodologies for comparison, or proposals for the use of different statistical techniques in algorithm comparison. In this paper, we focus our study on the use of statistical techniques in the analysis of evolutionary algorithms' behaviour over optimization problems. A study about the required conditions for statistical analysis of the results is presented by using some models of evolutionary algorithms for real-coding optimization. This study is conducted in two ways: single-problem analysis and multiple-problem analysis. The results obtained state that a parametric statistical analysis may not be appropriate, especially when we deal with multiple-problem results. In multiple-problem analysis, we propose the use of non-parametric statistical tests given that they are less restrictive than parametric ones and they can be used over small size samples of results. As a case study, we analyze the published results for the algorithms presented in the CEC'2005 Special Session on Real Parameter Optimization by using non-parametric test procedures.
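The non-parametric tests advocated above are simple to compute. As an illustration, the Wilcoxon signed-rank statistic for paired algorithm results can be obtained as follows; this is the textbook construction, not the authors' code, and the sample values in the test are invented.

```python
def wilcoxon_signed_rank(x, y):
    """Wilcoxon signed-rank statistic W = min(W+, W-) for paired samples.

    Zero differences are discarded; ties in |d| receive average ranks.
    """
    d = [a - b for a, b in zip(x, y) if a != b]
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1                      # extend the block of tied |d| values
        avg = (i + j) / 2 + 1           # average rank of the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, di in zip(ranks, d) if di > 0)
    w_minus = sum(r for r, di in zip(ranks, d) if di < 0)
    return min(w_plus, w_minus)
```

Small W (relative to the null distribution for the sample size) indicates a systematic difference between the two paired result sets.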
Avoiding the uncanny valley: robot appearance, personality and consistency of behavior in an attention-seeking home scenario for a robot companion This article presents the results of video-based Human Robot Interaction (HRI) trials which investigated people's perceptions of different robot appearances and associated attention-seeking features and behaviors displayed by robots with different appearance and behaviors. The HRI trials studied the participants' preferences for various features of robot appearance and behavior, as well as their personality attributions towards the robots compared to their own personalities. Overall, participants tended to prefer robots with more human-like appearance and attributes. However, systematic individual differences in the dynamic appearance ratings are not consistent with a universal effect. Introverts and participants with lower emotional stability tended to prefer the mechanical looking appearance to a greater degree than other participants. It is also shown that it is possible to rate individual elements of a particular robot's behavior and then assess the contribution, or otherwise, of that element to the overall perception of the robot by people. Relating participants' dynamic appearance ratings of individual robots to independent static appearance ratings provided evidence that could be taken to support a portion of the left hand side of Mori's theoretically proposed `uncanny valley' diagram. Suggestions for future work are outlined.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.05, 0, 0, 0, 0, 0, 0, 0, 0
Redundancy resolution of the human arm and an upper limb exoskeleton. The human arm has 7 degrees of freedom (DOF) while only 6 DOF are required to position the wrist and orient the palm. Thus, the inverse kinematics of a human arm has a nonunique solution. Resolving this redundancy becomes critical as the human interacts with a wearable robot, and the inverse kinematics solutions of these two coupled systems must be identical to guarantee a seamless integration. The redundancy of the arm can be formulated by defining the swivel angle, the rotation angle of the plane defined by the upper and lower arm around a virtual axis that connects the shoulder and wrist joints. Analyzing reaching tasks recorded with a motion capture system indicates that the swivel angle is selected such that when the elbow joint is flexed, the palm points to the head. Based on these experimental results, a new criterion is formed to resolve the human arm redundancy. This criterion was implemented in the control algorithm of an upper limb 7-DOF wearable robot. Experimental results indicate that by using the proposed redundancy resolution criterion, the error between the predicted and the actual swivel angle adopted by the motor control system is less than 5°.
Parametrization and Range of Motion of the Ball-and-Socket Joint The ball-and-socket joint model is used to represent articulations with three rotational degrees of freedom (DOF), such as the human shoulder and the hip. The goal of this paper is to discuss two related problems: the parametrization and the definition of realistic joint boundaries for ball-and-socket joints. Doing this accurately is difficult, yet important for motion generators (such as inverse kinematics and dynamics engines) and for motion manipulators (such as motion retargeting), since the resulting motions should satisfy the anatomic constraints. The difficulty mainly comes from the complex nature of 3D orientations and of human articulations. The underlying question of parametrization must be addressed before realistic and meaningful boundaries can be defined over the set of 3D orientations. In this paper, we review and compare several known methods, and advocate the use of the swing-and-twist parametrization, which partitions an arbitrary orientation into two meaningful components. The related problem of induced twist is discussed. Finally, we review some joint boundary representations based on this decomposition, and show an example.
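The swing-and-twist decomposition advocated above can be written in a few lines: project the quaternion's vector part onto the twist axis to get the twist, and peel it off to get the swing. This sketch uses a w-first quaternion convention and assumes the axis is a unit vector; the test values are arbitrary.

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of two quaternions stored as [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def swing_twist(q, axis):
    """Split unit quaternion q into (swing, twist) with q = swing * twist,
    where twist rotates about the given unit axis and swing is orthogonal to it."""
    proj = np.dot(q[1:], axis) * axis        # vector part projected onto the axis
    twist = np.array([q[0], *proj])
    twist /= np.linalg.norm(twist)
    twist_conj = twist * np.array([1.0, -1.0, -1.0, -1.0])
    swing = quat_mul(q, twist_conj)
    return swing, twist
```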
Positional kinematics of humanoid arms We present the positional abilities of a humanoid manipulator based on an improved kinematical model of the human arm. This was synthesized from electro-optical measurements of healthy female and male subjects. The model possesses three joints: inner shoulder joint, outer shoulder joint and elbow joint. The first functions as the human sternoclavicular joint, the second functions as the human glenohumeral joint, and the last replicates the human humeroulnar rotation. There are three links included, the forearm and the upper arm link which are of a constant length, and the shoulder link which is expandable. Mathematical interrelations between the joint coordinates are also taken into consideration. We determined the reachability of a humanoid arm, treated its orienting redundancy in the shoulder complex and the positional redundancy in the shoulder-elbow complexes, and discussed optimum configurations in executing different tasks. The results are important for the design and control of humanoid robots, in medicine and sports.
Design of a Bio-Inspired Wearable Exoskeleton for Applications in Robotics In this paper we explain the methodology we adopted to design the kinematic structure of a multi-contact-point haptic interface. We based our concept on the analysis of the human arm anatomy and kinematics with the intent of synthesizing a system that will be able to interface with the human limb in a very natural way. We propose a simplified kinematic model of the human arm using a notation coming from the robotics field. To find the best kinematic architecture we employed real movement data, measured from a human subject, and integrated them with the kinematic model of the exoskeleton; this allows us to test the system before its construction and to formalize specific requirements. We also implemented and tested a first passive version of the shoulder joint.
A Minimal Set Of Coordinates For Describing Humanoid Shoulder Motion The kinematics of the anatomical shoulder are analysed and modelled as a parallel mechanism similar to a Stewart platform. A new method is proposed to describe the shoulder kinematics with minimal coordinates and solve the indeterminacy. The minimal coordinates are defined from bony landmarks and the scapulothoracic kinematic constraints. Independent from one another, they uniquely characterise the shoulder motion. A humanoid mechanism is then proposed with identical kinematic properties. It is then shown how minimal coordinates can be obtained for this mechanism and how the coordinates simplify both the motion-planning task and trajectory-tracking control. Lastly, the coordinates are also shown to have an application in the field of biomechanics where they can be used to model the scapulohumeral rhythm.
Elbow Musculoskeletal Model for Industrial Exoskeleton with Modulated Impedance Based on Operator's Arm Stiffness.
Minimum acceleration criterion with constraints implies bang-bang control as an underlying principle for optimal trajectories of arm reaching movements. Rapid arm-reaching movements serve as an excellent test bed for any theory about trajectory formation. How are these movements planned? A minimum acceleration criterion has been examined in the past, and the solution obtained, based on the Euler-Poisson equation, failed to predict that the hand would begin and end the movement at rest (i.e., with zero acceleration). Therefore, this criterion was rejected in favor of the minimum jerk, which was proved to be successful in describing many features of human movements. This letter follows an alternative approach and solves the minimum acceleration problem with constraints using Pontryagin's minimum principle. We use the minimum principle to obtain minimum acceleration trajectories and use the jerk as a control signal. In order to find a solution that does not include nonphysiological impulse functions, constraints on the maximum and minimum jerk values are assumed. The analytical solution provides a three-phase piecewise constant jerk signal (bang-bang control) where the magnitude of the jerk and the two switching times depend on the magnitude of the maximum and minimum available jerk values. This result fits the observed trajectories of reaching movements and takes into account both the extrinsic coordinates and the muscle limitations in a single framework. The minimum acceleration with constraints principle is discussed as a unifying approach for many observations about the neural control of movements.
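The three-phase bang-bang jerk solution can be checked numerically: integrating a piecewise-constant jerk profile with symmetric switching times yields a movement that starts and ends at rest with zero acceleration. The jerk magnitude, duration, and switching times below are illustrative, not fitted to reaching data.

```python
# Three-phase bang-bang jerk: +J on [0, T/4), -J on [T/4, 3T/4), +J on [3T/4, T].
J, T, n = 2.0, 1.0, 40_000
dt = T / n
x = v = a = 0.0
for i in range(n):
    t = (i + 0.5) * dt                       # midpoint of the integration step
    jerk = J if (t < T / 4 or t >= 3 * T / 4) else -J
    a += jerk * dt                           # semi-implicit Euler integration
    v += a * dt
    x += v * dt
```

At t = T the acceleration and velocity have both returned to zero while the position has advanced, i.e. a rest-to-rest reach with zero boundary acceleration, which is what the constrained minimum-acceleration solution requires.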
Hiding Traces of Resampling in Digital Images Resampling detection has become a standard tool for forensic analyses of digital images. This paper presents new variants of image transformation operations which are undetectable by resampling detectors based on periodic variations in the residual signal of local linear predictors in the spatial domain. The effectiveness of the proposed method is supported with evidence from experiments on a large image database for various parameter settings. We benchmark detectability as well as the resulting image quality against conventional linear and bicubic interpolation and interpolation with a sinc kernel. These early findings on “counter-forensic” techniques put into question the reliability of known forensic tools against smart counterfeiters in general, and might serve as benchmarks and motivation for the development of much improved forensic techniques.
Energy-Efficient Resource Allocation for Mobile-Edge Computation Offloading. Mobile-edge computation offloading (MECO) off-loads intensive mobile computation to clouds located at the edges of cellular networks. Thereby, MECO is envisioned as a promising technique for prolonging the battery lives and enhancing the computation capacities of mobiles. In this paper, we study resource allocation for a multiuser MECO system based on time-division multiple access (TDMA) and orthogonal frequency-division multiple access (OFDMA). First, for the TDMA MECO system with infinite or finite cloud computation capacity, the optimal resource allocation is formulated as a convex optimization problem for minimizing the weighted sum mobile energy consumption under the constraint on computation latency. The optimal policy is proved to have a threshold-based structure with respect to a derived offloading priority function, which yields priorities for users according to their channel gains and local computing energy consumption. As a result, users with priorities above and below a given threshold perform complete and minimum offloading, respectively. Moreover, for the cloud with finite capacity, a sub-optimal resource-allocation algorithm is proposed to reduce the computation complexity for computing the threshold. Next, we consider the OFDMA MECO system, for which the optimal resource allocation is formulated as a mixed-integer problem. To solve this challenging problem and characterize its policy structure, a low-complexity sub-optimal algorithm is proposed by transforming the OFDMA problem to its TDMA counterpart. The corresponding resource allocation is derived by defining an average offloading priority function and shown to have close-to-optimal performance in simulation.
Experiment-driven Characterization of Full-Duplex Wireless Systems We present an experiment-based characterization of passive suppression and active self-interference cancellation mechanisms in full-duplex wireless communication systems. In particular, we consider passive suppression due to antenna separation at the same node, and active cancellation in analog and/or digital domain. First, we show that the average amount of cancellation increases for active cance...
IntrospectiveViews: an interface for scrutinizing semantic user models User models are a key component for user-adaptive systems. They represent information about users such as interests, expertise, goals, traits, etc. This information is used to achieve various adaptation effects, e.g., recommending relevant documents or products. To ensure acceptance by users, these models need to be scrutable, i.e., users must be able to view and alter them to understand and if necessary correct the assumptions the system makes about the user. However, in most existing systems, this goal is not met. In this paper, we introduce IntrospectiveViews, an interface that enables the user to view and edit her user model. Furthermore, we present the results of a formative evaluation that show the importance users give in general to different aspects of scrutable user models and also substantiate our claim that IntrospectiveViews is an appropriate realization of an interface to such models.
Finite-approximation-error-based discrete-time iterative adaptive dynamic programming. In this paper, a new iterative adaptive dynamic programming (ADP) algorithm is developed to solve optimal control problems for infinite horizon discrete-time nonlinear systems with finite approximation errors. First, a new generalized value iteration algorithm of ADP is developed to make the iterative performance index function converge to the solution of the Hamilton-Jacobi-Bellman equation. The ...
Scalable and Privacy-Preserving Data Sharing Based on Blockchain. With the development of network technology and cloud computing, data sharing is becoming increasingly popular, and many scholars have conducted in-depth research to promote its flourish. As the scale of data sharing expands, its privacy protection has become a hot issue in research. Moreover, in data sharing, the data is usually maintained in multiple parties, which brings new challenges to protect the privacy of these multi-party data. In this paper, we propose a trusted data sharing scheme using blockchain. We use blockchain to prevent the shared data from being tampered, and use the Paillier cryptosystem to realize the confidentiality of the shared data. In the proposed scheme, the shared data can be traded, and the transaction information is protected by using the (p, t)-threshold Paillier cryptosystem. We conduct experiments in cloud storage scenarios and the experimental results demonstrate the efficiency and effectiveness of the proposed scheme.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.122
0.12
0.12
0.12
0.12
0.12
0.004
0
0
0
0
0
0
0
Design and Planning of a Multiple-Charger Multiple-Port Charging System for PEV Charging Station. Investment of charging facilities is facing deficit problems in many countries at the initial development stage of plug-in electric vehicles (PEVs). In this paper, we study the charging facility planning problem faced by a PEV charging station investor who aims to serve PEV customers with random behaviors and demands (but follow a series of predicted distributions) with lower economic costs of bot...
Optimizing the Deployment of Electric Vehicle Charging Stations Using Pervasive Mobility Data. With the recent advances in battery technology and the resulting decrease in the charging times, public charging stations are becoming a viable option for Electric Vehicle (EV) drivers. Concurrently, emergence and the wide-spread use of location-tracking devices in mobile phones and wearable devices has paved the way to track individual-level human movements to an unprecedented spatial and temporal grain. Motivated by these developments, we propose a novel methodology to perform data-driven optimization of EV charging station locations. We formulate the problem as a discrete optimization problem on a geographical grid, with the objective of covering the entire demand region while minimizing a measure of drivers’ total excess driving distance to reach charging stations, the related energy overhead, and the number of charging stations. Since optimally solving the problem is computationally infeasible, we present computationally efficient solutions based on the genetic algorithm. We then apply the proposed methodology to optimize EV charging stations layout in the city of Boston, starting from Call Detail Records (CDR) of one million users over the span of 4 months. The results show that the genetic algorithm provides solutions that significantly reduce drivers’ excess driving distance to charging stations, energy overhead, and the number of charging stations required compared to both a locally-optimized feasible solution and the current charging station deployment in the Boston metro area. We further investigate the robustness of the proposed methodology and show that building upon well-known regularity of aggregate human mobility patterns, the layout computed for demands based on the single day movements preserves its advantage also in later days and months. When collectively considered, the results presented in this paper indicate the potential of data-driven approaches for optimally placing public charging facilities at urban scale.
Optimal Planning Of PEV Charging Station With Single Output Multiple Cables Charging Spots Coordinated charging can alter the profile of plug-in electric vehicle charging load and reduce the required amount of charging spots by encouraging customers to use charging spots at off-peak hours. Therefore, real-time coordinated charging should be considered at the planning stage. To enhance charging station's utilization and save corresponding investment costs by incorporating coordinated charging, a new charging spot model, namely single output multiple cables charging spot (SOMC spot), is designed in this paper. A two-stage stochastic programming model is developed for planning a public parking lot charging station equipped with SOMC spots. The first stage of the programming model is planning of SOMC spots and its objective is to obtain an optimal configuration of the charging station to minimize the station's equivalent annual costs, including investment and operation costs. The second stage of the programming model involves a probabilistic simulation procedure, in which coordinated charging is simulated, so that the influence of coordinated charging on the planning is considered. A case study of a residential parking lot charging station verifies the effectiveness of the proposed planning model. And the proposed coordinated charging for SOMC spots shows great potential in saving equivalent annual costs for providing charging services.
Optimal Electric Vehicle Fast Charging Station Placement Based on Game Theoretical Framework. To reduce the air pollution and improve the energy efficiency, many countries and cities (e.g., Singapore) are on the way of introducing electric vehicles (EVs) to replace the vehicles serving in current traffic system. Effective placement of charging stations is essential for the rapid development of EVs, because it is necessary for providing convenience for EVs and ensuring the efficiency of the...
Optimal sizing of PEV fast charging stations with Markovian demand characterization Fast charging stations are critical infrastructures to enable high penetration of plug-in electric vehicles (PEVs) into future distribution networks. They need to be carefully planned to meet charging demand as well as ensure economic benefits. Accurate estimation of PEV charging demand is the prerequisite of such planning, but a nontrivial task. This paper addresses the sizing (number of chargers...
Time-Efficient Target Tags Information Collection in Large-Scale RFID Systems By integrating the micro-sensor on RFID tags to obtain the environment information, the sensor-augmented RFID system greatly supports the applications that are sensitive to environment. To quickly collect the information from all tags, many researchers dedicate on well arranging tag replying orders to avoid the signal collisions. Compared to from all tags, collecting information from a part of tag...
A survey on ear biometrics Recognizing people by their ear has recently received significant attention in the literature. Several reasons account for this trend: first, ear recognition does not suffer from some problems associated with other non-contact biometrics, such as face recognition; second, it is the most promising candidate for combination with the face in the context of multi-pose face recognition; and third, the ear can be used for human recognition in surveillance videos where the face may be occluded completely or in part. Further, the ear appears to degrade little with age. Even though current ear detection and recognition systems have reached a certain level of maturity, their success is limited to controlled indoor conditions. In addition to variation in illumination, other open research problems include hair occlusion, earprint forensics, ear symmetry, ear classification, and ear individuality. This article provides a detailed survey of research conducted in ear detection and recognition. It provides an up-to-date review of the existing literature revealing the current state-of-art for not only those who are working in this area but also for those who might exploit this new approach. Furthermore, it offers insights into some unsolved ear recognition problems as well as ear databases available for researchers.
DeepFace: Closing the Gap to Human-Level Performance in Face Verification In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4,000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35% on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27%, closely approaching human-level performance.
Markov games as a framework for multi-agent reinforcement learning In the Markov decision process (MDP) formalization of reinforcement learning, a single adaptive agent interacts with an environment defined by a probabilistic transition function. In this solipsis-tic view, secondary agents can only be part of the environment and are therefore fixed in their behavior. The framework of Markov games allows us to widen this view to include multiple adaptive agents with interacting or competing goals. This paper considers a step in this direction in which exactly two agents with diametrically opposed goals share an environment. It describes a Q-learning-like algorithm for finding optimal policies and demonstrates its application to a simple two-player game in which the optimal policy is probabilistic.
Pors: proofs of retrievability for large files In this paper, we define and explore proofs of retrievability (PORs). A POR scheme enables an archive or back-up service (prover) to produce a concise proof that a user (verifier) can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety. A POR may be viewed as a kind of cryptographic proof of knowledge (POK), but one specially designed to handle a large file (or bitstring) F. We explore POR protocols here in which the communication costs, number of memory accesses for the prover, and storage requirements of the user (verifier) are small parameters essentially independent of the length of F. In addition to proposing new, practical POR constructions, we explore implementation considerations and optimizations that bear on previously explored, related schemes. In a POR, unlike a POK, neither the prover nor the verifier need actually have knowledge of F. PORs give rise to a new and unusual security definition whose formulation is another contribution of our work. We view PORs as an important tool for semi-trusted online archives. Existing cryptographic techniques help users ensure the privacy and integrity of files they retrieve. It is also natural, however, for users to want to verify that archives do not delete or modify files prior to retrieval. The goal of a POR is to accomplish these checks without users having to download the files themselves. A POR can also provide quality-of-service guarantees, i.e., show that a file is retrievable within a certain time bound.
On controller initialization in multivariable switching systems We consider a class of switched systems which consists of a linear MIMO and possibly unstable process in feedback interconnection with a multicontroller whose dynamics switch. It is shown how one can achieve significantly better transient performance by selecting the initial condition for every controller when it is inserted into the feedback loop. This initialization is obtained by performing the minimization of a quadratic cost function of the tracking error, controlled output, and control signal. We guarantee input-to-state stability of the closed-loop system when the average number of switches per unit of time is smaller than a specific value. If this is not the case then stability can still be achieved by adding a mild constraint to the optimization. We illustrate the use of our results in the control of a flexible beam actuated in torque. This system is unstable with two poles at the origin and contains several lightly damped modes, which can be easily excited by controller switching.
Completely Pinpointing the Missing RFID Tags in a Time-Efficient Way Radio Frequency Identification (RFID) technology has been widely used in inventory management in many scenarios, e.g., warehouses, retail stores, hospitals, etc. This paper investigates a challenging problem of complete identification of missing tags in large-scale RFID systems. Although this problem has attracted extensive attention from academy and industry, the existing work can hardly satisfy the stringent real-time requirements. In this paper, a Slot Filter-based Missing Tag Identification (SFMTI) protocol is proposed to reconcile some expected collision slots into singleton slots and filter out the expected empty slots as well as the unreconcilable collision slots, thereby achieving the improved time-efficiency. The theoretical analysis is conducted to minimize the execution time of the proposed SFMTI. We then propose a cost-effective method to extend SFMTI to the multi-reader scenarios. The extensive simulation experiments and performance results demonstrate that the proposed SFMTI protocol outperforms the most promising Iterative ID-free Protocol (IIP) by reducing nearly 45% of the required execution time, and is just within a factor of 1.18 from the lower bound of the minimum execution time.
An indoor localization solution using Bluetooth RSSI and multiple sensors on a smartphone. In this paper, we propose an indoor positioning system using a Bluetooth receiver, an accelerometer, a magnetic field sensor, and a barometer on a smartphone. The Bluetooth receiver is used to estimate distances from beacons. The accelerometer and magnetic field sensor are used to trace the movement of moving people in the given space. The horizontal location of the person is determined by received signal strength indications (RSSIs) and the traced movement. The barometer is used to measure the vertical position where a person is located. By combining RSSIs, the traced movement, and the vertical position, the proposed system estimates the indoor position of moving people. In experiments, the proposed approach showed excellent performance in localization with an overall error of 4.8%.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
A Mathematical Framework For Measuring Network Flexibility In the field of networking research, increased flexibility of new system architecture proposals, protocols, or algorithms is often stated to be a competitive advantage over its existing counterparts. However, this advantage is usually claimed only on an argumentative level and neither formally supported nor thoroughly investigated due to the lack of a unified flexibility framework. As we will show in this paper, the flexibility achieved by a system implementation can be measured, which consequently can be used to make different networking solutions quantitatively comparable with each other. The idea behind our mathematical model is to relate network flexibility to the achievable subset of the set of all possible demand changes, and to use measure theory to quantify it. As increased flexibility might come with additional system complexity and cost, our framework provides a cost model which measures how expensive it is to operate a flexible system. The introduced flexibility framework contains different normalization strategies to provide intuitive meaning to the network flexibility value as well, and also provides guidelines for generating demand changes with (non)uniform demand utilities. Finally, our network flexibility framework is applied on two different use-cases, and the benefits of a quantitative flexibility analysis compared to pure intuitive arguments are demonstrated.
Minimum interference routing of bandwidth guaranteed tunnels with MPLS traffic engineering applications This paper presents new algorithms for dynamic routing of bandwidth guaranteed tunnels, where tunnel routing requests arrive one by one and there is no a priori knowledge regarding future requests. This problem is motivated by the service provider needs for fast deployment of bandwidth guaranteed services. Offline routing algorithms cannot be used since they require a priori knowledge of all tunnel requests that are to be routed. Instead, on-line algorithms that handle requests arriving one by one and that satisfy as many potential future demands as possible are needed. The newly developed algorithms are on-line algorithms and are based on the idea that a newly routed tunnel must follow a route that does not “interfere too much” with a route that may be critical to satisfy a future demand. We show that this problem is NP-hard. We then develop path selection heuristics which are based on the idea of deferred loading of certain “critical” links. These critical links are identified by the algorithm as links that, if heavily loaded, would make it impossible to satisfy future demands between certain ingress-egress pairs. Like min-hop routing, the presented algorithm uses link-state information and some auxiliary capacity information for path selection. Unlike previous algorithms, the proposed algorithm exploits any available knowledge of the network ingress-egress points of potential future demands, even though the demands themselves are unknown. If all nodes are ingress-egress nodes, the algorithm can still be used, particularly to reduce the rejection rate of requests between a specified subset of important ingress-egress pairs. The algorithm performs well in comparison to previously proposed algorithms on several metrics like the number of rejected demands and successful rerouting of demands upon link failure.
The set cover with pairs problem We consider a generalization of the set cover problem, in which elements are covered by pairs of objects, and we are required to find a minimum cost subset of objects that induces a collection of pairs covering all elements. Formally, let U be a ground set of elements and let ${\cal S}$ be a set of objects, where each object i has a non-negative cost wi. For every $\{ i, j \} \subseteq {\cal S}$, let ${\cal C}(i,j)$ be the collection of elements in U covered by the pair { i, j }. The set cover with pairs problem asks to find a subset $A \subseteq {\cal S}$ such that $\bigcup_{ \{ i, j \} \subseteq A } {\cal C}(i,j) = U$ and such that ∑i∈Awi is minimized. In addition to studying this general problem, we are also concerned with developing polynomial time approximation algorithms for interesting special cases. The problems we consider in this framework arise in the context of domination in metric spaces and separation of point sets.
Towards a flexible functional split for cloud-RAN networks Very dense deployments of small cells are one of the key enablers to tackle the ever-growing demand on mobile bandwidth. In such deployments, centralization of RAN functions on cloud resources is envisioned to overcome severe inter-cell interference and to keep costs acceptable. However, RAN back-haul constraints need to be considered when designing the functional split between RAN front-ends and centralized equipment. In this paper we analyse constraints and outline applications of flexible RAN centralization.
Another Price to Pay: An Availability Analysis for SDN Virtualization with Network Hypervisors Communication networks are embracing the software defined networking (SDN) paradigm. Its architectural shift assumes that a remote SDN controller (SDNC) in the control plane is responsible for configuring the underlying devices of the forwarding plane. In order to support flexibility-motivated network slicing, SDN-based networks employ another entity in the control plane, a network hypervisor (NH). This paper first discusses different protection strategies for the control plane with NHs and presents the corresponding availability models, which assume possible failures of links and nodes in the forwarding plane and the control plane. An analysis of these protection alternatives is then performed so as to compare average control plane availability, average path length for the control communication that traverses NH, and infrastructure resources required to support them. Our results confirm the intuition that the NH introduction generally results in a reduction of the control plane availability, which stresses the need for appropriate protection. However, the availability achieved by each of the considered strategies is impacted differently by the node availability and the link failure probability, thus calling for a careful selection that is based on the infrastructure features.
Cerberus: The Power of Choices in Datacenter Topology Design - A Throughput Perspective The bandwidth and latency requirements of modern datacenter applications have led researchers to propose various topology designs using static, dynamic demand-oblivious (rotor), and/or dynamic demand-aware switches. However, given the diverse nature of datacenter traffic, there is little consensus about how these designs would fare against each other. In this work, we analyze the throughput of existing topology designs under different traffic patterns and study their unique advantages and potential costs in terms of bandwidth and latency “tax”. To overcome the identified inefficiencies, we propose Cerberus, a unified, two-layer leaf-spine optical datacenter design with three topology types. Cerberus systematically matches different traffic patterns with their most suitable topology type: e.g., latency-sensitive flows are transmitted via a static topology, all-to-all traffic via a rotor topology, and elephant flows via a demand-aware topology. We show analytically and in simulations that Cerberus can improve throughput significantly compared to alternative approaches and operate datacenters at higher loads while being throughput-proportional.
Theory and Applications of Robust Optimization In this paper we survey the primary research, both theoretical and applied, in the area of robust optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
Constrained Kalman filtering for indoor localization of transport vehicles using floor-installed HF RFID transponders Localization of transport vehicles is an important issue for many intralogistics applications. The paper presents an inexpensive solution for indoor localization of vehicles. Global localization is realized by detection of RFID transponders, which are integrated in the floor. The paper presents a novel algorithm for fusing RFID readings with odometry using Constraint Kalman filtering. The paper presents experimental results with a Mecanum based omnidirectional vehicle on a NaviFloor® installation, which includes passive HF RFID transponders. The experiments show that the proposed Constraint Kalman filter provides a similar localization accuracy compared to a Particle filter but with much lower computational expense.
Constrained Multiobjective Optimization for IoT-Enabled Computation Offloading in Collaborative Edge and Cloud Computing Internet-of-Things (IoT) applications are becoming more resource-hungry and latency-sensitive, which are severely constrained by limited resources of current mobile hardware. Mobile cloud computing (MCC) can provide abundant computation resources, while mobile-edge computing (MEC) aims to reduce the transmission latency by offloading complex tasks from IoT devices to nearby edge servers. It is sti...
Supervisory control of fuzzy discrete event systems: a formal approach. Fuzzy discrete event systems (DESs) were proposed recently by Lin and Ying [19], which may better cope with the real-world problems of fuzziness, impreciseness, and subjectivity such as those in biomedicine. As a continuation of [19], in this paper, we further develop fuzzy DESs by dealing with supervisory control of fuzzy DESs. More specifically: 1) we reformulate the parallel composition of crisp DESs, and then define the parallel composition of fuzzy DESs that is equivalent to that in [19]; max-product and max-min automata for modeling fuzzy DESs are considered; 2) we deal with a number of fundamental problems regarding supervisory control of fuzzy DESs, particularly demonstrate controllability theorem and nonblocking controllability theorem of fuzzy DESs, and thus, present the conditions for the existence of supervisors in fuzzy DESs; 3) we analyze the complexity for presenting a uniform criterion to test the fuzzy controllability condition of fuzzy DESs modeled by max-product automata; in particular, we present in detail a general computing method for checking whether or not the fuzzy controllability condition holds, if max-min automata are used to model fuzzy DESs, and by means of this method we can search for all possible fuzzy states reachable from initial fuzzy state in max-min automata; also, we introduce the fuzzy n-controllability condition for some practical problems; and 4) a number of examples serving to illustrate the applications of the derived results and methods are described; some basic properties related to supervisory control of fuzzy DESs are investigated. To conclude, some related issues are raised for further consideration.
The industrial indoor channel: large-scale and temporal fading at 900, 2400, and 5200 MHz In this paper, large-scale fading and temporal fading characteristics of the industrial radio channel at 900, 2400, and 5200 MHz are determined. In contrast to measurements performed in houses and in office buildings, few attempts have been made until now to model propagation in industrial environments. In this paper, the industrial environment is categorized into different topographies. Industrial topographies are defined separately for large-scale and temporal fading, and their definition is based upon the specific physical characteristics of the local surroundings affecting both types of fading. Large-scale fading is well expressed by a one-slope path-loss model and excellent agreement with a lognormal distribution is obtained. Temporal fading is found to be Ricean and Ricean K-factors have been determined. Ricean K-factors are found to follow a lognormal distribution.
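The one-slope path-loss model mentioned above has the standard form PL(d) = PL(d0) + 10·n·log10(d/d0). A small sketch makes the distance dependence concrete; the reference loss and path-loss exponent below are illustrative placeholders, not the fitted values from the measurement campaign.

```python
import math

def one_slope_path_loss(d_m, pl_d0_db=40.0, n=2.2, d0_m=1.0):
    """One-slope path-loss model: PL(d) = PL(d0) + 10*n*log10(d/d0).

    pl_d0_db: path loss at reference distance d0 (dB); n: path-loss exponent.
    Both values here are illustrative, not the paper's fitted parameters.
    """
    return pl_d0_db + 10.0 * n * math.log10(d_m / d0_m)

# Doubling the distance adds 10*n*log10(2) ≈ 3*n dB of extra loss.
loss_10m = one_slope_path_loss(10.0)   # 40 + 22 = 62.0 dB
loss_20m = one_slope_path_loss(20.0)
```

Large-scale (lognormal) shadowing would be modeled as an additional zero-mean Gaussian term in dB on top of this deterministic mean.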
Placing Virtual Machines to Optimize Cloud Gaming Experience Optimizing cloud gaming experience is no easy task due to the complex tradeoff between gamer quality of experience (QoE) and provider net profit. We tackle the challenge and study an optimization problem to maximize the cloud gaming provider's total profit while achieving just-good-enough QoE. We conduct measurement studies to derive the QoE and performance models. We formulate and optimally solve the problem. The optimization problem has exponential running time, and we develop an efficient heuristic algorithm. We also present an alternative formulation and algorithms for closed cloud gaming services with dedicated infrastructures, where the profit is not a concern and overall gaming QoE needs to be maximized. We present a prototype system and testbed using off-the-shelf virtualization software, to demonstrate the practicality and efficiency of our algorithms. Our experience on realizing the testbed sheds some light on how cloud gaming providers may build up their own profitable services. Last, we conduct extensive trace-driven simulations to evaluate our proposed algorithms. The simulation results show that the proposed heuristic algorithms: (i) produce close-to-optimal solutions, (ii) scale to large cloud gaming services with 20,000 servers and 40,000 gamers, and (iii) outperform the state-of-the-art placement heuristic, e.g., by up to 3.5 times in terms of net profits.
Distributed Kalman consensus filter with event-triggered communication: Formulation and stability analysis. •The problem of distributed state estimation in sensor networks with event-triggered communication schedules on both sensor-to-estimator channel and estimator-to-estimator channel is studied.•An event-triggered KCF is designed by deriving the optimal Kalman gain matrix which minimizes the mean squared error.•A computational scalable form of the proposed filter is presented by some approximations.•An appropriate choice of the consensus gain matrix is provided to ensure the stochastic stability of the proposed filter.
Higher Order Tensor Decomposition For Proportional Myoelectric Control Based On Muscle Synergies Muscle synergies have recently been utilised in myoelectric control systems. Thus far, all proposed synergy-based systems rely on matrix factorisation methods. However, this is limited in terms of task-dimensionality. Here, the potential application of higher-order tensor decomposition as a framework for proportional myoelectric control is demonstrated. A novel constrained Tucker decomposition (consTD) technique of synergy extraction is proposed for the synergy-based myoelectric control model and compared with state-of-the-art matrix factorisation models. The extracted synergies were used to estimate control signals for the wrist's Degrees of Freedom (DoF) through direct projection. The consTD model was able to estimate the control signals for each DoF by utilising all data in one 3rd-order tensor. This is in contrast with matrix factorisation models, where data are segmented for each DoF and the synergies often have to be realigned. Moreover, the consTD method offers more information by providing additional shared synergies, unlike matrix factorisation methods. The extracted control signals were fed to a ridge regression to estimate the wrist's kinematics based on real glove data. The Coefficient of Determination (R²) for the reconstructed wrist position showed that the proposed consTD achieved higher values than matrix factorisation methods. In sum, this study provides the first proof of concept for the use of higher-order tensor decomposition in proportional myoelectric control and it highlights the potential of tensors to provide an objective and direct approach to identify synergies.
Development of a UAV-LiDAR System with Application to Forest Inventory We present the development of a low-cost Unmanned Aerial Vehicle-Light Detection and Ranging (UAV-LiDAR) system and an accompanying workflow to produce 3D point clouds. UAV systems provide an unrivalled combination of high temporal and spatial resolution datasets. The TerraLuma UAV-LiDAR system has been developed to take advantage of these properties and in doing so overcome some of the current limitations of the use of this technology within the forestry industry. A modified processing workflow including a novel trajectory determination algorithm fusing observations from a GPS receiver, an Inertial Measurement Unit (IMU) and a High Definition (HD) video camera is presented. The advantages of this workflow are demonstrated using a rigorous assessment of the spatial accuracy of the final point clouds. It is shown that due to the inclusion of video the horizontal accuracy of the final point cloud improves from 0.61 m to 0.34 m (RMS error assessed against ground control). The effect of the very high density point clouds (up to 62 points per m(2)) produced by the UAV-LiDAR system on the measurement of tree location, height and crown width are also assessed by performing repeat surveys over individual isolated trees. The standard deviation of tree height is shown to reduce from 0.26 m, when using data with a density of 8 points per m(2), to 0.15 m when the higher density data was used. Improvements in the uncertainty of the measurement of tree location, 0.80 m to 0.53 m, and crown width, 0.69 m to 0.61 m are also shown.
Chimp optimization algorithm. •A novel optimizer called Chimp Optimization Algorithm (ChOA) is proposed.•ChOA is inspired by individual intelligence and sexual motivation of chimps.•ChOA alleviates the problems of slow convergence rate and trapping in local optima.•The four main steps of Chimp hunting are implemented.
Three-Dimensional Path Planning for Uninhabited Combat Aerial Vehicle Based on Predator-Prey Pigeon-Inspired Optimization in Dynamic Environment Three-dimensional path planning of an uninhabited combat aerial vehicle (UCAV) is a complicated optimization problem, which mainly focuses on optimizing the flight route considering the different types of constraints under a complex combat environment. A novel predator-prey pigeon-inspired optimization (PPPIO) is proposed to solve the UCAV three-dimensional path planning problem in a dynamic environment. Pigeon-inspired optimization (PIO) is a new bio-inspired optimization algorithm. In this algorithm, a map and compass operator model and a landmark operator model are used to search for the best result of a function. The predator-prey concept is adopted to improve global best properties and enhance the convergence speed. The characteristics of the optimal path are presented in the form of a cost function. The comparative simulation results show that our proposed PPPIO algorithm is more efficient than the basic PIO, particle swarm optimization (PSO), and differential evolution (DE) in solving UCAV three-dimensional path planning problems.
Roadmap-Based Path Planning - Using the Voronoi Diagram for a Clearance-Based Shortest Path Path planning still remains one of the core problems in modern robotic applications, such as the design of autonomous vehicles and perceptive systems. The basic path-planning problem is concerned with finding a good-quality path from a source point to a destination point that does not result in collision with any obstacles. In this article, we chose the roadmap approach and utilized the Voronoi diagram to obtain a path that is a close approximation of the shortest path satisfying the required clearance value set by the user. The advantage of the proposed technique versus alternative path-planning methods is in its simplicity, versatility, and efficiency.
A parallel compact cuckoo search algorithm for three-dimensional path planning The three-dimensional (3D) path planning of unmanned robots focuses on avoiding collisions with obstacles and finding an optimized path to the target location in a complex three-dimensional environment. An improved cuckoo search algorithm based on compact and parallel techniques for three-dimensional path planning problems is proposed. This paper implements the compact cuckoo search algorithm, and then, a new parallel communication strategy is proposed. The compact scheme can effectively save the memory of the unmanned robot. The parallel scheme can increase the accuracy and achieve faster convergence. The proposed algorithm is tested on several selected functions and three-dimensional path planning. Results compared with other methods show that the proposed algorithm can provide more competitive results and achieve more efficient execution.
Completely derandomized self-adaptation in evolution strategies. This paper puts forward two useful methods for self-adaptation of the mutation distribution - the concepts of derandomization and cumulation. Principle shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary, normal mutation distributions is equivalent to applying a general, linear problem encoding. The underlying objective of mutative strategy parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigorously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation - utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions usually a speed up factor of several orders of magnitude is observed. On moderately mis-scaled functions a speed up factor of three to ten can be expected.
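A full CMA-ES with cumulation is too long to sketch here, but the mutative step-size control it improves upon can be illustrated with the classic (1+1)-ES and the 1/5th-success rule. This toy sketch adapts only an isotropic global step size, not a covariance matrix; the objective and constants are illustrative, not the paper's experimental setup.

```python
import random

def one_plus_one_es(f, x, sigma=1.0, iters=500, seed=0):
    """Minimal (1+1)-ES with 1/5th-success-rule step-size control.

    A deliberately simplified, isotropic ancestor of CMA-ES: it adapts a
    single global step size sigma rather than a full mutation covariance.
    """
    rng = random.Random(seed)
    fx = f(x)
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:                 # success: accept and enlarge the step
            x, fx = y, fy
            sigma *= 1.5
        else:                        # failure: shrink the step
            sigma *= 1.5 ** -0.25    # expand/shrink ratio targets ~1/5 success

    return x, fx

sphere = lambda v: sum(c * c for c in v)
best, best_f = one_plus_one_es(sphere, [5.0, -3.0, 2.0])
```

On badly scaled, non-separable problems this isotropic scheme stalls, which is exactly the gap that covariance matrix adaptation closes.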
Hiding Traces of Resampling in Digital Images Resampling detection has become a standard tool for forensic analyses of digital images. This paper presents new variants of image transformation operations which are undetectable by resampling detectors based on periodic variations in the residual signal of local linear predictors in the spatial domain. The effectiveness of the proposed method is supported with evidence from experiments on a large image database for various parameter settings. We benchmark detectability as well as the resulting image quality against conventional linear and bicubic interpolation and interpolation with a sinc kernel. These early findings on “counter-forensic” techniques put into question the reliability of known forensic tools against smart counterfeiters in general, and might serve as benchmarks and motivation for the development of much improved forensic techniques.
Fog computing and its role in the internet of things Fog Computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) Low latency and location awareness; b) Wide-spread geographical distribution; c) Mobility; d) Very large number of nodes, e) Predominant role of wireless access, f) Strong presence of streaming and real time applications, g) Heterogeneity. In this paper we argue that the above characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely, Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs).
GameFlow: a model for evaluating player enjoyment in games Although player enjoyment is central to computer games, there is currently no accepted model of player enjoyment in games. There are many heuristics in the literature, based on elements such as the game interface, mechanics, gameplay, and narrative. However, there is a need to integrate these heuristics into a validated model that can be used to design, evaluate, and understand enjoyment in games. We have drawn together the various heuristics into a concise model of enjoyment in games that is structured by flow. Flow, a widely accepted model of enjoyment, includes eight elements that, we found, encompass the various heuristics from the literature. Our new model, GameFlow, consists of eight elements -- concentration, challenge, skills, control, clear goals, feedback, immersion, and social interaction. Each element includes a set of criteria for achieving enjoyment in games. An initial investigation and validation of the GameFlow model was carried out by conducting expert reviews of two real-time strategy games, one high-rating and one low-rating, using the GameFlow criteria. The result was a deeper understanding of enjoyment in real-time strategy games and the identification of the strengths and weaknesses of the GameFlow model as an evaluation tool. The GameFlow criteria were able to successfully distinguish between the high-rated and low-rated games and identify why one succeeded and the other failed. We concluded that the GameFlow model can be used in its current form to review games; further work will provide tools for designing and evaluating enjoyment in games.
Communication in reactive multiagent robotic systems Multiple cooperating robots are able to complete many tasks more quickly and reliably than one robot alone. Communication between the robots can multiply their capabilities and effectiveness, but to what extent? In this research, the importance of communication in robotic societies is investigated through experiments on both simulated and real robots. Performance was measured for three different types of communication for three different tasks. The levels of communication are progressively more complex and potentially more expensive to implement. For some tasks, communication can significantly improve performance, but for others inter-agent communication is apparently unnecessary. In cases where communication helps, the lowest level of communication is almost as effective as the more complex type. The bulk of these results are derived from thousands of simulations run with randomly generated initial conditions. The simulation results help determine appropriate parameters for the reactive control system which was ported for tests on Denning mobile robots.
Lower Extremity Exoskeletons and Active Orthoses: Challenges and State-of-the-Art In the nearly six decades since researchers began to explore methods of creating them, exoskeletons have progressed from the stuff of science fiction to nearly commercialized products. While there are still many challenges associated with exoskeleton development that have yet to be perfected, the advances in the field have been enormous. In this paper, we review the history and discuss the state-of-the-art of lower limb exoskeletons and active orthoses. We provide a design overview of hardware, actuation, sensory, and control systems for most of the devices that have been described in the literature, and end with a discussion of the major advances that have been made and hurdles yet to be overcome.
A Model Predictive Control Approach to Microgrid Operation Optimization. Microgrids are subsystems of the distribution grid, which comprise generation capacities, storage devices, and controllable loads, operating as a single controllable system either connected to or isolated from the utility grid. In this paper, we present a study on applying a model predictive control approach to the problem of efficiently optimizing microgrid operations while satisfying a time-varying request and operation constraints. The overall problem is formulated using mixed-integer linear programming (MILP), which can be solved in an efficient way by using commercial solvers without resorting to complex heuristics or decomposition techniques. Then, the MILP formulation leads to significant improvements in solution quality and computational burden. A case study of a microgrid is employed to assess the performance of the online optimization-based control strategy and the simulation results are discussed. The method is applied to an experimental microgrid located in Athens, Greece. The experimental results show the feasibility and the effectiveness of the proposed approach.
Quaternion polar harmonic Fourier moments for color images. •Quaternion polar harmonic Fourier moments (QPHFM) are proposed.•Complex Chebyshev-Fourier moments (CHFM) are extended to quaternion QCHFM.•Comparison experiments between QPHFM and QZM, QPZM, QOFMM, QCHFM and QRHFM are conducted.•QPHFM performs superbly in image reconstruction and invariant object recognition.•The importance of phase information of QPHFM in image reconstruction is discussed.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Neural Combinatorial Deep Reinforcement Learning for Age-optimal Joint Trajectory and Scheduling Design in UAV-assisted Networks In this article, an unmanned aerial vehicle (UAV)-assisted wireless network is considered in which a battery-constrained UAV is assumed to move towards energy-constrained ground nodes to receive status updates about their observed processes. The UAV’s flight trajectory and scheduling of status updates are jointly optimized with the objective of minimizing the normalized weighted sum of Age of Info...
Energy- and Spectral-Efficiency Tradeoff for Distributed Antenna Systems with Proportional Fairness Energy efficiency (EE) has attracted more and more attention in future wireless communications due to steadily rising energy costs and environmental concerns. In this paper, we propose an EE scheme with proportional fairness for the downlink multiuser distributed antenna systems (DAS). Our aim is to maximize EE, subject to constraints on overall transmit power of each remote access unit (RAU), bit-error rate (BER), and proportional data rates. We exploit multi-criteria optimization method to systematically investigate the relationship between EE and spectral efficiency (SE). Using the weighted sum method, we first convert the multi-criteria optimization problem, which is extremely complex, into a simpler single objective optimization problem. Then an optimal algorithm is developed to allocate the available power to balance the tradeoff between EE and SE. We also demonstrate the effectiveness of the proposed scheme and illustrate the fundamental tradeoff between energy- and spectral-efficient transmission through computer simulation.
Efficient multi-task allocation and path planning for unmanned surface vehicle in support of ocean operations. Presently, there is an increasing interest in the deployment of unmanned surface vehicles (USVs) to support complex ocean operations. In order to carry out these missions in a more efficient way, an intelligent hybrid multi-task allocation and path planning algorithm is required and has been proposed in this paper. In terms of the multi-task allocation, a novel algorithm based upon a self-organising map (SOM) has been designed and developed. The main contribution is that an adaptive artificial repulsive force field has been constructed and integrated into the SOM to achieve collision avoidance capability. The new algorithm is able to quickly and effectively generate a sequence for executing multiple tasks in a cluttered maritime environment involving numerous obstacles. After generating an optimised task execution sequence, a path planning algorithm based upon fast marching square (FMS) is utilised to calculate the trajectories. Because of the introduction of a safety parameter, the FMS is able to adaptively adjust the dimensional influence of an obstacle and accordingly generate the paths to ensure the safety of the USV. The algorithms have been verified and evaluated through a number of computer based simulations and have been proven to work effectively in both simulated and practical maritime environments.
Cooperative Internet of UAVs: Distributed Trajectory Design by Multi-Agent Deep Reinforcement Learning Due to the advantages of flexible deployment and extensive coverage, unmanned aerial vehicles (UAVs) have significant potential for sensing applications in the next generation of cellular networks, which will give rise to a cellular Internet of UAVs. In this article, we consider a cellular Internet of UAVs, where the UAVs execute sensing tasks through cooperative sensing and transmission to minimize the age of information (AoI). However, the cooperative sensing and transmission is tightly coupled with the UAVs' trajectories, which makes the trajectory design challenging. To tackle this challenge, we propose a distributed sense-and-send protocol, where the UAVs determine the trajectories by selecting from a discrete set of tasks and a continuous set of locations for sensing and transmission. Based on this protocol, we formulate the trajectory design problem for AoI minimization and propose a compound-action actor-critic (CA2C) algorithm to solve it based on deep reinforcement learning. The CA2C algorithm can learn the optimal policies for actions involving both continuous and discrete variables and is suited for the trajectory design. Our simulation results show that the CA2C algorithm outperforms four baseline algorithms. Also, we show that by dividing the tasks, cooperative UAVs can achieve a lower AoI compared to non-cooperative UAVs.
Prediction-Based Delay Optimization Data Collection Algorithm for Underwater Acoustic Sensor Networks The past years have seen a rapid development of autonomous underwater vehicle-aided (AUV-aided) data-gathering schemes in underwater acoustic sensor networks (UASNs). The use of AUVs efficiently reduces energy consumption of sensor nodes. However, all AUV-aided solutions face severe problems in data collection delay, especially in a large-scale network. In this paper, to reduce data collection delay, we propose a prediction-based delay optimization data collection algorithm (PDO-DC). In contrast to traditional delay optimization algorithms, Kernel Ridge Regression (KRR) is utilized via cluster member nodes to obtain the corresponding prediction models. Then, the AUV can obtain all cluster data by traversing fewer cluster head nodes, which can effectively reduce the collection delay of the AUV. The experimental results demonstrate that the proposed method is both feasible and effective.
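The Kernel Ridge Regression used for the per-cluster prediction models has a standard closed-form solution, alpha = (K + λI)⁻¹ y. The following is a hedged sketch of that textbook form with an RBF kernel and illustrative hyperparameters and data, not the paper's sensor measurements.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # K[i, j] = exp(-gamma * ||A_i - B_j||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=1e-3, gamma=0.5):
    # Closed-form dual weights: alpha = (K + lam*I)^-1 y
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_new, gamma=0.5):
    # Prediction is a kernel-weighted sum over the training points.
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Toy 1-D regression standing in for a cluster node's time series.
X = np.linspace(0.0, 3.0, 30)[:, None]
y = np.sin(X[:, 0])
alpha = krr_fit(X, y)
pred = krr_predict(X, alpha, X)
```

The regularizer lam trades training fit against smoothness; in a deployment it would be chosen by cross-validation on the cluster data.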
Bus network assisted drone scheduling for sustainable charging of wireless rechargeable sensor network Wireless Rechargeable Sensor Network (WRSN) is largely used in monitoring of environment and traffic, video surveillance and medical care, etc., and helps to improve the quality of urban life. However, it is challenging to provide sustainable energy for sensors deployed in buildings, soil or other places where it is hard to harvest energy from the environment. To address this issue, we design a new wireless charging system, which leverages bus-network-assisted drones in urban areas. We formulate the drone scheduling problem based on this new wireless charging system to minimize the total time cost of the drone subject to the constraint that all sensors can be charged within the drone's energy budget. Then, we propose an approximation algorithm DSA for the energy-constrained drone scheduling problem. To make the tasks of WRSN sustainable, we further formulate the drone scheduling problem with deadlines of sensors, and present the approximation algorithm DDSA to find the drone schedule with the maximal number of sensors charged by the drone before deadlines. Through the extensive simulations, we demonstrate that DSA can reduce the total time cost by 84.83% compared with the Greedy Replenished Energy algorithm, and uses at most 5.98 times the total time cost of the optimal solution on average. Then, we also demonstrate that DDSA can increase the survival rate of sensors by 51.95% compared with the Deadline Greedy Replenished Energy algorithm, and can obtain 77.54% of the survival rate of the optimal solution on average.
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that cannot be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
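The metric combines clipped (modified) n-gram precisions with a brevity penalty. A minimal single-reference sketch, without the smoothing typically used in practice, is:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Single-reference BLEU sketch: geometric mean of clipped n-gram
    precisions times a brevity penalty (no smoothing)."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty punishes candidates shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "the cat is on the mat".split()
score = bleu(ref, ref)   # identical candidate and reference score 1.0
```

Production implementations add multiple references, smoothing for short segments, and corpus-level aggregation of the n-gram statistics.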
Sequence to Sequence Learning with Neural Networks. Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.
Toward Integrating Vehicular Clouds with IoT for Smart City Services Vehicular ad hoc networks, cloud computing, and the Internet of Things are among the emerging technology enablers offering a wide array of new application possibilities in smart urban spaces. These applications consist of smart building automation systems, healthcare monitoring systems, and intelligent and connected transportation, among others. The integration of IoT-based vehicular technologies will enrich services that are eventually going to ignite the proliferation of exciting and even more advanced technological marvels. However, depending on different requirements and design models for networking and architecture, such integration needs the development of newer communication architectures and frameworks. This work proposes a novel framework for architectural and communication design to effectively integrate vehicular networking clouds with IoT, referred to as VCoT, to materialize new applications that provision various IoT services through vehicular clouds. In this article, we particularly put emphasis on smart city applications deployed, operated, and controlled through LoRaWAN-based vehicular networks. LoRaWAN, being a new technology, provides efficient and long-range communication possibilities. The article also discusses possible research issues in such an integration including data aggregation, security, privacy, data quality, and network coverage. These issues must be addressed in order to realize the VCoT paradigm deployment, and to provide insights for investors and key stakeholders in VCoT service provisioning. The article presents deep insights for different real-world application scenarios (i.e., smart homes, intelligent traffic light, and smart city) using VCoT for general control and automation along with their associated challenges. It also presents initial insights, through preliminary results, regarding data and resource management in IoT-based resource constrained environments through vehicular clouds.
Distributed multirobot localization In this paper, we present a new approach to the problem of simultaneously localizing a group of mobile robots capable of sensing one another. Each of the robots collects sensor data regarding its own motion and shares this information with the rest of the team during the update cycles. A single estimator, in the form of a Kalman filter, processes the available positioning information from all the members of the team and produces a pose estimate for every one of them. The equations for this centralized estimator can be written in a decentralized form, therefore allowing this single Kalman filter to be decomposed into a number of smaller communicating filters. Each of these filters processes the sensor data collected by its host robot. Exchange of information between the individual filters is necessary only when two robots detect each other and measure their relative pose. The resulting decentralized estimation schema, which we call collective localization, constitutes a unique means for fusing measurements collected from a variety of sensors with minimal communication and processing requirements. The distributed localization algorithm is applied to a group of three robots and the improvement in localization accuracy is presented. Finally, a comparison to the equivalent decentralized information filter is provided.
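The building block behind the collective localization estimator is the Kalman predict/update cycle. A scalar, single-robot sketch of that cycle follows; the noise variances are illustrative, and this is the plain centralized form, not the paper's decentralized multi-robot decomposition.

```python
def kalman_step(x, P, u, z, Q=0.1, R=0.5):
    """One predict/update cycle of a scalar Kalman filter.

    x, P: prior state estimate and variance; u: odometry motion input;
    z: position measurement. Q, R are illustrative noise variances.
    """
    # Predict: propagate the estimate through the motion model x' = x + u.
    x_pred = x + u
    P_pred = P + Q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# A robot commanded to move 1.0 per step, with noisy position fixes.
x, P = 0.0, 1.0
for u, z in [(1.0, 1.2), (1.0, 2.1), (1.0, 2.9)]:
    x, P = kalman_step(x, P, u, z)
```

In the multi-robot setting, the cross-covariances between robots are what allow a relative-pose detection between two robots to improve everyone's estimate.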
Teaching-Learning-Based Optimization: An optimization method for continuous non-linear large scale problems An efficient optimization method called 'Teaching-Learning-Based Optimization (TLBO)' is proposed in this paper for finding global solutions to large-scale non-linear optimization problems. The proposed method is based on the effect of the influence of a teacher on the output of learners in a class. The basic philosophy of the method is explained in detail. The effectiveness of the method is tested on many benchmark problems with different characteristics and the results are compared with those of other population-based methods.
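The teacher and learner phases can be sketched in a few lines. This is a minimal illustration on the sphere function; the population size, teaching factor, and greedy acceptance follow common TLBO practice and are not necessarily the paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.sum(x ** 2))

def tlbo_step(pop, f):
    """One TLBO iteration: the teacher phase moves learners toward the best
    solution relative to the population mean; the learner phase lets pairs
    of learners learn from each other.  Greedy acceptance keeps a move only
    if it improves the learner."""
    n, d = pop.shape
    # --- teacher phase ---
    teacher = pop[np.argmin([f(p) for p in pop])].copy()
    mean = pop.mean(axis=0)
    Tf = rng.integers(1, 3)                      # teaching factor in {1, 2}
    for i in range(n):
        cand = pop[i] + rng.random(d) * (teacher - Tf * mean)
        if f(cand) < f(pop[i]):
            pop[i] = cand
    # --- learner phase ---
    for i in range(n):
        j = rng.integers(n)
        if j == i:
            continue
        diff = pop[i] - pop[j] if f(pop[i]) < f(pop[j]) else pop[j] - pop[i]
        cand = pop[i] + rng.random(d) * diff
        if f(cand) < f(pop[i]):
            pop[i] = cand
    return pop

pop = rng.uniform(-5, 5, size=(20, 3))
before = min(sphere(p) for p in pop)
for _ in range(30):
    pop = tlbo_step(pop, sphere)
after = min(sphere(p) for p in pop)
```

Because acceptance is greedy, the best objective value never worsens across iterations, which matches TLBO's parameter-light design: no crossover or mutation rates need tuning.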
Collaborative Mobile Charging The limited battery capacity of sensor nodes has become one of the most critical impediments that stunt the deployment of wireless sensor networks (WSNs). Recent breakthroughs in wireless energy transfer and rechargeable lithium batteries provide a promising alternative to power WSNs: mobile vehicles/robots carrying high volume batteries serve as mobile chargers to periodically deliver energy to sensor nodes. In this paper, we consider how to schedule multiple mobile chargers to optimize energy usage effectiveness, such that every sensor will not run out of energy. We introduce a novel charging paradigm, collaborative mobile charging, where mobile chargers are allowed to intentionally transfer energy between themselves. To provide some intuitive insights into the problem structure, we first consider a scenario that satisfies three conditions, and propose a scheduling algorithm, PushWait, which is proven to be optimal and can cover a one-dimensional WSN of infinite length. Then, we remove the conditions one by one, investigating chargers' scheduling in a series of scenarios ranging from the most restricted one to a general 2D WSN. Through theoretical analysis and simulations, we demonstrate the advantages of the proposed algorithms in energy usage effectiveness and charging coverage.
OSMnx: New Methods for Acquiring, Constructing, Analyzing, and Visualizing Complex Street Networks. Urban scholars have studied street networks in various ways, but there are data availability and consistency limitations to the current urban planning/street network analysis literature. To address these challenges, this article presents OSMnx, a new tool to make the collection of data and creation and analysis of street networks simple, consistent, automatable and sound from the perspectives of graph theory, transportation, and urban design. OSMnx contributes five significant capabilities for researchers and practitioners: first, the automated downloading of political boundaries and building footprints; second, the tailored and automated downloading and constructing of street network data from OpenStreetMap; third, the algorithmic correction of network topology; fourth, the ability to save street networks to disk as shapefiles, GraphML, or SVG files; and fifth, the ability to analyze street networks, including calculating routes, projecting and visualizing networks, and calculating metric and topological measures. These measures include those common in urban design and transportation studies, as well as advanced measures of the structure and topology of the network. Finally, this article presents a simple case study using OSMnx to construct and analyze street networks in Portland, Oregon.
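One of the listed capabilities, calculating routes, amounts to a shortest-path search over a weighted street graph. A toy stand-in (pure-Python Dijkstra over hypothetical intersections and street lengths; this is not the OSMnx API, just the kind of computation it automates):

```python
import heapq

def shortest_route(graph, src, dst):
    """Dijkstra's algorithm on a weighted adjacency dict:
    graph[u][v] = edge length from u to v."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # reconstruct the path by walking predecessors back from dst
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# hypothetical intersections with street lengths in meters
streets = {
    "A": {"B": 120.0, "C": 300.0},
    "B": {"C": 100.0, "D": 250.0},
    "C": {"D": 90.0},
    "D": {},
}
route, length = shortest_route(streets, "A", "D")  # A-B-C-D, 310 m
```

On a real OSMnx graph the edge weights would be street-segment lengths drawn from OpenStreetMap rather than hand-entered values.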
Energy harvesting algorithm considering max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with low residual energy degrade the achievable data rate, or max flow; such nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we consider an energy-harvesting WSN environment and investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy-harvesting WSN environments, sensor nodes can replenish their energy either via MCs or by harvesting energy from nature themselves. In our research, we use MCs to replenish the energy of the sensor nodes through multiple rounds of unified scheduling, with the ultimate goal of increasing the max flow at the sinks. First, we model the problem of finding the max flow within one round of charging scheduling as a Linear Program (LP) and prove that the problem is NP-hard. To solve it, we propose a heuristic approach: deploying MCs in units of paths, giving priority to the lowest-energy nodes. To reduce the energy consumption of the MCs and increase charging efficiency, we also optimize the MCs' moving distance. Finally, we extend the method to multiple rounds of scheduling, called BottleNeck. Simulation results show that BottleNeck performs well at increasing the max flow.
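The max flow that such scheduling tries to increase is the classical network quantity; a minimal Edmonds-Karp computation on a toy network (illustrative only, not the paper's LP formulation) looks like:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on a capacity dict-of-dicts:
    cap[u][v] = capacity of edge u -> v."""
    flow = 0
    # residual capacities, including zero-capacity reverse edges
    res = {u: dict(vs) for u, vs in cap.items()}
    for u, vs in cap.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)
    while True:
        # BFS for a shortest augmenting path
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                     # no augmenting path left
        # find the bottleneck capacity along the path
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

# toy sensor network: source s, relay nodes a and b, sink t
caps = {"s": {"a": 3, "b": 2}, "a": {"t": 2, "b": 1}, "b": {"t": 3}, "t": {}}
f = max_flow(caps, "s", "t")
```

Recharging a bottleneck node corresponds to raising the capacity of its edges, which can only increase the value this routine returns.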
Scores: 1.1, 0.1, 0.1, 0.1, 0.1, 0.05, 0, 0, 0, 0, 0, 0, 0, 0
Latency-Aware Path Planning for Disconnected Sensor Networks With Mobile Sinks Data collection with mobile elements can greatly improve the load-balance degree and accordingly prolong the longevity of wireless sensor networks (WSNs). In this pattern, a mobile sink generally traverses the sensing field periodically and collects data from multiple Anchor Points (APs) which constitute a traveling tour. However, long-distance traveling easily causes large data-delivery latency. In this paper, we propose a path-planning strategy for mobile data collection, called Dual Approximation of Anchor Points (DAAP), which aims to achieve full connectivity for partitioned WSNs and construct a shorter path. DAAP is novel in two aspects. On the one hand, it is especially designed for disconnected WSNs where sensor nodes are scattered in multiple isolated segments. On the other hand, it has the lowest computational complexity compared with other existing works. DAAP is formulated as a location-approximation problem and then solved by a greedy location-selection mechanism, which follows two corresponding principles. On the one hand, the APs of periphery segments must be as near the network center as possible. On the other hand, the APs of other isolated segments must be as close to the current path as possible. Finally, experimental results confirm that DAAP outperforms existing works in delay-sensitive applications.
An Energy-Balanced Heuristic for Mobile Sink Scheduling in Hybrid WSNs. Wireless sensor networks (WSNs) are integrated as a pillar of collaborative Internet of Things (IoT) technologies for the creation of pervasive smart environments. Generally, IoT end nodes (or WSN sensors) can be mobile or static. In this kind of hybrid WSNs, mobile sinks move to predetermined sink locations to gather data sensed by static sensors. Scheduling mobile sinks energy-efficiently while ...
On Theoretical Modeling of Sensor Cloud: A Paradigm Shift From Wireless Sensor Network. This paper focuses on the theoretical modeling of sensor cloud, which is one of the first attempts in this direction. We endeavor to theoretically characterize virtualization, which is a fundamental mechanism for operations within the sensor-cloud architecture. Existing related research works on sensor cloud have primarily focused on the ideology and the challenges that wireless sensor network (WS...
Artificial Intelligence-Driven Mechanism for Edge Computing-Based Industrial Applications Due to challenging issues such as computational complexity and high delay in cloud computing, edge computing has overtaken the conventional approach by efficiently and fairly allocating resources, i.e., power and battery lifetime, in Internet of Things (IoT)-based industrial applications. Meanwhile, intelligent and accurate resource management by artificial intelligence (AI) has become the center of attention, especially in industrial applications. Coordinating AI at the edge will remarkably enhance the range and computational speed of IoT-based devices in industries. However, the challenge for these power-hungry, short-battery-lifetime, and delay-intolerant portable devices is the inappropriate and inefficient classical approach to fair resource allotment. Extensive industrial datasets also show that typical power-saving and battery-lifetime techniques, for example, predictive transmission power control (TPC) and baseline schemes, cannot cope with dynamic wireless channels. Thus, this paper proposes 1) a forward central dynamic and available approach (FCDAA) that adapts the running time of sensing and transmission processes in IoT-based portable devices; 2) a system-level battery model that evaluates the energy dissipation in IoT devices; and 3) a data-reliability model for edge AI-based IoT devices over a hybrid TPC and duty-cycle network. Two important cases, static (i.e., product processing) and dynamic (i.e., vibration and fault diagnosis), are introduced for proper monitoring of the industrial platform. An experimental testbed reveals that the proposed FCDAA enhances energy efficiency and battery lifetime at acceptable reliability (∼0.95) by appropriately tuning the duty cycle and TPC, unlike conventional methods.
Towards Big data processing in IoT: Path Planning and Resource Management of UAV Base Stations in Mobile-Edge Computing System. Heavy data load and wide cover range have always been crucial problems for big data processing in Internet of Things (IoT). Recently, mobile-edge computing (MEC) and unmanned aerial vehicle base stations (UAV-BSs) have emerged as promising techniques in IoT. In this article, we propose a three-layer online data processing network based on the MEC technique. On the bottom layer, raw data are generated by distributed sensors with local information. Upon them, UAV-BSs are deployed as moving MEC servers, which collect data and conduct initial steps of data processing. On top of them, a center cloud receives processed results and conducts further evaluation. For online processing requirements, the edge nodes should stabilize delay to ensure data freshness. Furthermore, limited onboard energy poses constraints to edge processing capability. In this article, we propose an online edge processing scheduling algorithm based on Lyapunov optimization. In cases of low data rate, it tends to reduce edge processor frequency for saving energy. In the presence of a high data rate, it will smartly allocate bandwidth for edge data offloading. Meanwhile, hovering UAV-BSs bring a large and flexible service coverage, which results in a path planning issue. In this article, we also consider this problem and apply deep reinforcement learning to develop an online path planning algorithm. Taking observations of around environment as an input, a CNN network is trained to predict action rewards. By simulations, we validate its effectiveness in enhancing service coverage. The result will contribute to big data processing in future IoT.
Survey of Fog Computing: Fundamental, Network Applications, and Research Challenges. Fog computing is an emerging paradigm that extends computation, communication, and storage facilities toward the edge of a network. Compared to traditional cloud computing, fog computing can support delay-sensitive service requests from end-users (EUs) with reduced energy consumption and low traffic congestion. Basically, fog networks are viewed as offloading to core computation and storage. Fog n...
Completely derandomized self-adaptation in evolution strategies. This paper puts forward two useful methods for self-adaptation of the mutation distribution: the concepts of derandomization and cumulation. Principal shortcomings of the concept of mutative strategy-parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary normal mutation distributions is equivalent to applying a general, linear problem encoding. The underlying objective of mutative strategy-parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigorously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation: utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions, a speed-up factor of several orders of magnitude is usually observed. On moderately mis-scaled functions a speed-up factor of three to ten can be expected.
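A much simpler relative of the mutative strategy-parameter control discussed above is the (1+1)-ES with the classical 1/5-success step-size rule. The sketch below illustrates adaptive step-size control only; it is not CMA itself, and the constants are common textbook choices rather than anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def one_plus_one_es(f, x0, sigma=1.0, iters=200):
    """(1+1)-ES with a 1/5-success step-size rule: enlarge the mutation
    step after a successful offspring, shrink it after a failure, so that
    roughly one in five mutations succeeds."""
    x = np.array(x0, dtype=float)
    fx = f(x)
    n = len(x)
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(n)   # Gaussian mutation
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
            sigma *= 1.5 ** (1.0 / n)            # success: enlarge step
        else:
            sigma *= 1.5 ** (-0.25 / n)          # failure: shrink step
    return x, fx

x, fx = one_plus_one_es(lambda v: float(np.sum(np.asarray(v) ** 2)), [3.0, -2.0])
```

CMA generalizes this idea from a single scalar step size to a full covariance matrix adapted along an evolution path, which is what yields the large speed-ups on badly scaled, non-separable functions.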
An online mechanism for multi-unit demand and its application to plug-in hybrid electric vehicle charging We develop an online mechanism for the allocation of an expiring resource to a dynamic agent population. Each agent has a non-increasing marginal valuation function for the resource, and an upper limit on the number of units that can be allocated in any period. We propose two versions of a truthful allocation mechanism. Each modifies the decisions of a greedy online assignment algorithm by sometimes cancelling an allocation of resources. One version makes this modification immediately upon an allocation decision, while a second waits until the point at which an agent departs the market. Adopting a prior-free framework, we show that the second approach has better worst-case allocative efficiency and is more scalable. On the other hand, the first approach (with immediate cancellation) may be easier in practice because it does not need to reclaim units previously allocated. We consider an application to recharging plug-in hybrid electric vehicles (PHEVs). Using data from a real-world trial of PHEVs in the UK, we demonstrate higher system performance than a fixed-price system, performance comparable with a standard, but non-truthful, scheduling heuristic, and the ability to support 50% more vehicles at the same fuel cost as a simple randomized policy.
Blockchain Meets IoT: An Architecture for Scalable Access Management in IoT. The Internet of Things (IoT) is stepping out of its infancy into full maturity and establishing itself as a part of the future Internet. One of the technical challenges of having billions of devices deployed worldwide is the ability to manage them. Although access management technologies exist in IoT, they are based on centralized models which introduce a new variety of technical limitations to ma...
Multi-column Deep Neural Networks for Image Classification Traditional methods of computer vision and machine learning cannot match human performance on tasks such as the recognition of handwritten digits or traffic signs. Our biologically plausible deep artificial neural network architectures can. Small (often minimal) receptive fields of convolutional winner-take-all neurons yield large network depth, resulting in roughly as many sparsely connected neural layers as found in mammals between retina and visual cortex. Only winner neurons are trained. Several deep neural columns become experts on inputs preprocessed in different ways; their predictions are averaged. Graphics cards allow for fast training. On the very competitive MNIST handwriting benchmark, our method is the first to achieve near-human performance. On a traffic sign recognition benchmark it outperforms humans by a factor of two. We also improve the state-of-the-art on a plethora of common image classification benchmarks.
A novel full structure optimization algorithm for radial basis probabilistic neural networks. In this paper, a novel full structure optimization algorithm for radial basis probabilistic neural networks (RBPNN) is proposed. Firstly, a minimum volume covering hyperspheres (MVCH) algorithm is proposed to heuristically select the initial hidden layer centers of the RBPNN, and then the recursive orthogonal least square (ROLS) algorithm combined with the particle swarm optimization (PSO) algorithm is adopted to further optimize the initial structure of the RBPNN. Finally, the effectiveness and efficiency of our proposed algorithm are evaluated through a plant species identification task involving 50 plant species.
Segmentation-Based Image Copy-Move Forgery Detection Scheme In this paper, we propose a scheme to detect the copy-move forgery in an image, mainly by extracting the keypoints for comparison. The main difference to the traditional methods is that the proposed scheme first segments the test image into semantically independent patches prior to keypoint extraction. As a result, the copy-move regions can be detected by matching between these patches. The matching process consists of two stages. In the first stage, we find the suspicious pairs of patches that may contain copy-move forgery regions, and we roughly estimate an affine transform matrix. In the second stage, an Expectation-Maximization-based algorithm is designed to refine the estimated matrix and to confirm the existence of copy-move forgery. Experimental results prove the good performance of the proposed scheme via comparing it with the state-of-the-art schemes on the public databases.
Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network. In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (data from 63 patients with 34,281 events) and testing (data from 19 patients with 8,571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F-score of 0.99 were attained on the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected of suffering from OSA.
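The 1D convolution, ReLU, and max-pooling operations named above can be sketched in a few lines. This is a toy single-stage NumPy illustration with made-up filter weights and a made-up signal, not the paper's six-layer model:

```python
import numpy as np

def conv1d(x, w, b):
    """Valid 1D convolution followed by a ReLU activation."""
    k = len(w)
    out = np.array([np.dot(x[i:i + k], w) + b for i in range(len(x) - k + 1)])
    return np.maximum(out, 0.0)            # ReLU: clamp negatives to zero

def max_pool1d(x, size=2):
    """Non-overlapping max pooling with the given window size."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

# toy ECG-like signal through one convolution/activation/pooling stage;
# the filter [1, -1] responds to local downward steps in the signal
signal = np.array([0.0, 1.0, 0.0, -1.0, 0.0, 2.0, 0.0, -2.0])
feat = max_pool1d(conv1d(signal, w=np.array([1.0, -1.0]), b=0.0))
```

Stacking several such stages, with learned filters and dropout between them, gives the kind of feature hierarchy the paper's CNN builds from raw single-lead ECG segments.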
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies have focused on the structure design and assistance-force optimization of soft LLEs, little work has been conducted on the hardware circuits design. The main purpose of this work is to present a new soft LLE for walking-efficiency improvement and introduce its hardware circuits design. A soft LLE for hip-flexion assistance and a hardware circuits system with scalability were proposed. To assess the efficacy of the soft LLE, experimental tests were conducted to evaluate sensor data acquisition, force-tracking performance, lower-limb muscle activity, and metabolic cost. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0
Adding Social Comparison to Open Learner Modeling Achieving multiplication-table fluency is a major concern throughout the primary school years, and for many pupils it is considered a challenge. 'Rote memory' approaches that have been deployed for years in the traditional school curriculum have contributed to the prevailing assumption among pupils that mathematics is unpleasant and uninviting. This paper presents a game-based approach to assess and gradually improve multiplication skills by combining an adaptive mechanism for identifying and resolving pupil weaknesses with exposing parts of the learner model to the user through easily perceivable visualizations. Moreover, the game offers social-comparison information: pupils can access progress data about their peers or their entire class, a feature that is expected to improve self-reflection, allow for self-regulated learning, and increase user motivation. This paper also presents the feedback received from preliminary testing of the game with a representative sample of pupils and discusses the effect of allowing access to the progress of peers and summative class scores.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
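The core of the metric, clipped n-gram precision combined with a brevity penalty, can be sketched as follows. This toy version uses a single reference and uniform weights up to bigrams; the standard metric goes up to 4-grams and supports multiple references:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Toy BLEU: geometric mean of clipped n-gram precisions times a
    brevity penalty (single reference, uniform weights)."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        # clip each candidate n-gram count by its count in the reference
        clipped = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(clipped / total)
    if min(precisions) == 0:
        return 0.0
    log_p = sum(math.log(p) for p in precisions) / max_n
    # brevity penalty punishes candidates shorter than the reference
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(log_p)

cand = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
score = bleu(cand, ref)
```

Clipping is what stops a candidate from scoring well by repeating a common reference word, and the brevity penalty stops it from scoring well by being trivially short.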
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers, all of them capable of stabilizing a specific LTI process, in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
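The bidirectional structure, one recurrent pass in positive time and one in negative time with the two hidden states concatenated per step, can be sketched as a minimal NumPy forward pass. The weights here are random and training is omitted entirely:

```python
import numpy as np

rng = np.random.default_rng(2)

def rnn_pass(xs, Wx, Wh, b):
    """Simple tanh RNN over a sequence; returns the hidden state per step."""
    h = np.zeros(Wh.shape[0])
    out = []
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h + b)
        out.append(h)
    return out

def birnn(xs, params_f, params_b):
    """Bidirectional RNN: a forward pass over xs and a backward pass over
    the reversed sequence; outputs at each time step are concatenated."""
    hf = rnn_pass(xs, *params_f)
    hb = rnn_pass(xs[::-1], *params_b)[::-1]   # backward pass, re-reversed
    return [np.concatenate([f, b]) for f, b in zip(hf, hb)]

d_in, d_h, T = 3, 4, 5
make = lambda: (rng.standard_normal((d_h, d_in)) * 0.5,   # input weights
                rng.standard_normal((d_h, d_h)) * 0.5,    # recurrent weights
                np.zeros(d_h))                            # bias
xs = [rng.standard_normal(d_in) for _ in range(T)]
hs = birnn(xs, make(), make())
```

Because the backward pass has already consumed the whole sequence, the concatenated state at step t depends on both past and future inputs, which is exactly what removes the preset-future-frame limitation mentioned above.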
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation-of-origin evidence intended for Bob, and non-repudiation-of-receipt evidence destined for Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with a trusted third party (TTP). For the latter ones we discuss the evolution of the TTP's involvement and, among others, describe the most recent protocol using a transparent TTP. We also discuss some ad hoc problems related to the management of non-repudiation evidence.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb for designing the GA operators and selecting the GA parameters; instead, trial and error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally rather than probabilistically. Because no crossover rate or mutation rate needs to be selected, the proposed improved GA can be applied to a problem more easily than conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional one and other methods.
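A conditional-operator GA in this spirit might look like the sketch below on a toy onemax problem. The specific firing conditions (crossover only when the parents differ, mutation only when the child duplicates a parent) are illustrative assumptions, not necessarily the paper's rules:

```python
import random

random.seed(3)

def fitness(bits):
    return sum(bits)                       # onemax: count the 1-bits

def conditional_ga(n_bits=20, pop_size=10, gens=40):
    """GA with conditional operators instead of crossover/mutation rates:
    crossover fires only when the two parents actually differ, and
    mutation fires only when crossover produced a child identical to a
    parent.  No probability parameters need to be chosen."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                 # elitism: keep the two best
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:5], 2)
            if p1 != p2:                   # conditional crossover
                cut = random.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            if child == p1 or child == p2:  # conditional mutation
                i = random.randrange(n_bits)
                child[i] ^= 1               # flip one bit
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = conditional_ga()
```

Because elites are carried over unchanged, the best fitness never decreases, and the conditional mutation keeps injecting variation exactly when the population starts to converge.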
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results, and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Exponential Stability of Homogeneous Positive Systems of Degree One With Time-Varying Delays While the asymptotic stability of positive linear systems in the presence of bounded time delays has been thoroughly investigated, the theory for nonlinear positive systems is considerably less well-developed. This technical note presents a set of conditions for establishing delay-independent stability and bounding the decay rate of a significant class of nonlinear positive systems which includes positive linear systems as a special case. Specifically, when the time delays have a known upper bound, we derive necessary and sufficient conditions for exponential stability of: a) continuous-time positive systems whose vector fields are homogeneous and cooperative and b) discrete-time positive systems whose vector fields are homogeneous and order-preserving. We then present explicit expressions that allow us to quantify the impact of delays on the decay rate and show that the best decay rate of positive linear systems that our bounds provide can be found via convex optimization. Finally, we extend the results to general linear systems with time-varying delays.
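For the positive linear special case included above, the delay-independent condition can be stated concretely (a standard fact about positive linear systems, recalled here for concreteness):

```latex
\dot{x}(t) = A\,x(t) + B\,x\big(t - \tau(t)\big), \qquad 0 \le \tau(t) \le \tau_{\max},
```

with $A$ Metzler (nonnegative off-diagonal entries) and $B \ge 0$ entrywise, so that the system is positive. The system is exponentially stable for every bounded delay $\tau(\cdot)$ if and only if the delay-free matrix $A + B$ is Hurwitz, equivalently, if there exists a vector $v \succ 0$ with $(A+B)^{\mathsf{T}} v \prec 0$. The decay-rate bounds described in the abstract quantify how $\tau_{\max}$ degrades the achievable exponential rate, and the search for $v$ is the convex program mentioned there.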
On the Decay Rates of Homogeneous Positive Systems of Any Degree With Time-Varying Delays. This technical note studies the stability problem of homogeneous positive systems of any degree with time-varying delays. Delay-independent conditions are derived for asymptotic and finite-time stability. Estimates on the decay rates, which reveal how the system delays affect the rates of convergence, are obtained. More precisely, this technical note features three contributions. First, we derive a necessary and sufficient condition for global polynomial stability of continuous-time homogeneous cooperative systems with time-varying delays when the degree of homogeneity is greater than one. Second, we characterize finite-time stability of continuous-time homogeneous cooperative delay-free systems of degree smaller than one. Finally, for discrete-time positive systems with time-varying delays, a local exponential stability criterion is established when the vector fields are order-preserving and homogeneous of degree greater than one. An illustrative example is given to show the effectiveness of our results.
Generalized dilations and numerically solving discrete-time homogeneous optimization problems We introduce generalized dilations, a broader class of operators than that of dilations, and consider homogeneity with respect to this new class of dilations. For discrete-time systems that are asymptotically controllable and homogeneous (with degree zero) we propose a method to numerically approximate any homogeneous value function (solution to an infinite horizon optimization problem) to arbitrary accuracy. We also show that the method can be used to generate an offline computed stabilizing feedback law.
On the Stabilizability and Consensus of Positive Homogeneous Multi-Agent Dynamical Systems. In this note we consider a supervisory control scheme that achieves either asymptotic stability or consensus for a group of homogenous agents described by a positive state-space model. Each agent is modeled by means of the same SISO positive state-space model, and the supervisory controller, representing the information exchange among the agents, is implemented via a static output feedback. Necessary and sufficient conditions for the asymptotic stability, or the consensus of all agents, are derived under the positivity constraint.
Asynchronous Output Feedback Control for a Class of Conic-Type Nonlinear Hidden Markov Jump Systems Within a Finite-Time Interval This article focuses on the finite-time asynchronous output feedback control scheme for a class of Markov jump systems subject to external disturbances and nonlinearities. The conic-type nonlinearities hold a constraint condition which locates in a known hyper-sphere with an indefinite center. In addition, the asynchronization phenomenon occurs between the system and the controller, which can be r...
Input-To-State Stability Analysis For Homogeneous Hybrid Systems With Bounded Time-Varying Delays This paper studies the problem of the input-to-state stability for homogeneous hybrid systems with bounded time-varying delays. First, some homogeneous concepts and properties are introduced and applied to hybrid systems with bounded time-varying delays. Furthermore, by using Lyapunov-Razumikhin approach, some sufficient conditions are extended to the hybrid systems to analyze the input-to-state stability, and with the homogeneous assumption, some special results can be obtained. Finally, numerical examples are given to illustrate the applicability and the effectiveness of the proposed results.
Finite-Time Stability of Continuous Autonomous Systems Finite-time stability is defined for equilibria of continuous but non-Lipschitzian autonomous systems. Continuity, Lipschitz continuity, and Hölder continuity of the settling-time function are studied and illustrated with several examples. Lyapunov and converse Lyapunov results involving scalar differential inequalities are given for finite-time stability. It is shown that the regularity properties of the Lyapunov function and those of the settling-time function are related. Consequently, converse Lyapunov results can only assure the existence of continuous Lyapunov functions. Finally, the sensitivity of finite-time-stable systems to perturbations is investigated.
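The settling-time ideas above can be illustrated with a standard scalar example (a textbook case, not quoted from the abstract itself): for $0 < \alpha < 1$ the system

```latex
\dot{x} = -k\,\mathrm{sgn}(x)\,|x|^{\alpha}, \qquad k > 0,\; 0 < \alpha < 1,
```

is continuous but non-Lipschitz at the origin, and every trajectory reaches $x = 0$ in the finite settling time

```latex
T(x_0) = \frac{|x_0|^{\,1-\alpha}}{k\,(1-\alpha)},
```

obtained by integrating $\mathrm{d}|x|/\mathrm{d}t = -k|x|^{\alpha}$, so the settling-time function is Hölder continuous in $x_0$ but not Lipschitz at the origin.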
Image quality assessment: from error visibility to structural similarity. Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
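For reference, a commonly cited form of the Structural Similarity Index compares local means $\mu_x, \mu_y$, variances $\sigma_x^2, \sigma_y^2$, and covariance $\sigma_{xy}$ of the two image patches (the small constants $C_1, C_2$ stabilize the ratios when the denominators are near zero):

```latex
\mathrm{SSIM}(x, y) =
\frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}
     {(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}.
```

The index is computed over a sliding window and averaged across the image to give a single quality score.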
Deep learning Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech. Machine-learning technology powers many aspects of modern society: from web searches to content filtering on social networks to recommendations on e-commerce websites, and it is increasingly present in consumer products such as cameras and smartphones. Machine-learning systems are used to identify objects in images, transcribe speech into text, match news items, posts or products with users' interests, and select relevant results of search. Increasingly, these applications make use of a class of techniques called deep learning. Conventional machine-learning techniques were limited in their ability to process natural data in their raw form. For decades, constructing a pattern-recognition or machine-learning system required careful engineering and considerable domain expertise to design a feature extractor that transformed the raw data (such as the pixel values of an image) into a suitable internal representation or feature vector from which the learning subsystem, often a classifier, could detect or classify patterns in the input. 
Representation learning is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification. Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level. With the composition of enough such transformations, very complex functions can be learned. For classification tasks, higher layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations. An image, for example, comes in the form of an array of pixel values, and the learned features in the first layer of representation typically represent the presence or absence of edges at particular orientations and locations in the image. The second layer typically detects motifs by spotting particular arrangements of edges, regardless of small variations in the edge positions. The third layer may assemble motifs into larger combinations that correspond to parts of familiar objects, and subsequent layers would detect objects as combinations of these parts. The key aspect of deep learning is that these layers of features are not designed by human engineers: they are learned from data using a general-purpose learning procedure. Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years. It has turned out to be very good at discovering intricate structures in high-dimensional data and is therefore applicable to many domains of science, business and government. 
In addition to beating records in image recognition1, 2, 3, 4 and speech recognition5, 6, 7, it has beaten other machine-learning techniques at predicting the activity of potential drug molecules8, analysing particle accelerator data9, 10, reconstructing brain circuits11, and predicting the effects of mutations in non-coding DNA on gene expression and disease12, 13. Perhaps more surprisingly, deep learning has produced extremely promising results for various tasks in natural language understanding14, particularly topic classification, sentiment analysis, question answering15 and language translation16, 17. We think that deep learning will have many more successes in the near future because it requires very little engineering by hand, so it can easily take advantage of increases in the amount of available computation and data. New learning algorithms and architectures that are currently being developed for deep neural networks will only accelerate this progress. The most common form of machine learning, deep or not, is supervised learning. Imagine that we want to build a system that can classify images as containing, say, a house, a car, a person or a pet. We first collect a large data set of images of houses, cars, people and pets, each labelled with its category. During training, the machine is shown an image and produces an output in the form of a vector of scores, one for each category. We want the desired category to have the highest score of all categories, but this is unlikely to happen before training. We compute an objective function that measures the error (or distance) between the output scores and the desired pattern of scores. The machine then modifies its internal adjustable parameters to reduce this error. These adjustable parameters, often called weights, are real numbers that can be seen as 'knobs' that define the input–output function of the machine. 
In a typical deep-learning system, there may be hundreds of millions of these adjustable weights, and hundreds of millions of labelled examples with which to train the machine. To properly adjust the weight vector, the learning algorithm computes a gradient vector that, for each weight, indicates by what amount the error would increase or decrease if the weight were increased by a tiny amount. The weight vector is then adjusted in the opposite direction to the gradient vector. The objective function, averaged over all the training examples, can be seen as a kind of hilly landscape in the high-dimensional space of weight values. The negative gradient vector indicates the direction of steepest descent in this landscape, taking it closer to a minimum, where the output error is low on average. In practice, most practitioners use a procedure called stochastic gradient descent (SGD). This consists of showing the input vector for a few examples, computing the outputs and the errors, computing the average gradient for those examples, and adjusting the weights accordingly. The process is repeated for many small sets of examples from the training set until the average of the objective function stops decreasing. It is called stochastic because each small set of examples gives a noisy estimate of the average gradient over all examples. This simple procedure usually finds a good set of weights surprisingly quickly when compared with far more elaborate optimization techniques18. After training, the performance of the system is measured on a different set of examples called a test set. This serves to test the generalization ability of the machine — its ability to produce sensible answers on new inputs that it has never seen during training. Many of the current practical applications of machine learning use linear classifiers on top of hand-engineered features. A two-class linear classifier computes a weighted sum of the feature vector components. 
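The SGD procedure described above can be sketched in a few lines: show a small set of examples, compute the average gradient for that set, and adjust the weights in the opposite direction. The linear model, data, and learning rate below are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))           # 256 examples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w                          # noiseless targets, for clarity

w = np.zeros(3)                         # adjustable weights ('knobs')
lr = 0.1                                # learning rate (step size)
for epoch in range(200):
    for start in range(0, len(X), 32):  # small sets of examples (minibatches)
        xb, yb = X[start:start + 32], y[start:start + 32]
        err = xb @ w - yb               # output errors on this minibatch
        grad = xb.T @ err / len(xb)     # noisy estimate of the average gradient
        w -= lr * grad                  # step opposite the gradient

print(np.round(w, 3))
```

Because each minibatch gives only a noisy estimate of the full-dataset gradient, individual steps wander, but the averaged descent still drives `w` toward the weights that generated the data.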
If the weighted sum is above a threshold, the input is classified as belonging to a particular category. Since the 1960s we have known that linear classifiers can only carve their input space into very simple regions, namely half-spaces separated by a hyperplane19. But problems such as image and speech recognition require the input–output function to be insensitive to irrelevant variations of the input, such as variations in position, orientation or illumination of an object, or variations in the pitch or accent of speech, while being very sensitive to particular minute variations (for example, the difference between a white wolf and a breed of wolf-like white dog called a Samoyed). At the pixel level, images of two Samoyeds in different poses and in different environments may be very different from each other, whereas two images of a Samoyed and a wolf in the same position and on similar backgrounds may be very similar to each other. A linear classifier, or any other 'shallow' classifier operating on raw pixels could not possibly distinguish the latter two, while putting the former two in the same category. This is why shallow classifiers require a good feature extractor that solves the selectivity–invariance dilemma — one that produces representations that are selective to the aspects of the image that are important for discrimination, but that are invariant to irrelevant aspects such as the pose of the animal. To make classifiers more powerful, one can use generic non-linear features, as with kernel methods20, but generic features such as those arising with the Gaussian kernel do not allow the learner to generalize well far from the training examples21. The conventional option is to hand design good feature extractors, which requires a considerable amount of engineering skill and domain expertise. But this can all be avoided if good features can be learned automatically using a general-purpose learning procedure. This is the key advantage of deep learning. 
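The two-class linear classifier described at the start of this passage is just a thresholded weighted sum. The weights, threshold, and feature vectors below are toy assumptions chosen for illustration.

```python
import numpy as np

def classify(features, weights, threshold):
    """Two-class linear classifier: weighted sum of feature components vs. a threshold."""
    return 1 if np.dot(features, weights) > threshold else 0

weights = np.array([0.4, -0.2, 0.7])
print(classify(np.array([1.0, 0.5, 1.0]), weights, 0.5))  # weighted sum = 1.0  -> class 1
print(classify(np.array([0.1, 1.0, 0.1]), weights, 0.5))  # weighted sum = -0.09 -> class 0
```

The decision boundary is the hyperplane where the weighted sum equals the threshold, which is exactly the "very simple regions, namely half-spaces" limitation noted above.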
A deep-learning architecture is a multilayer stack of simple modules, all (or most) of which are subject to learning, and many of which compute non-linear input–output mappings. Each module in the stack transforms its input to increase both the selectivity and the invariance of the representation. With multiple non-linear layers, say a depth of 5 to 20, a system can implement extremely intricate functions of its inputs that are simultaneously sensitive to minute details — distinguishing Samoyeds from white wolves — and insensitive to large irrelevant variations such as the background, pose, lighting and surrounding objects. From the earliest days of pattern recognition22, 23, the aim of researchers has been to replace hand-engineered features with trainable multilayer networks, but despite its simplicity, the solution was not widely understood until the mid 1980s. As it turns out, multilayer architectures can be trained by simple stochastic gradient descent. As long as the modules are relatively smooth functions of their inputs and of their internal weights, one can compute gradients using the backpropagation procedure. The idea that this could be done, and that it worked, was discovered independently by several different groups during the 1970s and 1980s24, 25, 26, 27. The backpropagation procedure to compute the gradient of an objective function with respect to the weights of a multilayer stack of modules is nothing more than a practical application of the chain rule for derivatives. The key insight is that the derivative (or gradient) of the objective with respect to the input of a module can be computed by working backwards from the gradient with respect to the output of that module (or the input of the subsequent module) (Fig. 1). 
The backpropagation equation can be applied repeatedly to propagate gradients through all modules, starting from the output at the top (where the network produces its prediction) all the way to the bottom (where the external input is fed). Once these gradients have been computed, it is straightforward to compute the gradients with respect to the weights of each module. Many applications of deep learning use feedforward neural network architectures (Fig. 1), which learn to map a fixed-size input (for example, an image) to a fixed-size output (for example, a probability for each of several categories). To go from one layer to the next, a set of units compute a weighted sum of their inputs from the previous layer and pass the result through a non-linear function. At present, the most popular non-linear function is the rectified linear unit (ReLU), which is simply the half-wave rectifier f(z) = max(z, 0). In past decades, neural nets used smoother non-linearities, such as tanh(z) or 1/(1 + exp(−z)), but the ReLU typically learns much faster in networks with many layers, allowing training of a deep supervised network without unsupervised pre-training28. Units that are not in the input or output layer are conventionally called hidden units. The hidden layers can be seen as distorting the input in a non-linear way so that categories become linearly separable by the last layer (Fig. 1). In the late 1990s, neural nets and backpropagation were largely forsaken by the machine-learning community and ignored by the computer-vision and speech-recognition communities. It was widely thought that learning useful, multistage, feature extractors with little prior knowledge was infeasible. In particular, it was commonly thought that simple gradient descent would get trapped in poor local minima — weight configurations for which no small change would reduce the average error. In practice, poor local minima are rarely a problem with large networks. 
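The forward pass through ReLU units and the backward chain-rule sweep described above can be written out explicitly for one hidden layer. The layer sizes, random weights, and squared-error objective are illustrative assumptions.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)           # the half-wave rectifier f(z) = max(z, 0)

rng = np.random.default_rng(1)
x = rng.normal(size=4)                  # fixed-size input
W1 = rng.normal(size=(5, 4)) * 0.1      # input -> hidden weights
W2 = rng.normal(size=(2, 5)) * 0.1      # hidden -> output weights
target = np.array([1.0, 0.0])

# Forward pass: weighted sums passed through the non-linearity.
z1 = W1 @ x
h = relu(z1)
y = W2 @ h
loss = 0.5 * np.sum((y - target) ** 2)

# Backward pass: work from the gradient at the output toward the input.
dy = y - target                         # dLoss/dy at the top of the stack
dW2 = np.outer(dy, h)                   # gradient w.r.t. output weights
dh = W2.T @ dy                          # gradient w.r.t. hidden activations
dz1 = dh * (z1 > 0)                     # ReLU passes gradient only where z1 > 0
dW1 = np.outer(dz1, x)                  # gradient w.r.t. input weights
```

Each backward line is one application of the chain rule: the gradient with respect to a module's input is computed from the gradient with respect to its output, exactly as the text describes.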
Regardless of the initial conditions, the system nearly always reaches solutions of very similar quality. Recent theoretical and empirical results strongly suggest that local minima are not a serious issue in general. Instead, the landscape is packed with a combinatorially large number of saddle points where the gradient is zero, and the surface curves up in most dimensions and curves down in the remainder29, 30. The analysis seems to show that saddle points with only a few downward curving directions are present in very large numbers, but almost all of them have very similar values of the objective function. Hence, it does not much matter which of these saddle points the algorithm gets stuck at. Interest in deep feedforward networks was revived around 2006 (refs 31,32,33,34) by a group of researchers brought together by the Canadian Institute for Advanced Research (CIFAR). The researchers introduced unsupervised learning procedures that could create layers of feature detectors without requiring labelled data. The objective in learning each layer of feature detectors was to be able to reconstruct or model the activities of feature detectors (or raw inputs) in the layer below. By 'pre-training' several layers of progressively more complex feature detectors using this reconstruction objective, the weights of a deep network could be initialized to sensible values. A final layer of output units could then be added to the top of the network and the whole deep system could be fine-tuned using standard backpropagation33, 34, 35. This worked remarkably well for recognizing handwritten digits or for detecting pedestrians, especially when the amount of labelled data was very limited36. The first major application of this pre-training approach was in speech recognition, and it was made possible by the advent of fast graphics processing units (GPUs) that were convenient to program37 and allowed researchers to train networks 10 or 20 times faster. 
In 2009, the approach was used to map short temporal windows of coefficients extracted from a sound wave to a set of probabilities for the various fragments of speech that might be represented by the frame in the centre of the window. It achieved record-breaking results on a standard speech recognition benchmark that used a small vocabulary38 and was quickly developed to give record-breaking results on a large vocabulary task39. By 2012, versions of the deep net from 2009 were being developed by many of the major speech groups6 and were already being deployed in Android phones. For smaller data sets, unsupervised pre-training helps to prevent overfitting40, leading to significantly better generalization when the number of labelled examples is small, or in a transfer setting where we have lots of examples for some 'source' tasks but very few for some 'target' tasks. Once deep learning had been rehabilitated, it turned out that the pre-training stage was only needed for small data sets. There was, however, one particular type of deep, feedforward network that was much easier to train and generalized much better than networks with full connectivity between adjacent layers. This was the convolutional neural network (ConvNet)41, 42. It achieved many practical successes during the period when neural networks were out of favour and it has recently been widely adopted by the computer-vision community. ConvNets are designed to process data that come in the form of multiple arrays, for example a colour image composed of three 2D arrays containing pixel intensities in the three colour channels. Many data modalities are in the form of multiple arrays: 1D for signals and sequences, including language; 2D for images or audio spectrograms; and 3D for video or volumetric images. There are four key ideas behind ConvNets that take advantage of the properties of natural signals: local connections, shared weights, pooling and the use of many layers. 
The architecture of a typical ConvNet (Fig. 2) is structured as a series of stages. The first few stages are composed of two types of layers: convolutional layers and pooling layers. Units in a convolutional layer are organized in feature maps, within which each unit is connected to local patches in the feature maps of the previous layer through a set of weights called a filter bank. The result of this local weighted sum is then passed through a non-linearity such as a ReLU. All units in a feature map share the same filter bank. Different feature maps in a layer use different filter banks. The reason for this architecture is twofold. First, in array data such as images, local groups of values are often highly correlated, forming distinctive local motifs that are easily detected. Second, the local statistics of images and other signals are invariant to location. In other words, if a motif can appear in one part of the image, it could appear anywhere, hence the idea of units at different locations sharing the same weights and detecting the same pattern in different parts of the array. Mathematically, the filtering operation performed by a feature map is a discrete convolution, hence the name. Although the role of the convolutional layer is to detect local conjunctions of features from the previous layer, the role of the pooling layer is to merge semantically similar features into one. Because the relative positions of the features forming a motif can vary somewhat, reliably detecting the motif can be done by coarse-graining the position of each feature. A typical pooling unit computes the maximum of a local patch of units in one feature map (or in a few feature maps). Neighbouring pooling units take input from patches that are shifted by more than one row or column, thereby reducing the dimension of the representation and creating an invariance to small shifts and distortions. 
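The convolutional stage just described — a shared filter bank sliding over the array, a ReLU non-linearity, then max pooling over local patches — can be sketched directly. The image and filter values below are toy assumptions; the filter responds to intensity increasing from left to right.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid discrete convolution of one feature map with one shared filter."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Local weighted sum: the same filter bank at every location.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Keep the maximum of each non-overlapping size x size patch."""
    H, W = fmap.shape
    out = np.zeros((H // size, W // size))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = fmap[i * size:(i + 1) * size,
                             j * size:(j + 1) * size].max()
    return out

image = np.arange(36, dtype=float).reshape(6, 6)   # intensity ramps left to right
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)          # responds to that ramp
fmap = np.maximum(conv2d(image, kernel), 0.0)      # convolution + ReLU
pooled = max_pool(fmap)                            # coarse-grained positions
print(pooled.shape)
```

Because every unit in the feature map applies the same `kernel`, the motif is detected wherever it appears, and the pooling step makes the representation tolerant to small shifts.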
Two or three stages of convolution, non-linearity and pooling are stacked, followed by more convolutional and fully-connected layers. Backpropagating gradients through a ConvNet is as simple as through a regular deep network, allowing all the weights in all the filter banks to be trained. Deep neural networks exploit the property that many natural signals are compositional hierarchies, in which higher-level features are obtained by composing lower-level ones. In images, local combinations of edges form motifs, motifs assemble into parts, and parts form objects. Similar hierarchies exist in speech and text, from sounds to phones, phonemes, syllables, words and sentences. The pooling allows representations to vary very little when elements in the previous layer vary in position and appearance. The convolutional and pooling layers in ConvNets are directly inspired by the classic notions of simple cells and complex cells in visual neuroscience43, and the overall architecture is reminiscent of the LGN–V1–V2–V4–IT hierarchy in the visual cortex ventral pathway44. When ConvNet models and monkeys are shown the same picture, the activations of high-level units in the ConvNet explain half of the variance of random sets of 160 neurons in the monkey's inferotemporal cortex45. ConvNets have their roots in the neocognitron46, the architecture of which was somewhat similar, but did not have an end-to-end supervised-learning algorithm such as backpropagation. A primitive 1D ConvNet called a time-delay neural net was used for the recognition of phonemes and simple words47, 48. There have been numerous applications of convolutional networks going back to the early 1990s, starting with time-delay neural networks for speech recognition47 and document reading42. The document reading system used a ConvNet trained jointly with a probabilistic model that implemented language constraints. By the late 1990s this system was reading over 10% of all the cheques in the United States. 
A number of ConvNet-based optical character recognition and handwriting recognition systems were later deployed by Microsoft49. ConvNets were also experimented with in the early 1990s for object detection in natural images, including faces and hands50, 51, and for face recognition52. Since the early 2000s, ConvNets have been applied with great success to the detection, segmentation and recognition of objects and regions in images. These were all tasks in which labelled data was relatively abundant, such as traffic sign recognition53, the segmentation of biological images54 particularly for connectomics55, and the detection of faces, text, pedestrians and human bodies in natural images36, 50, 51, 56, 57, 58. A major recent practical success of ConvNets is face recognition59. Importantly, images can be labelled at the pixel level, which will have applications in technology, including autonomous mobile robots and self-driving cars60, 61. Companies such as Mobileye and NVIDIA are using such ConvNet-based methods in their upcoming vision systems for cars. Other applications gaining importance involve natural language understanding14 and speech recognition7. Despite these successes, ConvNets were largely forsaken by the mainstream computer-vision and machine-learning communities until the ImageNet competition in 2012. When deep convolutional networks were applied to a data set of about a million images from the web that contained 1,000 different classes, they achieved spectacular results, almost halving the error rates of the best competing approaches1. This success came from the efficient use of GPUs, ReLUs, a new regularization technique called dropout62, and techniques to generate more training examples by deforming the existing ones. This success has brought about a revolution in computer vision; ConvNets are now the dominant approach for almost all recognition and detection tasks4, 58, 59, 63, 64, 65 and approach human performance on some tasks. 
A recent stunning demonstration combines ConvNets and recurrent net modules for the generation of image captions (Fig. 3). Recent ConvNet architectures have 10 to 20 layers of ReLUs, hundreds of millions of weights, and billions of connections between units. Whereas training such large networks could have taken weeks only two years ago, progress in hardware, software and algorithm parallelization has reduced training times to a few hours. The performance of ConvNet-based vision systems has caused most major technology companies, including Google, Facebook, Microsoft, IBM, Yahoo!, Twitter and Adobe, as well as a quickly growing number of start-ups to initiate research and development projects and to deploy ConvNet-based image understanding products and services. ConvNets are easily amenable to efficient hardware implementations in chips or field-programmable gate arrays66, 67. A number of companies such as NVIDIA, Mobileye, Intel, Qualcomm and Samsung are developing ConvNet chips to enable real-time vision applications in smartphones, cameras, robots and self-driving cars. Deep-learning theory shows that deep nets have two different exponential advantages over classic learning algorithms that do not use distributed representations21. Both of these advantages arise from the power of composition and depend on the underlying data-generating distribution having an appropriate componential structure40. First, learning distributed representations enable generalization to new combinations of the values of learned features beyond those seen during training (for example, 2n combinations are possible with n binary features)68, 69. Second, composing layers of representation in a deep net brings the potential for another exponential advantage70 (exponential in the depth). The hidden layers of a multilayer neural network learn to represent the network's inputs in a way that makes it easy to predict the target outputs. 
This is nicely demonstrated by training a multilayer neural network to predict the next word in a sequence from a local context of earlier words71. Each word in the context is presented to the network as a one-of-N vector, that is, one component has a value of 1 and the rest are 0. In the first layer, each word creates a different pattern of activations, or word vectors (Fig. 4). In a language model, the other layers of the network learn to convert the input word vectors into an output word vector for the predicted next word, which can be used to predict the probability for any word in the vocabulary to appear as the next word. The network learns word vectors that contain many active components each of which can be interpreted as a separate feature of the word, as was first demonstrated27 in the context of learning distributed representations for symbols. These semantic features were not explicitly present in the input. They were discovered by the learning procedure as a good way of factorizing the structured relationships between the input and output symbols into multiple 'micro-rules'. Learning word vectors turned out to also work very well when the word sequences come from a large corpus of real text and the individual micro-rules are unreliable71. When trained to predict the next word in a news story, for example, the learned word vectors for Tuesday and Wednesday are very similar, as are the word vectors for Sweden and Norway. Such representations are called distributed representations because their elements (the features) are not mutually exclusive and their many configurations correspond to the variations seen in the observed data. These word vectors are composed of learned features that were not determined ahead of time by experts, but automatically discovered by the neural network. Vector representations of words learned from text are now very widely used in natural language applications14, 17, 72, 73, 74, 75, 76. 
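The one-of-N input coding and the first-layer word vectors described above amount to a simple matrix lookup: multiplying a one-hot vector by the first-layer weight matrix selects that word's row of learned features. The vocabulary and random "learned" values below are toy assumptions.

```python
import numpy as np

vocab = ["tuesday", "wednesday", "sweden", "norway"]
V, d = len(vocab), 3

rng = np.random.default_rng(2)
E = rng.normal(size=(V, d))             # first-layer weights: one feature row per word

def one_hot(word):
    """One-of-N coding: one component is 1, the rest are 0."""
    v = np.zeros(V)
    v[vocab.index(word)] = 1.0
    return v

# Multiplying the one-of-N vector by the weight matrix selects a row:
# the word's vector of real-valued features.
wv = one_hot("sweden") @ E
assert np.allclose(wv, E[vocab.index("sweden")])
```

After training on real text, rows for semantically related words such as "Sweden" and "Norway" end up close in this feature space, which is what lets the model generalize across related word sequences.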
The issue of representation lies at the heart of the debate between the logic-inspired and the neural-network-inspired paradigms for cognition. In the logic-inspired paradigm, an instance of a symbol is something for which the only property is that it is either identical or non-identical to other symbol instances. It has no internal structure that is relevant to its use; and to reason with symbols, they must be bound to the variables in judiciously chosen rules of inference. By contrast, neural networks just use big activity vectors, big weight matrices and scalar non-linearities to perform the type of fast 'intuitive' inference that underpins effortless commonsense reasoning. Before the introduction of neural language models71, the standard approach to statistical modelling of language did not exploit distributed representations: it was based on counting frequencies of occurrences of short symbol sequences of length up to N (called N-grams). The number of possible N-grams is on the order of VN, where V is the vocabulary size, so taking into account a context of more than a handful of words would require very large training corpora. N-grams treat each word as an atomic unit, so they cannot generalize across semantically related sequences of words, whereas neural language models can because they associate each word with a vector of real valued features, and semantically related words end up close to each other in that vector space (Fig. 4). When backpropagation was first introduced, its most exciting use was for training recurrent neural networks (RNNs). For tasks that involve sequential inputs, such as speech and language, it is often better to use RNNs (Fig. 5). RNNs process an input sequence one element at a time, maintaining in their hidden units a 'state vector' that implicitly contains information about the history of all the past elements of the sequence. 
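The N-gram counting approach that neural language models replaced can be illustrated with a toy bigram model. The corpus here is invented; note that a full table for a vocabulary of size V would need on the order of V^N entries, which is why large contexts require very large corpora.

```python
# Minimal sketch of classic N-gram language modelling: count the
# frequencies of short word sequences and estimate next-word
# probabilities from the counts.
from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()
N = 2  # bigrams

ngrams = Counter(tuple(corpus[i:i + N]) for i in range(len(corpus) - N + 1))
context_counts = Counter(tuple(corpus[i:i + N - 1]) for i in range(len(corpus) - N + 1))

def p_next(context, word):
    """P(word | context) by maximum likelihood: count(context + word) / count(context)."""
    return ngrams[tuple(context) + (word,)] / context_counts[tuple(context)]

print(p_next(["the"], "cat"))  # 2 of the 3 occurrences of "the" are followed by "cat"
```

Because each word is an atomic unit here, counts for "the cat" tell the model nothing about "the dog", which is exactly the generalization failure that distributed word vectors avoid.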
When we consider the outputs of the hidden units at different discrete time steps as if they were the outputs of different neurons in a deep multilayer network (Fig. 5, right), it becomes clear how we can apply backpropagation to train RNNs. RNNs are very powerful dynamic systems, but training them has proved to be problematic because the backpropagated gradients either grow or shrink at each time step, so over many time steps they typically explode or vanish77, 78. Thanks to advances in their architecture79, 80 and ways of training them81, 82, RNNs have been found to be very good at predicting the next character in the text83 or the next word in a sequence75, but they can also be used for more complex tasks. For example, after reading an English sentence one word at a time, an English 'encoder' network can be trained so that the final state vector of its hidden units is a good representation of the thought expressed by the sentence. This thought vector can then be used as the initial hidden state of (or as extra input to) a jointly trained French 'decoder' network, which outputs a probability distribution for the first word of the French translation. If a particular first word is chosen from this distribution and provided as input to the decoder network it will then output a probability distribution for the second word of the translation and so on until a full stop is chosen17, 72, 76. Overall, this process generates sequences of French words according to a probability distribution that depends on the English sentence. This rather naive way of performing machine translation has quickly become competitive with the state-of-the-art, and this raises serious doubts about whether understanding a sentence requires anything like the internal symbolic expressions that are manipulated by using inference rules. It is more compatible with the view that everyday reasoning involves many simultaneous analogies that each contribute plausibility to a conclusion84, 85. 
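The recurrent state update described above can be written out explicitly. The following is a minimal sketch with arbitrary illustrative weights; it shows that unrolling the network in time reuses the same weight matrices at every step, which is what makes the unrolled RNN a deep feedforward net with shared weights.

```python
# Minimal RNN state update: h_t = tanh(W_hh @ h_{t-1} + W_xh @ x_t),
# written out with plain lists. Weight values are arbitrary illustrations.
import math

def rnn_step(h, x, W_hh, W_xh):
    """One recurrent step; the state vector h summarizes the past inputs."""
    return [
        math.tanh(
            sum(W_hh[i][j] * h[j] for j in range(len(h))) +
            sum(W_xh[i][j] * x[j] for j in range(len(x)))
        )
        for i in range(len(h))
    ]

# Unrolling in time: the same (W_hh, W_xh) are applied at every step.
W_hh = [[0.5, -0.3], [0.2, 0.1]]
W_xh = [[1.0, 0.0], [0.0, 1.0]]
h = [0.0, 0.0]
for x in [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]:
    h = rnn_step(h, x, W_hh, W_xh)
print(h)  # final state vector after three inputs
```

Backpropagating through this loop multiplies gradients by the same recurrent weights at every step, which is why they tend to explode or vanish over many time steps.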
Instead of translating the meaning of a French sentence into an English sentence, one can learn to 'translate' the meaning of an image into an English sentence (Fig. 3). The encoder here is a deep ConvNet that converts the pixels into an activity vector in its last hidden layer. The decoder is an RNN similar to the ones used for machine translation and neural language modelling. There has been a surge of interest in such systems recently (see examples mentioned in ref. 86). RNNs, once unfolded in time (Fig. 5), can be seen as very deep feedforward networks in which all the layers share the same weights. Although their main purpose is to learn long-term dependencies, theoretical and empirical evidence shows that it is difficult to learn to store information for very long78. To correct for that, one idea is to augment the network with an explicit memory. The first proposal of this kind is the long short-term memory (LSTM) networks that use special hidden units, the natural behaviour of which is to remember inputs for a long time79. A special unit called the memory cell acts like an accumulator or a gated leaky neuron: it has a connection to itself at the next time step that has a weight of one, so it copies its own real-valued state and accumulates the external signal, but this self-connection is multiplicatively gated by another unit that learns to decide when to clear the content of the memory. LSTM networks have subsequently proved to be more effective than conventional RNNs, especially when they have several layers for each time step87, enabling an entire speech recognition system that goes all the way from acoustics to the sequence of characters in the transcription. LSTM networks or related forms of gated units are also currently used for the encoder and decoder networks that perform so well at machine translation17, 72, 76. Over the past year, several authors have made different proposals to augment RNNs with a memory module. 
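The accumulator behaviour of the memory cell, with its weight-one self-connection and multiplicative gate, can be sketched as follows. Here the gate is set by hand rather than learned, so this is a simplified illustration of the mechanism, not a full LSTM unit.

```python
# Gated leaky accumulator: the cell copies its own state through a
# self-connection of weight one and adds the external signal; a
# multiplicative gate in [0, 1] decides when to keep or clear the content.
def memory_cell_step(c_prev, external_signal, keep_gate):
    """c_t = keep_gate * c_prev + external_signal."""
    return keep_gate * c_prev + external_signal

c = 0.0
for x in [1.0, 2.0, 0.5]:
    c = memory_cell_step(c, x, keep_gate=1.0)  # gate open: pure accumulation
print(c)  # 3.5

c = memory_cell_step(c, 0.0, keep_gate=0.0)    # gate closed: content cleared
print(c)  # 0.0
```

In an LSTM the gate value is itself computed by a learned unit from the current input and state, which is what lets the network decide when to remember and when to forget.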
Proposals include the Neural Turing Machine in which the network is augmented by a 'tape-like' memory that the RNN can choose to read from or write to88, and memory networks, in which a regular network is augmented by a kind of associative memory89. Memory networks have yielded excellent performance on standard question-answering benchmarks. The memory is used to remember the story about which the network is later asked to answer questions. Beyond simple memorization, neural Turing machines and memory networks are being used for tasks that would normally require reasoning and symbol manipulation. Neural Turing machines can be taught 'algorithms'. Among other things, they can learn to output a sorted list of symbols when their input consists of an unsorted sequence in which each symbol is accompanied by a real value that indicates its priority in the list88. Memory networks can be trained to keep track of the state of the world in a setting similar to a text adventure game, and after reading a story they can answer questions that require complex inference90. In one test example, the network is shown a 15-sentence version of The Lord of the Rings and correctly answers questions such as “where is Frodo now?”89. Unsupervised learning91, 92, 93, 94, 95, 96, 97, 98 had a catalytic effect in reviving interest in deep learning, but has since been overshadowed by the successes of purely supervised learning. Although we have not focused on it in this Review, we expect unsupervised learning to become far more important in the longer term. Human and animal learning is largely unsupervised: we discover the structure of the world by observing it, not by being told the name of every object. Human vision is an active process that sequentially samples the optic array in an intelligent, task-specific way using a small, high-resolution fovea with a large, low-resolution surround.
We expect much of the future progress in vision to come from systems that are trained end-to-end and combine ConvNets with RNNs that use reinforcement learning to decide where to look. Systems combining deep learning and reinforcement learning are in their infancy, but they already outperform passive vision systems99 at classification tasks and produce impressive results in learning to play many different video games100. Natural language understanding is another area in which deep learning is poised to make a large impact over the next few years. We expect systems that use RNNs to understand sentences or whole documents will become much better when they learn strategies for selectively attending to one part at a time76, 86. Ultimately, major progress in artificial intelligence will come about through systems that combine representation learning with complex reasoning. Although deep learning and simple reasoning have been used for speech and handwriting recognition for a long time, new paradigms are needed to replace rule-based manipulation of symbolic expressions by operations on large vectors101. The authors would like to thank the Natural Sciences and Engineering Research Council of Canada, the Canadian Institute For Advanced Research (CIFAR), the National Science Foundation and Office of Naval Research for support. Y.L. and Y.B. are CIFAR fellows.
Numerical Comparison of Some Penalty-Based Constraint Handling Techniques in Genetic Algorithms We study five penalty-function-based constraint handling techniques to be used with genetic algorithms in global optimization. Three of them, the method of superiority of feasible points, the method of parameter-free penalties and the method of adaptive penalties, have already been considered in the literature. In addition, we introduce two new modifications of these methods. We compare all five methods numerically on 33 test problems and report and analyze the results obtained in terms of accuracy, efficiency and reliability. The method of adaptive penalties turned out to be the most efficient, while the method of parameter-free penalties was the most reliable.
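The common idea behind penalty-based constraint handling can be sketched with a generic static-penalty scheme. This is a simplified illustration, not one of the five methods compared in the paper; the problem and penalty coefficient are invented.

```python
# Static penalty-based constraint handling: infeasible candidates are
# penalised in proportion to their constraint violation, so an
# unconstrained optimiser such as a GA can rank feasible and infeasible
# points on a single fitness scale.
def objective(x):
    return (x - 2.0) ** 2          # minimise

def violation(x):
    return max(0.0, x - 1.0)       # constraint: x <= 1

def penalised_fitness(x, r=1000.0):
    """Objective plus r times the constraint violation (r is hand-picked here)."""
    return objective(x) + r * violation(x)

# The feasible point x = 1 now beats the unconstrained optimum x = 2:
print(penalised_fitness(1.0) < penalised_fitness(2.0))  # True
```

Adaptive methods of the kind the paper finds most efficient adjust the coefficient r during the run instead of fixing it in advance.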
Development of a UAV-LiDAR System with Application to Forest Inventory We present the development of a low-cost Unmanned Aerial Vehicle-Light Detection and Ranging (UAV-LiDAR) system and an accompanying workflow to produce 3D point clouds. UAV systems provide an unrivalled combination of high temporal and spatial resolution datasets. The TerraLuma UAV-LiDAR system has been developed to take advantage of these properties and in doing so overcome some of the current limitations of the use of this technology within the forestry industry. A modified processing workflow including a novel trajectory determination algorithm fusing observations from a GPS receiver, an Inertial Measurement Unit (IMU) and a High Definition (HD) video camera is presented. The advantages of this workflow are demonstrated using a rigorous assessment of the spatial accuracy of the final point clouds. It is shown that due to the inclusion of video the horizontal accuracy of the final point cloud improves from 0.61 m to 0.34 m (RMS error assessed against ground control). The effect of the very high density point clouds (up to 62 points per m²) produced by the UAV-LiDAR system on the measurement of tree location, height and crown width is also assessed by performing repeat surveys over individual isolated trees. The standard deviation of tree height is shown to reduce from 0.26 m, when using data with a density of 8 points per m², to 0.15 m when the higher density data was used. Improvements in the uncertainty of the measurement of tree location, 0.80 m to 0.53 m, and crown width, 0.69 m to 0.61 m, are also shown.
Dynamic Management of Virtual Infrastructures Cloud infrastructures are becoming an appropriate solution to address the computational needs of scientific applications. However, the use of public or on-premises Infrastructure as a Service (IaaS) clouds requires users to have non-trivial system administration skills. Resource provisioning systems provide facilities to choose the most suitable Virtual Machine Images (VMI) and basic configuration of multiple instances and subnetworks. Other tasks, such as the configuration of cluster services, computational frameworks or specific applications, are not trivial on the cloud, and normally users have to manually select the VMI that best fits, including undesired additional services and software packages. This paper presents a set of components that ease the access and the usability of IaaS clouds by automating the VMI selection, deployment, configuration, software installation, monitoring and update of Virtual Appliances. It supports APIs from a large number of virtual platforms, making user applications cloud-agnostic. In addition it integrates a contextualization system to enable the installation and configuration of all the user-required applications, providing the user with a fully functional infrastructure. Therefore, golden VMIs and configuration recipes can be easily reused across different deployments. Moreover, the contextualization agent included in the framework supports horizontal (increase/decrease the number of resources) and vertical (increase/decrease resources within a running Virtual Machine) elasticity by properly reconfiguring the software installed, considering the configuration of the multiple resources running. This paves the way for automatic virtual infrastructure deployment, customization and elastic modification at runtime for IaaS clouds.
Adaptive Fuzzy Control With Prescribed Performance for Block-Triangular-Structured Nonlinear Systems. In this paper, an adaptive fuzzy control method with prescribed performance is proposed for multi-input and multioutput block-triangular-structured nonlinear systems with immeasurable states. Fuzzy logic systems are adopted to identify the unknown nonlinear system functions. Adaptive fuzzy state observers are designed to solve the problem of unmeasured states, and a new observer-based output-feedb...
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices with good potential in walking rehabilitation and augmentation. While a few studies have focused on the structure design and assistance force optimization of soft LLEs, little work has been conducted on the hardware circuit design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and to introduce its hardware circuit design. A soft LLE for hip flexion assistance and a scalable hardware circuit system are proposed. To assess the efficacy of the soft LLE, experimental tests were conducted to evaluate sensor data acquisition, force tracking performance, lower limb muscle activity and metabolic cost. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
A New Efficient Medical Image Cipher Based On Hybrid Chaotic Map And Dna Code In this paper, we propose a novel medical image encryption algorithm based on a hybrid model of deoxyribonucleic acid (DNA) masking, the Secure Hash Algorithm SHA-2 and a new hybrid chaotic map. Our study uses DNA sequences and operations and the hybrid chaotic map to strengthen the cryptosystem. The significant advantages of this approach consist in improving the information entropy, which is the most important feature of randomness, resisting various typical attacks and obtaining good experimental results. The theoretical analysis and experimental results show that the algorithm improves the encoding efficiency, enhances the security of the ciphertext, has a large key space and a high key sensitivity, and is able to resist statistical and exhaustive attacks.
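The chaotic ingredient of such ciphers can be illustrated generically. The paper's hybrid map, DNA masking and SHA-2 steps are not reproduced here; the standard logistic map stands in for the chaotic component, and the key is the map's initial condition, to which the trajectory is highly sensitive.

```python
# Generic chaos-based stream cipher sketch: iterate a chaotic map
# (here the standard logistic map x -> r*x*(1-x)), quantise its
# trajectory to a byte keystream, and XOR it with the image bytes.
def logistic_keystream(x0, n, r=3.99):
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)  # quantise state to one byte
    return out

def xor_cipher(data, key_x0):
    ks = logistic_keystream(key_x0, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

plain = bytes([10, 200, 33, 77])       # stand-in for image bytes
cipher = xor_cipher(plain, 0.3141592)  # the initial condition acts as the key
# XOR with the same keystream decrypts:
print(xor_cipher(cipher, 0.3141592) == plain)  # True
```

Real designs add diffusion and substitution stages on top of such a keystream; a bare XOR stream as above is only the starting point.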
Geometric attacks on image watermarking systems Synchronization errors can lead to significant performance loss in image watermarking methods, as the geometric attacks in the Stirmark benchmark software show. The authors describe the most common types of geometric attacks and survey proposed solutions.
Genetic Optimization Of Radial Basis Probabilistic Neural Networks This paper discusses using genetic algorithms (GA) to optimize the structure of radial basis probabilistic neural networks (RBPNN), including how to select the hidden centers of the first hidden layer and how to determine the controlling parameter of the Gaussian kernel functions. In constructing the genetic algorithm, a novel encoding method is proposed for optimizing the RBPNN structure. This encoding method not only makes the selected hidden centers sufficiently reflect the key distribution characteristics of the training samples set while reducing the number of hidden centers as far as possible, but also simultaneously determines the optimum controlling parameters of the Gaussian kernel functions matching the selected hidden centers. Additionally, we propose a new fitness function that keeps the designed RBPNN as simple as possible in network structure without losing network performance. Finally, we take two benchmark problems, discriminating the two-spiral problem and classifying the iris data, as examples to test and evaluate the designed GA. The experimental results illustrate that the designed GA can significantly reduce the required number of hidden centers, compared with the recursive orthogonal least square algorithm (ROLSA) and the modified K-means algorithm (MKA). In particular, statistical experiments prove that the RBPNN optimized by the designed GA still has better generalization performance than those obtained by the ROLSA and the MKA, in spite of the network scale having been greatly reduced. Additionally, our experimental results also demonstrate that the designed GA is suitable for optimizing radial basis function neural networks (RBFNN) as well.
A novel data hiding for color images based on pixel value difference and modulus function This paper proposes a novel data hiding method using pixel-value difference and a modulus function for color images, with a large embedding capacity (hiding at least 810,757 bits in a 512 × 512 host image) and high visual quality of the cover image. The proposed method fully takes into account the correlation of the R, G and B planes of a color image. The amount of information embedded in the R plane and the B plane is determined by the difference between the corresponding pixel value in the G plane and the median of the G pixel values in each pixel block. Furthermore, two sophisticated pixel-value adjustment processes are provided to maintain the division consistency and to solve underflow and overflow problems. Most importantly, the secret data can be completely extracted, as established by mathematical proof.
Quality optimized medical image information hiding algorithm that employs edge detection and data coding. Highlights: A method for embedding the patient's information into medical images is proposed. Two coding methods have been utilized to embed the EPR and improve imperceptibility. A cost optimization function is contributed to enhance the quality of the stego image. The proposed system is robust against textural-feature steganalysis. Objectives: The present work has the goal of developing a secure medical imaging information system based on a combined steganography and cryptography technique. It attempts to securely embed a patient's confidential information into his/her medical images. Methods: The proposed information security scheme conceals coded Electronic Patient Records (EPRs) in medical images in order to protect the EPRs' confidentiality without affecting the image quality and particularly the Region of Interest (ROI), which is essential for diagnosis. The secret EPR data is converted into ciphertext using a private symmetric encryption method. Since the Human Visual System (HVS) is less sensitive to alterations in sharp regions compared to uniform regions, a simple edge detection method has been introduced to identify and embed in edge pixels, which leads to improved stego image quality. In order to increase the embedding capacity, the algorithm embeds a variable number of bits (up to 3) in edge pixels based on the strength of the edges. Moreover, to increase the efficiency, two message coding mechanisms have been utilized to enhance the ±1 steganography. The first one, which is based on Hamming codes, is simple and fast, while the other, known as the Syndrome Trellis Code (STC), is more sophisticated as it attempts to find a stego image that is close to the cover image by minimizing the embedding impact.
The proposed steganography algorithm embeds the secret data bits into the Region of Non-Interest (RONI); owing to its importance, the ROI is preserved from modifications. Results: The experimental results demonstrate that the proposed method can embed a large amount of secret data without leaving noticeable distortion in the output image. The effectiveness of the proposed algorithm is also proven using one of the efficient steganalysis techniques. Conclusion: The proposed medical imaging information system proved capable of concealing EPR data and producing imperceptible stego images with minimal embedding distortion compared to other existing methods. In order to refrain from introducing any modifications to the ROI, the proposed system only utilizes the Region of Non-Interest (RONI) for embedding the EPR data.
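The Hamming-code-based coding mechanism the abstract alludes to can be sketched with the textbook (7,4) matrix-embedding construction; the details below are the standard construction, not necessarily the paper's exact scheme. Three message bits are hidden in seven cover bits by flipping at most one of them.

```python
# Matrix embedding with a Hamming code: the parity-check matrix H has
# the binary numbers 1..7 as columns, so the syndrome of the cover bits
# directly names the single bit to flip.
def syndrome(bits):
    """3-bit syndrome of 7 cover bits: XOR of the indices (1..7) of the set bits."""
    s = 0
    for i, b in enumerate(bits, start=1):
        if b:
            s ^= i
    return s

def embed(cover, message):
    """Make syndrome(cover) equal message (an int in 0..7), flipping at most 1 bit."""
    flip = syndrome(cover) ^ message
    stego = list(cover)
    if flip:                 # flip == 0 means the cover already carries the message
        stego[flip - 1] ^= 1
    return stego

cover = [1, 0, 1, 1, 0, 0, 1]
stego = embed(cover, 0b101)
print(syndrome(stego))                            # 5, the embedded message
print(sum(a != b for a, b in zip(cover, stego)))  # 1: only one bit changed
```

This is why such coding improves imperceptibility: the expected number of changed cover elements per message bit drops well below the 1/2 of plain LSB replacement.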
Secure data hiding techniques: a survey This article presents a detailed discussion of different prospects of digital image watermarking. The discussion includes: a brief comparison of similar information security techniques, the concept of the watermark embedding and extraction process, watermark characteristics and applications, common types of watermarking techniques, a major classification of watermarking attacks, and a brief summary of various secure watermarking techniques. Further, potential issues and some existing solutions are provided, and performance comparisons of the discussed techniques are presented in tabular format. The authors believe that this article's contribution will serve as a catalyst for potential researchers to implement efficient watermarking systems.
Cross-plane colour image encryption using a two-dimensional logistic tent modular map Chaotic systems are suitable for image encryption owing to their numerous intrinsic characteristics. However, chaotic maps and algorithmic structures employed in many existing chaos-based image encryption algorithms exhibit various shortcomings. To overcome these, in this study, we first construct a two-dimensional logistic tent modular map (2D-LTMM) and then develop a new colour image encryption algorithm (CIEA) using the 2D-LTMM, which is referred to as the LTMM-CIEA. Compared with the existing chaotic maps used for image encryption, the 2D-LTMM has a fairly wide and continuous chaotic range and more uniformly distributed trajectories. The LTMM-CIEA employs cross-plane permutation and non-sequential diffusion to obtain the diffusion and confusion properties. The cross-plane permutation concurrently shuffles the row and column positions of pixels within the three colour planes, and the non-sequential diffusion method processes the pixels in a secret and random order. The main contributions of this study are the construction of the 2D-LTMM to overcome the shortcomings of existing chaotic maps and the development of the LTMM-CIEA to concurrently encrypt the three colour planes of images. Simulation experiments and security evaluations show that the 2D-LTMM outperforms recently developed chaotic maps, and the LTMM-CIEA outperforms several state-of-the-art image encryption algorithms in terms of security.
LSB based non blind predictive edge adaptive image steganography. Image steganography is the art of hiding a secret message in grayscale or color images. Easy detection of the secret message breaks any state-of-the-art stego system. To reduce the probability of detection, data is embedded in a selected area of the image. Most existing adaptive image steganography techniques achieve only low embedding capacity. In this paper a high-capacity Predictive Edge Adaptive image steganography technique is proposed, where a selected area of the cover image is predicted using a Modified Median Edge Detector (MMED) predictor to embed the binary payload (data). The cover image used to embed the payload is a grayscale image. Experimental results show that the proposed scheme achieves better embedding capacity with a minimal level of distortion and a higher level of security. The proposed scheme is compared with existing image steganography schemes. Results show that the proposed scheme achieves a better embedding rate with a lower level of distortion.
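The LSB substitution underlying such schemes can be sketched in a few lines. The MMED edge predictor and adaptive pixel selection of the paper are not reproduced here; this shows only the basic embedding step, in which each payload bit replaces the least significant bit of one cover pixel.

```python
# Plain LSB embedding: each payload bit overwrites the least significant
# bit of one cover pixel, so each pixel value changes by at most 1.
def lsb_embed(pixels, payload_bits):
    return [(p & ~1) | b for p, b in zip(pixels, payload_bits)]

def lsb_extract(stego_pixels, n_bits):
    return [p & 1 for p in stego_pixels[:n_bits]]

cover = [120, 53, 201, 88]
bits = [1, 0, 1, 1]
stego = lsb_embed(cover, bits)
print(lsb_extract(stego, 4))                          # [1, 0, 1, 1]
print(max(abs(a - b) for a, b in zip(cover, stego)))  # distortion <= 1
```

Edge-adaptive variants restrict which pixels enter this loop, embedding only where the human visual system is least sensitive.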
Model-based periodic event-triggered control for linear systems Periodic event-triggered control (PETC) is a control strategy that combines ideas from conventional periodic sampled-data control and event-triggered control. By communicating periodically sampled sensor and controller data only when needed to guarantee stability or performance properties, PETC is capable of reducing the number of transmissions significantly, while still retaining a satisfactory closed-loop behavior. In this paper, we will study observer-based controllers for linear systems and propose advanced event-triggering mechanisms (ETMs) that will reduce communication in both the sensor-to-controller channels and the controller-to-actuator channels. By exploiting model-based computations, the new classes of ETMs will outperform existing ETMs in the literature. To model and analyze the proposed classes of ETMs, we present two frameworks based on perturbed linear and piecewise linear systems, leading to conditions for global exponential stability and L2-gain performance of the resulting closed-loop systems in terms of linear matrix inequalities. The proposed analysis frameworks can be used to make tradeoffs between the network utilization on the one hand and the performance in terms of L2-gains on the other. In addition, we will show that the closed-loop performance realized by an observer-based controller, implemented in a conventional periodic time-triggered fashion, can be recovered arbitrarily closely by a PETC implementation. This provides a justification for emulation-based design. Next to centralized model-based ETMs, we will also provide a decentralized setup suitable for large-scale systems, where sensors and actuators are physically distributed over a wide area. The improvements realized by the proposed model-based ETMs will be demonstrated using numerical examples.
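The transmission-saving idea of PETC can be illustrated with a relative-threshold trigger evaluated at periodic sampling instants. This is a simplified scalar illustration with invented samples and threshold, not the paper's observer-based LMI framework.

```python
# Periodic event-triggering sketch: the state is sampled periodically,
# but a new value is transmitted only when the gap between the current
# sample and the last transmitted value exceeds sigma * |current sample|.
def should_transmit(x_sample, x_last_sent, sigma=0.2):
    """Relative-threshold event condition."""
    return abs(x_sample - x_last_sent) > sigma * abs(x_sample)

# Periodically sampled states of some hypothetical plant:
samples = [1.0, 0.95, 0.7, 0.68, 0.4]
sent = samples[0]          # value currently held by the controller
log = []
for x in samples[1:]:
    if should_transmit(x, sent):
        sent = x           # transmit: controller now holds the fresh sample
        log.append(True)
    else:
        log.append(False)  # skip transmission; controller keeps the old value
print(log)                 # only 2 of the 4 samples are transmitted
```

Choosing sigma trades network utilization against closed-loop performance, which is exactly the tradeoff the paper's LMI conditions quantify.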
An optimal parallel algorithm for the minimum circle-cover problem Given a set of n circular arcs, the problem of finding a minimum number of circular arcs whose union covers the whole circle has been considered both in sequential and parallel computational models. Here we present a parallel algorithm in the EREW PRAM model that runs in O(log n) time using O(n) processors if the arcs are not given already sorted, and using O(n/log n) processors otherwise. Our algorithm is optimal since the problem has an Ω(n log n) lower bound for the unsorted-arcs case, and an Ω(n) lower bound for the sorted-arcs case. The previous best known parallel algorithm runs in O(log n) time using O(n2) processors, in the worst case, in the CREW PRAM model.
Improved Schemes for Visual Cryptography A (k,n)-threshold visual cryptography scheme ((k,n)-threshold VCS, for short) is a method to encode a secret image SI into n shadow images called shares such that any k or more shares enable the “visual” recovery of the secret image, but by inspecting less than k shares one cannot gain any information on the secret image. The “visual” recovery consists of xeroxing the shares onto transparencies, and then stacking them. Any k shares will reveal the secret image without any cryptographic computation. In this paper we analyze visual cryptography schemes in which the reconstruction of black pixels is perfect, that is, all the subpixels associated to a black pixel are black. For any value of k and n, where 2 ≤ k ≤ n, we give a construction for (k,n)-threshold VCS which improves on the best previously known constructions with respect to the pixel expansion (i.e., the number of subpixels each pixel of the original image is encoded into). We also provide a construction for coloured (2,n)-threshold VCS and for coloured (n,n)-threshold VCS. Both constructions improve on the best previously known constructions with respect to the pixel expansion.
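The mechanics of threshold VCS can be illustrated with the classic (2,2) construction with pixel expansion 2; this is the basic textbook scheme, not one of the improved constructions of the paper. Each secret pixel becomes two subpixels per share, and stacking transparencies acts as a boolean OR.

```python
# Classic (2,2)-threshold visual cryptography, pixel expansion 2:
# for a white secret pixel both shares get the same random subpixel
# pattern; for a black pixel they get complementary patterns, so
# stacking (OR) turns black pixels fully black.
import random

def split_pixel(secret_bit, rng):
    pattern = rng.choice([(0, 1), (1, 0)])  # 1 = opaque subpixel
    share1 = pattern
    share2 = pattern if secret_bit == 0 else tuple(1 - s for s in pattern)
    return share1, share2

def stack(s1, s2):
    """Stacking transparencies: a subpixel is black if it is black in either share."""
    return tuple(a | b for a, b in zip(s1, s2))

rng = random.Random(7)
for secret in (0, 1):
    a, b = split_pixel(secret, rng)
    print(secret, stack(a, b))
# A white secret pixel (0) stacks to one black subpixel out of two;
# a black secret pixel (1) stacks to two black subpixels.
```

Each share on its own is one of the two patterns with equal probability regardless of the secret bit, which is why a single share leaks nothing.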
On ear-based human identification in the mid-wave infrared spectrum In this paper the problem of human ear recognition in the Mid-wave infrared (MWIR) spectrum is studied in order to illustrate the advantages and limitations of ear-based biometrics that can operate in day and night time environments. The main contributions of this work are two-fold: First, a dual-band database is assembled that consists of visible (baseline) and mid-wave IR left and right profile face images. Profile face images were collected using a high definition mid-wave IR camera that is capable of acquiring thermal imprints of human skin. Second, a fully automated, thermal imaging based, ear recognition system is proposed that is designed and developed to perform real-time human identification. The proposed system tests several feature extraction methods, namely: (i) intensity-based such as independent component analysis (ICA), principal component analysis (PCA), and linear discriminant analysis (LDA); (ii) shape-based such as scale invariant feature transform (SIFT); as well as (iii) texture-based such as local binary patterns (LBP), and local ternary patterns (LTP). Experimental results suggest that LTP (followed by LBP) yields the best performance (Rank-1 = 80.68%) on manually segmented ears and (Rank-1 = 68.18%) on ear images that are automatically detected and segmented. By fusing the matching scores obtained by LBP and LTP, the identification performance increases by about 5%. Although these results are promising, the outcomes of our study suggest that the design and development of automated ear-based recognition systems that can operate efficiently in the lower part of the passive IR spectrum are very challenging tasks.
Massive MIMO Antenna Selection: Switching Architectures, Capacity Bounds, and Optimal Antenna Selection Algorithms. Antenna selection is a multiple-input multiple-output (MIMO) technology, which uses radio frequency (RF) switches to select a good subset of antennas. Antenna selection can alleviate the requirement on the number of RF transceivers, thus being attractive for massive MIMO systems. In massive MIMO antenna selection systems, RF switching architectures need to be carefully considered. In this paper, w...
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education is highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets, to compensate for robot’s limitations, however, this study applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. Robot’s behavioral style (social or neutral) hardly differed overall, however, seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
Scores (score_0–score_13): 1.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.025, 0, 0, 0, 0, 0, 0
A Survey of Intelligent Transportation Systems With the rapid pace of modern economic and technical development, the Intelligent Transportation System (ITS) is becoming more and more important and essential for a country. Practice shows that relying only on the construction and expansion of transport infrastructure does not fundamentally solve existing transportation problems and sometimes even makes them more severe. Every country is therefore actively exploring ITS technology to solve traffic problems. However, because countries differ in funding, technological maturity, and the traffic problems they face, their levels of ITS development and research areas are also distinct. This paper focuses on the comparison and analysis of international ITS research and integrates ITS technologies to design an integration model. We regard the traffic problem not only as a problem for individual countries, but also as a global topic. Countries should improve technology communication and update and enhance ITS techniques.
Auto-Alert: A Spatial and Temporal Architecture for Driving Assistance in Road Traffic Environments Over the last decade, the Advanced Driver Assistance System (ADAS) concept has evolved prominently. ADAS involves several advanced approaches such as automotive electronics, vehicular communication, RADAR, LIDAR, computer vision, and associated aspects such as machine learning and deep learning. Of these, computer vision and machine learning-based solutions have been especially effective, enabling real-time vehicle control, driver-aided systems, etc. However, most of the existing works deal with ADAS deployment and autonomous driving functionality in countries with well-disciplined lane traffic. These solutions and frameworks do not work in countries and cities with less-disciplined/chaotic traffic. Hence, critical ADAS functionalities and even L2/L3 autonomy levels in driving remain a major open challenge. In this regard, this work proposes a novel framework called Auto-Alert. Auto-Alert performs a two-stage spatial and temporal analysis based on the external traffic environment and a tri-axial sensor system for safe driving assistance. This work investigates time-series analysis with deep learning models for driving-event prediction and assistance. Further, as a basic premise, various essential design considerations for ADAS are discussed. Significantly, Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models are applied in the proposed Auto-Alert. It is shown that the LSTM outperforms the CNN, with 99% accuracy for the considered window length. Importantly, this also involves developing and demonstrating an efficient traffic monitoring and density estimation system. Further, this work provides benchmark results on the Indian Driving Dataset (IDD), specifically for the object detection task. The findings of this work demonstrate the significance of using CNN and LSTM networks to assist the driver in the holistic traffic environment.
Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems The following techniques for uncertainty and sensitivity analysis are briefly summarized: Monte Carlo analysis, differential analysis, response surface methodology, Fourier amplitude sensitivity test, Sobol' variance decomposition, and fast probability integration. Desirable features of Monte Carlo analysis in conjunction with Latin hypercube sampling are described in discussions of the following topics: (i) properties of random, stratified and Latin hypercube sampling, (ii) comparisons of random and Latin hypercube sampling, (iii) operations involving Latin hypercube sampling (i.e. correlation control, reweighting of samples to incorporate changed distributions, replicated sampling to test reproducibility of results), (iv) uncertainty analysis (i.e. cumulative distribution functions, complementary cumulative distribution functions, box plots), (v) sensitivity analysis (i.e. scatterplots, regression analysis, correlation analysis, rank transformations, searches for nonrandom patterns), and (vi) analyses involving stochastic (i.e. aleatory) and subjective (i.e. epistemic) uncertainty.
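The stratified-sampling idea behind Latin hypercube sampling can be sketched in a few lines. This is a minimal illustrative implementation, not the one from the cited survey: each dimension is split into `n_samples` equal strata, one point is drawn per stratum, and strata are paired across dimensions by independent random permutations.

```python
import random

def latin_hypercube(n_samples, n_dims, rng=None):
    """Draw a Latin hypercube sample on the unit cube [0, 1)^n_dims.

    Each dimension is divided into n_samples equal strata; exactly one
    point falls in each stratum, and the strata are paired across
    dimensions by independent random permutations.
    """
    rng = rng or random.Random()
    columns = []
    for _ in range(n_dims):
        perm = list(range(n_samples))
        rng.shuffle(perm)  # random pairing of strata across dimensions
        # One uniform draw inside each stratum [k/n, (k+1)/n).
        columns.append([(k + rng.random()) / n_samples for k in perm])
    # Transpose: row i is the i-th sample point.
    return [tuple(col[i] for col in columns) for i in range(n_samples)]
```

Unlike plain random sampling, every one-dimensional projection of the result is guaranteed to cover all strata, which is what gives the variance reduction discussed in the survey.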
Genetic Programming with Transfer Learning for Urban Traffic Modelling and Prediction Intelligent transportation is a cornerstone of smart cities' infrastructure. Its practical realisation has been attempted by various technological means (ranging from machine learning to evolutionary approaches), all aimed at informing urban decision making (e.g., road layout design), in environmentally and financially sustainable ways. In this paper, we focus on traffic modelling and prediction, both central to intelligent transportation. We formulate this challenge as a symbolic regression problem and solve it using Genetic Programming, which we enhance with a lag operator and transfer learning. The resulting algorithm utilises knowledge collected from other road segments in order to predict vehicle flow through a junction where traffic data are not available. The experimental results obtained on the Darmstadt case study show that our approach is successful at producing accurate models without increasing training time.
Machine Learning-Based Traffic Prediction Models For Intelligent Transportation Systems Intelligent Transportation Systems (ITS) have attracted an increasing amount of attention in recent years. Thanks to the fast development of vehicular computing hardware, vehicular sensors, and citywide infrastructures, many impressive applications have been proposed under the topic of ITS, such as the Vehicular Cloud (VC), intelligent traffic controls, etc. These applications can bring us a safer, more efficient, and more enjoyable transportation environment. However, an accurate and efficient traffic flow prediction system is needed to achieve these applications, which creates an opportunity for applications under ITS to deal with the possible road situation in advance. To achieve better traffic flow prediction performance, many prediction methods have been proposed, such as mathematical modeling methods, parametric methods, and non-parametric methods. Among the non-parametric methods, one of the most prominent today is the Machine Learning-based (ML) method. It needs less prior knowledge about the relationships among different traffic patterns, places fewer restrictions on prediction tasks, and can better fit non-linear features in traffic data. There are several sub-classes under the ML method, such as regression models, kernel-based models, etc. For all these models, it is of vital importance to choose an appropriate type of ML model before building a prediction system. To do this, we should have a clear view of different ML methods; we investigate not only the accuracy of different models, but also the applicable scenario and sometimes the specific type of problem the model was designed for. Therefore, in this paper, we try to build a clear and thorough review of different ML models and analyze their advantages and disadvantages. To do so, the different ML models are categorized based on the ML theory they use. In each category, we first give a short introduction to the underlying ML theory and then focus on the specific changes made to the model when applied to different prediction problems. Meanwhile, we also compare across categories, which helps to form a macro overview of which types of ML methods are good at which types of prediction tasks according to their unique model features. Furthermore, we review the useful add-ons used in traffic prediction, and last but not least, we discuss the open challenges in the traffic prediction field.
Deep Reinforcement Learning for Autonomous Driving: A Survey With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in real world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, inverse reinforcement learning that are related but are not classical RL algorithms. The role of simulators in training agents, methods to validate, test and robustify existing solutions in RL are discussed.
Real-time system for monitoring driver vigilance This paper presents a nonintrusive prototype computer vision system for monitoring a driver's vigilance in real time. It is based on a hardware system for the real-time acquisition of a driver's images using an active IR illuminator and the software implementation for monitoring some visual behaviors that characterize a driver's level of vigilance. Six parameters are calculated: Percent eye closure (PERCLOS), eye closure duration, blink frequency, nodding frequency, face position, and fixed gaze. These parameters are combined using a fuzzy classifier to infer the level of inattentiveness of the driver. The use of multiple visual parameters and the fusion of these parameters yield a more robust and accurate inattention characterization than by using a single parameter. The system has been tested with different sequences recorded in night and day driving conditions in a motorway and with different users. Some experimental results and conclusions about the performance of the system are presented
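Of the six visual parameters listed above, PERCLOS and blink frequency are easy to compute once a per-frame eye state is available. The sketch below assumes a boolean eye-closed flag per video frame (in the actual system this would be derived from eyelid-closure measurements); it is an illustration of the measures, not the paper's implementation.

```python
def perclos(eye_closed, fps):
    """Compute PERCLOS and blink frequency over a window of frames.

    eye_closed: sequence of booleans, one per video frame (True = closed).
    fps: frames per second of the capture.
    Returns (perclos_ratio, blinks_per_minute).
    """
    n = len(eye_closed)
    if n == 0:
        return 0.0, 0.0
    # PERCLOS: fraction of frames in the window with eyes closed.
    closed_frames = sum(1 for c in eye_closed if c)
    # A blink is an open -> closed transition.
    blinks = sum(1 for prev, cur in zip(eye_closed, eye_closed[1:])
                 if cur and not prev)
    if eye_closed[0]:
        blinks += 1  # closure already in progress at window start
    minutes = n / fps / 60.0
    return closed_frames / n, blinks / minutes
```

In a full vigilance monitor these two values would be just two of the inputs fused by the fuzzy classifier described in the abstract.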
Detection and Evaluation of Driver Distraction Using Machine Learning and Fuzzy Logic In addition to vehicle control, drivers often perform secondary tasks that impede driving. Reduction of driver distraction is an important challenge for the safety of intelligent transportation systems. In this paper, a methodology for the detection and evaluation of driver distraction while performing secondary tasks is described and an appropriate hardware and a software environment is offered and studied. The system includes a model of normal driving, a subsystem for measuring the errors from the secondary tasks, and a module for total distraction evaluation. A new machine learning algorithm defines driver performance in lane keeping and speed maintenance on a specific road segment. To recognize the errors, a method is proposed, which compares normal driving parameters with ones obtained while conducting a secondary task. To evaluate distraction, an effective fuzzy logic algorithm is used. To verify the proposed approach, a case study with driver-in-the-loop experiments was carried out, in which participants performed the secondary task, namely chatting on a cell phone. The results presented in this research confirm its capability to detect and to precisely measure a level of abnormal driver performance.
Shared Steering Control Using Safe Envelopes for Obstacle Avoidance and Vehicle Stability. Steer-by-wire technology enables vehicle safety systems to share control with a driver through augmentation of the driver's steering commands. Advances in sensing technologies empower these systems further with real-time information about the surrounding environment. Leveraging these advancements in vehicle actuation and sensing, the authors present a shared control framework for obstacle avoidanc...
A Tutorial On Visual Servo Control This article provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines, our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed in detail. Since any visual servo system must be capable of tracking image features in a sequence of images, we also include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.
Geometric attacks on image watermarking systems Synchronization errors can lead to significant performance loss in image watermarking methods, as the geometric attacks in the Stirmark benchmark software show. The authors describe the most common types of geometric attacks and survey proposed solutions.
Molé: A scalable, user-generated WiFi positioning engine. We describe the design, implementation, and evaluation of Molé, a mobile organic localisation engine. Unlike previous work on crowd-sourced WiFi positioning, Molé uses a hierarchical name space. By not relying on a map and by being more strict than uninterpreted names for places, Molé aims for a more flexible and scalable point in the design space of localisation systems. Molé employs several new techniques, including a new statistical positioning algorithm to differentiate between neighbouring places, a motion detector to reduce update lag, and a scalable 'cloud'-based fingerprint distribution system. Molé's localisation algorithm, called Maximum Overlap (MAO), accounts for temporal variations in a place's fingerprint in a principled manner. It also allows for aggregation of fingerprints from many users and is compact enough for on-device storage. We show through end-to-end experiments in two deployments that MAO is significantly more accurate than state-of-the-art Bayesian-based localisers. We also show that non-experts can use Molé to quickly survey a building, enabling room-grained location-based services for themselves and others.
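To make the fingerprinting setting concrete, here is a generic nearest-fingerprint baseline of the kind MAO is compared against. It is not Molé's MAO algorithm (which models per-place signal distributions and their temporal variation); this sketch just matches an observed RSSI vector against stored per-place mean RSSI values over the access points in common.

```python
def nearest_place(observation, fingerprints):
    """Generic nearest-fingerprint WiFi localiser over mean RSSI values.

    observation: dict mapping access-point id -> observed RSSI (dBm).
    fingerprints: dict mapping place name -> {ap_id: mean RSSI}.
    Returns the place whose fingerprint is closest in mean squared
    RSSI difference over shared access points, or None if no place
    shares any access point with the observation.
    """
    best, best_d = None, float("inf")
    for place, fp in fingerprints.items():
        common = observation.keys() & fp.keys()
        if not common:
            continue  # no overlapping access points -> cannot compare
        d = sum((observation[a] - fp[a]) ** 2 for a in common) / len(common)
        if d < best_d:
            best, best_d = place, d
    return best
```

Room-grained systems like Molé improve on this baseline precisely because a single mean per access point ignores the temporal fingerprint variation the abstract highlights.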
Predicting Multi-step Citywide Passenger Demands Using Attention-based Neural Networks. Predicting passenger pickup/dropoff demands based on historical mobility trips has been of great importance towards better vehicle distribution for the emerging mobility-on-demand (MOD) services. Prior works focused on predicting next-step passenger demands at selected locations or hotspots. However, we argue that multi-step citywide passenger demands encapsulate both time-varying demand trends and global statuses, and hence are more beneficial to avoiding demand-service mismatching and developing effective vehicle distribution/scheduling strategies. In this paper, we propose an end-to-end deep neural network solution to the prediction task. We employ the encoder-decoder framework based on convolutional and ConvLSTM units to identify complex features that capture spatiotemporal influences and pickup-dropoff interactions on citywide passenger demands. A novel attention model is incorporated to emphasize the effects of latent citywide mobility regularities. We evaluate our proposed method using real-word mobility trips (taxis and bikes) and the experimental results show that our method achieves higher prediction accuracy than the adaptations of the state-of-the-art approaches.
Analyzing Software Rejuvenation Techniques in a Virtualized System: Service Provider and User Views Virtualization technology has promoted the fast development and deployment of cloud computing, and is now becoming an enabler of Internet of Everything. Virtual machine monitor (VMM), playing a critical role in a virtualized system, is software and hence it suffers from software aging after a long continuous running as well as software crashes due to elusive faults. Software rejuvenation techniques can be adopted to reduce the impact of software aging. Although there existed analytical model-based approaches for evaluating software rejuvenation techniques, none analyzed both application service (AS) availability and job completion time in a virtualized system with live virtual machine (VM) migration. This paper aims to quantitatively analyze software rejuvenation techniques from service provider and user views in a virtualized system deploying VMM reboot and live VM migration techniques for rejuvenation, under the condition that all the aging time, failure time, VMM fixing time and live VM migration time follow general distributions. We construct an analytical model by using a semi-Markov process (SMP) and derive formulas for calculating AS availability and job completion time. By analytical experiments, we can obtain the optimal migration trigger intervals for achieving the approximate maximum AS availability and the approximate minimum job completion time, and then service providers can make decisions for maximizing the benefits of service providers and users by adjusting parameter values.
Scores (score_0–score_13): 1.072107, 0.08, 0.066667, 0.066667, 0.04, 0.026667, 0.006667, 0.006667, 0.002222, 0, 0, 0, 0, 0
Leader-following control of high-order multi-agent systems under directed graphs: Pre-specified finite time approach. In this work we address the full state finite-time distributed consensus control problem for high-order multi-agent systems (MAS) under directed communication topology. Existing protocols for finite time consensus of MAS are normally based on the signum function or fractional power state feedback, and the finite convergence time is contingent upon the initial conditions and other design parameters. In this paper, by using regular local state feedback only, we present a distributed and smooth finite time control scheme to achieve leader–follower consensus under the communication topology containing a directed spanning tree. The proposed control consists of a finite time observer and a finite time compensator. The salient feature of the proposed method is that both the finite time intervals for observing leader states and for reaching consensus are independent of initial conditions and any other design parameters, thus can be explicitly pre-specified. Leader-following problem of MAS with both single and multiple leaders are studied.
Distributed Consensus Observer for Multi-Agent Systems With High-Order Integrator Dynamics This article presents a distributed consensus observer for multiagent systems with high-order integrator dynamics to estimate the leader state. Stability analysis is carefully studied to explore the convergence properties under undirected and directed communication, respectively. Using Lyapunov functions, fixed-time (resp. finite-time) stability is guaranteed for the undirected (resp. directed) interaction topology. Finally, simulation results are presented to demonstrate the theoretical findings.
Practical fixed-time consensus for integrator-type multi-agent systems: A time base generator approach A new practical fixed-time consensus framework for integrator-type multi-agent systems is developed by using a time base generator (TBG). For both leaderless and leader-following consensus, new TBG-based protocols are proposed for the multi-agent systems. The resulting settling time can be pre-designated without dependence on initial states. Different from some conventional fixed-time consensus strategies, where the magnitude of initial control input is large, the proposed TBG-based protocols significantly reduce the magnitude, which is demonstrated through comparison studies using illustrative examples.
Collective behavior of mobile agents with state-dependent interactions. In this paper, we develop a novel self-propelled particle model to describe the emergent behavior of a group of mobile agents. Each agent coordinates with its neighbors through a local force accounting for velocity alignment and collision avoidance. The interactions between agents are governed by path loss influence and state-dependent rules, which results in topology changes as well as discontinuities in the local forces. By using differential inclusion technique and algebraic graph theory, we show that collective behavior emerges while collisions between agents can be avoided, if the interaction topology is jointly connected. A trade-off between the path loss influence and connectivity condition to guarantee the collective behavior is discovered and discussed. Numerical simulations are given to validate the theoretical results.
Direct Adaptive Preassigned Finite-Time Control With Time-Delay and Quantized Input Using Neural Network. This paper investigates an adaptive finite-time control (FTC) problem for a class of strict-feedback nonlinear systems with both time-delays and quantized input from a new point of view. First, a new concept, called preassigned finite-time performance function (PFTF), is defined. Then, another novel notion, called practically preassigned finite-time stability (PPFTS), is introduced. With PFTF and PPFTS in hand, a novel sufficient condition of the FTC is given by using the neural network (NN) control and direct adaptive backstepping technique, which is different from the existing results. In addition, a modified barrier function is first introduced in this work. Moreover, this work is first to focus on the FTC for the situation that the time-delay and quantized input simultaneously exist in the nonlinear systems. Finally, simulation results are carried out to illustrate the effectiveness of the proposed scheme.
Dynamic Event-Triggered Scheduling and Platooning Control Co-Design for Automated Vehicles Over Vehicular Ad-Hoc Networks This paper deals with the co-design problem of event-triggered communication scheduling and platooning control over vehicular ad-hoc networks (VANETs) subject to finite communication resource. First, a unified model is presented to describe the coordinated platoon behavior of leader-follower vehicles in the simultaneous presence of unknown external disturbances and an unknown leader control input....
Fixed-Time Consensus Tracking for Multiagent Systems With High-Order Integrator Dynamics. This paper addresses the fixed-time leader-follower consensus problem for high-order integrator multiagent systems subject to matched external disturbances. A new cascade control structure, based on a fixed-time distributed observer, is developed to achieve the fixed-time consensus tracking control. A simulation example is included to show the efficacy and the performance of the proposed control structure with respect to different initial conditions.
A robust adaptive nonlinear control design An adaptive control design procedure for a class of nonlinear systems with both parametric uncertainty and unknown nonlinearities is presented. The unknown nonlinearities lie within some 'bounding functions', which are assumed to be partially known. The key assumption is that the uncertain terms satisfy a 'triangularity condition'. As illustrated by examples, the proposed design procedure expands the class of nonlinear systems for which global adaptive stabilization methods can be applied. The overall adaptive scheme is shown to guarantee global uniform ultimate boundedness.
A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms The interest in nonparametric statistical analysis has grown recently in the field of computational intelligence. In many experimental studies, the lack of the required properties for a proper application of parametric procedures–independence, normality, and homoscedasticity–yields to nonparametric ones the task of performing a rigorous comparison among algorithms.
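As a concrete instance of the nonparametric paired comparisons the tutorial advocates (it covers tests such as Wilcoxon and Friedman), here is an exact two-sided sign test, one of the simplest such procedures; this is an illustrative sketch, not code from the tutorial.

```python
from math import comb

def sign_test(scores_a, scores_b):
    """Two-sided exact sign test for paired per-problem scores.

    Ties are discarded; under the null hypothesis of no difference,
    the number of wins of A over B follows Binomial(n, 1/2).
    Returns (wins_a, wins_b, p_value).
    """
    wins_a = sum(1 for a, b in zip(scores_a, scores_b) if a > b)
    wins_b = sum(1 for a, b in zip(scores_a, scores_b) if a < b)
    n = wins_a + wins_b
    if n == 0:
        return 0, 0, 1.0
    k = max(wins_a, wins_b)
    # P(X >= k) for X ~ Binomial(n, 1/2), doubled for a two-sided test.
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return wins_a, wins_b, min(1.0, 2 * tail)
```

Because it only uses the sign of each paired difference, the test needs none of the normality or homoscedasticity assumptions that parametric comparisons require.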
Energy-Efficient Resource Allocation for Mobile-Edge Computation Offloading. Mobile-edge computation offloading (MECO) off-loads intensive mobile computation to clouds located at the edges of cellular networks. Thereby, MECO is envisioned as a promising technique for prolonging the battery lives and enhancing the computation capacities of mobiles. In this paper, we study resource allocation for a multiuser MECO system based on time-division multiple access (TDMA) and orthogonal frequency-division multiple access (OFDMA). First, for the TDMA MECO system with infinite or finite cloud computation capacity, the optimal resource allocation is formulated as a convex optimization problem for minimizing the weighted sum mobile energy consumption under the constraint on computation latency. The optimal policy is proved to have a threshold-based structure with respect to a derived offloading priority function, which yields priorities for users according to their channel gains and local computing energy consumption. As a result, users with priorities above and below a given threshold perform complete and minimum offloading, respectively. Moreover, for the cloud with finite capacity, a sub-optimal resource-allocation algorithm is proposed to reduce the computation complexity for computing the threshold. Next, we consider the OFDMA MECO system, for which the optimal resource allocation is formulated as a mixed-integer problem. To solve this challenging problem and characterize its policy structure, a low-complexity sub-optimal algorithm is proposed by transforming the OFDMA problem to its TDMA counterpart. The corresponding resource allocation is derived by defining an average offloading priority function and shown to have close-to-optimal performance in simulation.
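The threshold structure of the optimal TDMA policy described above can be sketched in a few lines. The priority function here is a hypothetical stand-in (the paper derives its own from channel gains and local computing energy); only the complete-versus-minimum offloading structure is taken from the abstract.

```python
def offload_priority(channel_gain, local_energy):
    """Hypothetical offloading priority: favour users with good channels
    and costly local computation.  Illustrative stand-in only -- the
    paper derives its own priority function."""
    return channel_gain * local_energy

def allocate_offloading(priorities, threshold):
    """Threshold-based offloading policy (structure from the abstract):
    users with priority above the threshold offload completely; the
    rest perform only the minimum required offloading.

    Returns a dict: user index -> "complete" or "minimum".
    """
    return {i: ("complete" if p > threshold else "minimum")
            for i, p in enumerate(priorities)}
```

In the paper, the threshold itself is chosen to satisfy the latency constraint while minimising weighted sum mobile energy; here it is simply a parameter.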
Fast, automatic and fine-grained tampered JPEG image detection via DCT coefficient analysis The quick advance in image/video editing techniques has enabled people to synthesize realistic images/videos conveniently. Some legal issues may arise when a tampered image cannot be distinguished from a real one by visual examination. In this paper, we focus on JPEG images and propose detecting tampered images by examining the double quantization effect hidden among the discrete cosine transform (DCT) coefficients. To our knowledge, our approach is the only one to date that can automatically locate the tampered region, while it has several additional advantages: fine-grained detection at the scale of 8x8 DCT blocks, insensitivity to different kinds of forgery methods (such as alpha matting and inpainting, in addition to simple image cut/paste), the ability to work without fully decompressing the JPEG images, and the fast speed. Experimental results on JPEG images are promising.
A systematic review of immersive virtual reality applications for higher education: Design elements, lessons learned, and research agenda. Researchers have explored the benefits and applications of virtual reality (VR) in different scenarios. VR possesses much potential and its application in education has seen much research interest lately. However, little systematic work currently exists on how researchers have applied immersive VR for higher education purposes that considers the usage of both high-end and budget head-mounted displays (HMDs). Hence, we propose using systematic mapping to identify design elements of existing research dedicated to the application of VR in higher education. The reviewed articles were acquired by extracting key information from documents indexed in four scientific digital libraries, which were filtered systematically using exclusion, inclusion, semi-automatic, and manual methods. Our review emphasizes three key points: the current domain structure in terms of the learning contents, the VR design elements, and the learning theories, as a foundation for successful VR-based learning. The mapping was conducted between application domains and learning contents and between design elements and learning contents. Our analysis has uncovered several gaps in the application of VR in the higher education sphere—for instance, learning theories were not often considered in VR application development to assist and guide toward learning outcomes. Furthermore, the evaluation of educational VR applications has primarily focused on usability of the VR apps instead of learning outcomes, and immersive VR has mostly been a part of experimental and development work rather than being applied regularly in actual teaching. Nevertheless, VR seems to be a promising sphere, as this study identifies 18 application domains, indicating a better reception of this technology in many disciplines. The identified gaps point toward unexplored regions of VR design for education, which could motivate future work in the field.
A ROI-based high capacity reversible data hiding scheme with contrast enhancement for medical images. In this paper, we investigate the secure archiving of medical images stored on semi-trusted cloud servers, and focus on addressing the complicated and challenging integrity-control and privacy-preservation issues. With the intention of protecting the medical images stored on a semi-trusted server, a novel ROI-based high capacity reversible data hiding (RDH) scheme with contrast enhancement is proposed in this paper. The proposed method aims at effectively improving the quality of the medical images while reversibly embedding a high capacity of data. Therefore, the proposed method first adopts the "adaptive threshold detector" (ATD) segmentation algorithm to automatically separate the "region of interest" (ROI) and "region of non-interest" (NROI), then enhances the contrast of the ROI by stretching the grayscale and embeds data into the peak bins of the stretched histogram without extending the histogram bins. Lastly, the rest of the required large amount of data is embedded into the NROI regardless of its quality. In addition, the proposed method records the edge location of the segmentation instead of recording the locations of overflow and underflow. Experiments show that the proposed method noticeably improves the quality of medical images at both low and high embedding rates when compared with other contrast-based RDH methods.
A Muscle Synergy-Driven ANFIS Approach to Predict Continuous Knee Joint Movement Continuous motion prediction plays a significant role in realizing seamless control of robotic exoskeletons and orthoses. Explicitly modeling the relationship between coordinated muscle activations from surface electromyography (sEMG) and human limb movements provides a new path of sEMG-based human–machine interface. Instead of the numeric features from individual channels, we propose a muscle synergy-driven adaptive network-based fuzzy inference system (ANFIS) approach to predict continuous knee joint movements, in which muscle synergy reflects the motor control information to coordinate muscle activations for performing movements. Four human subjects participated in the experiment while walking at five types of speed: 2.0 km/h, 2.5 km/h, 3.0 km/h, 3.5 km/h, and 4.0 km/h. The study finds that the acquired muscle synergies associate the muscle activations with human joint movements in a low-dimensional space and have been further utilized for predicting knee joint angles. The proposed approach outperformed commonly used numeric features from individual sEMG channels with an average correlation coefficient of 0.92 ± 0.05. Results suggest that the correlation between muscle activations and knee joint movements is captured by the muscle synergy-driven ANFIS model and can be utilized for the estimation of continuous joint angles.
Scores (score_0–score_13): 1.045556, 0.035556, 0.035556, 0.033333, 0.033333, 0.033333, 0.013519, 0.007556, 0.000773, 0, 0, 0, 0, 0
Rich Models for Steganalysis of Digital Images We describe a novel general strategy for building steganography detectors for digital images. The process starts with assembling a rich model of the noise component as a union of many diverse submodels formed by joint distributions of neighboring samples from quantized image noise residuals obtained using linear and nonlinear high-pass filters. In contrast to previous approaches, we make the model assembly a part of the training process driven by samples drawn from the corresponding cover- and stego-sources. Ensemble classifiers are used to assemble the model as well as the final steganalyzer due to their low computational complexity and ability to efficiently work with high-dimensional feature spaces and large training sets. We demonstrate the proposed framework on three steganographic algorithms designed to hide messages in images represented in the spatial domain: HUGO, edge-adaptive algorithm by Luo, and optimally coded ternary ±1 embedding. For each algorithm, we apply a simple submodel-selection technique to increase the detection accuracy per model dimensionality and show how the detection saturates with increasing complexity of the rich model. By observing the differences between how different submodels engage in detection, an interesting interplay between the embedding and detection is revealed. Steganalysis built around rich image models combined with ensemble classifiers is a promising direction towards automatizing steganalysis for a wide spectrum of steganographic schemes.
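The building block of these rich models, a quantized and truncated high-pass noise residual, can be illustrated on a toy scale. The sketch below uses a single first-order horizontal filter and a marginal histogram; the actual rich models use many linear and nonlinear filters and joint co-occurrences of neighboring residuals.

```python
def residual_features(img, q=1, T=2):
    """Toy analogue of one rich-model submodel: first-order horizontal
    noise residuals, quantized by q and truncated to [-T, T], summarised
    as a normalised histogram.

    img: 2-D list of pixel intensities.
    """
    hist = {v: 0 for v in range(-T, T + 1)}
    for row in img:
        for x, nxt in zip(row, row[1:]):
            r = nxt - x                         # high-pass residual
            r = max(-T, min(T, round(r / q)))   # quantize and truncate
            hist[r] += 1
    total = sum(hist.values()) or 1
    return {v: c / total for v, c in hist.items()}
```

Truncation keeps the feature space small (here 2T+1 bins), which is what lets the full framework combine thousands of such submodels and still train ensemble classifiers efficiently.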
A deep learning approach to patch-based image inpainting forensics. Although image inpainting is now an effective image editing technique, limited work has been done for inpainting forensics. The main drawbacks of the conventional inpainting forensics methods lie in the difficulties of inpainting feature extraction and the very high computational cost. In this paper, we propose a novel approach based on a convolutional neural network (CNN) to detect patch-based inpainting operations. Specifically, the CNN is built following the encoder–decoder network structure, which allows us to predict the inpainting probability for each pixel in an image. To guide the CNN to automatically learn the inpainting features, a label matrix is generated for the CNN training by assigning a class label to each pixel of an image, and the designed weighted cross-entropy serves as the loss function. Together, these strongly supervise the CNN to capture the manipulation information rather than image content features. With the established CNN, inpainting forensics needs neither separate feature extraction and classifier design nor any postprocessing, as conventional forensics methods do; all stages are combined into a single framework and optimized simultaneously. Experimental results show that the proposed method achieves superior performance in terms of true positive rate, false positive rate and running time, as compared with state-of-the-art methods for inpainting forensics, and is very robust against JPEG compression and scaling manipulations.
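The weighted cross-entropy used to counter the imbalance between the sparse inpainted pixels and the background can be sketched as follows; the weights here are illustrative placeholders, not the paper's values:

```python
import numpy as np

def weighted_cross_entropy(prob, label, w_pos=5.0, w_neg=1.0, eps=1e-7):
    """Pixel-wise weighted binary cross-entropy: `prob` is the predicted
    inpainting probability map, `label` the ground-truth mask; the rarer
    inpainted (positive) class gets a larger weight to counter imbalance."""
    prob = np.clip(prob, eps, 1 - eps)
    loss = -(w_pos * label * np.log(prob) + w_neg * (1 - label) * np.log(1 - prob))
    return loss.mean()
```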
Deep Matching and Validation Network - An End-to-End Solution to Constrained Image Splicing Localization and Detection. Image splicing is a very common image manipulation technique that is sometimes used for malicious purposes. A splicing detection and localization algorithm usually takes an input image and produces a binary decision indicating whether the input image has been manipulated, and also a segmentation mask that corresponds to the spliced region. Most existing splicing detection and localization pipelines suffer from two main shortcomings: 1) they use handcrafted features that are not robust against subsequent processing (e.g., compression), and 2) each stage of the pipeline is usually optimized independently. In this paper we extend the formulation of the underlying splicing problem to consider two input images, a query image and a potential donor image. Here the task is to estimate the probability that the donor image has been used to splice the query image, and obtain the splicing masks for both the query and donor images. We introduce a novel deep convolutional neural network architecture, called Deep Matching and Validation Network (DMVN), which simultaneously localizes and detects image splicing. The proposed approach does not depend on handcrafted features and uses raw input images to create deep learned representations. Furthermore, the DMVN is end-to-end optimized to produce the probability estimates and the segmentation masks. Our extensive experiments demonstrate that this approach outperforms state-of-the-art splicing detection methods by a large margin in terms of both AUC score and speed.
Movie2Comics: Towards a Lively Video Content Presentation As a type of artwork, comics is prevalent and popular around the world. However, despite the availability of assistive software and tools, the creation of comics is still a labor-intensive and time-consuming process. This paper proposes a scheme that is able to automatically turn a movie clip into comics. Two principles are followed in the scheme: 1) optimizing the information preservation of the movie; and 2) generating outputs following the rules and the styles of comics. The scheme mainly contains three components: script-face mapping, descriptive picture extraction, and cartoonization. The script-face mapping utilizes face tracking and recognition techniques to accomplish the mapping between characters' faces and their scripts. The descriptive picture extraction then generates a sequence of frames for presentation. Finally, the cartoonization is accomplished via three steps: panel scaling, stylization, and comics layout design. Experiments are conducted on a set of movie clips and the results have demonstrated the usefulness and the effectiveness of the scheme.
Deep High-Resolution Representation Learning for Visual Recognition High-resolution representations are essential for position-sensitive vision problems, such as human pose estimation, semantic segmentation, and object detection. Existing state-of-the-art frameworks first encode the input image as a low-resolution representation through a subnetwork that is formed by connecting high-to-low resolution convolutions in series (e.g., ResNet, VGGNet), ...
Multi-Task SE-Network for Image Splicing Localization Image splicing can be easily used for illegal activities such as falsifying propaganda for political purposes and reporting false news, which may result in negative impacts on society. Hence, it is highly required to detect spliced images and localize the spliced regions. In this work, we propose a multi-task squeeze and excitation network (SE-Network) for splicing localization. The proposed network consists of two streams, namely label mask stream and edge-guided stream, both of which adopt convolutional encoder-decoder architecture. The information from the edge-guided stream is transmitted to the label mask stream for enhancing the discrimination of features between the spliced and host regions. This work has three main contributions. First, image edges, along with label masks and mask edges, are exploited to supply more comprehensive supervision for the localization of spliced regions. Second, the low-level feature maps extracted from shallow layers are fused with the high-level feature maps from deep layers to provide more reliable feature for splicing localization. Finally, several squeeze and excitation attention modules are incorporated into the network to recalibrate the fused features to enhance the feature expression. Extensive experiments show that the proposed multi-task SE-Network outperforms existing splicing localization methods evidently on two synthetic splicing datasets and four benchmark splicing datasets.
Blurred Image Splicing Localization by Exposing Blur Type Inconsistency In a tampered blurred image generated by splicing, the spliced region and the original image may have different blur types. Splicing localization in this image is a challenging problem when a forger uses postprocessing operations as antiforensics to remove the splicing trace anomalies by resizing the tampered image or blurring the spliced region boundary. Such operations remove the artifacts, making detection of splicing difficult. In this paper, we overcome this problem by proposing a novel framework for blurred image splicing localization based on partial blur type inconsistency. In this framework, after block-based image partitioning, a local blur type detection feature is extracted from the estimated local blur kernels. The image blocks are classified into out-of-focus or motion blur based on this feature to generate invariant blur type regions. Finally, a fine splicing localization is applied to increase the precision of region boundaries. We can use the blur type differences of the regions to trace the inconsistency for splicing localization. Our experimental results show the efficiency of the proposed method in the detection and classification of the out-of-focus and motion blur types. For splicing localization, the results demonstrate that our method works well in detecting the inconsistency in the partial blur types of tampered images. However, our method can be applied to blurred images only.
Image tamper detection based on demosaicing artifacts In this paper, we introduce tamper detection techniques based on artifacts created by Color Filter Array (CFA) processing in most digital cameras. The techniques are based on computing a single feature and a simple threshold based classifier. The efficacy of the approach was tested over thousands of authentic, tampered, and computer generated images. Experimental results demonstrate reasonably low error rates.
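The idea of a single CFA feature with a threshold classifier can be illustrated as follows: in a Bayer-demosaiced image, green samples on interpolated lattice sites are far more predictable from their four neighbors than captured samples, so the ratio of interpolation errors on the two checkerboard lattices deviates from 1. This is a simplified stand-in, not the paper's exact feature, and the thresholds below are made up:

```python
import numpy as np

def cfa_feature(green):
    """Ratio of mean 4-neighbor interpolation error on the two checkerboard
    lattices of the green channel. Demosaiced images show a strongly
    asymmetric ratio; a ratio near 1 suggests the CFA periodicity is absent
    (e.g., tampered or computer-generated content)."""
    g = green.astype(float)
    pred = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]) / 4.0
    err = np.abs(g[1:-1, 1:-1] - pred)
    rows, cols = np.indices(err.shape)
    lattice = (rows + cols) % 2 == 0
    return err[lattice].mean() / (err[~lattice].mean() + 1e-9)

def is_tampered(green, lo=0.7, hi=1.4):
    """Simple threshold classifier on the single feature (illustrative
    thresholds): a near-1 ratio, i.e. no CFA periodicity, is suspicious."""
    return lo < cfa_feature(green) < hi
```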
NIPS 2016 Tutorial: Generative Adversarial Networks. This report summarizes the tutorial presented by the author at NIPS 2016 on generative adversarial networks (GANs). The tutorial describes: (1) Why generative modeling is a topic worth studying, (2) how generative models work, and how GANs compare to other generative models, (3) the details of how GANs work, (4) research frontiers in GANs, and (5) state-of-the-art image models that combine GANs with other methods. Finally, the tutorial contains three exercises for readers to complete, and the solutions to these exercises.
Hierarchical mesh segmentation based on fitting primitives In this paper, we describe a hierarchical face clustering algorithm for triangle meshes based on fitting primitives belonging to an arbitrary set. The method proposed is completely automatic, and generates a binary tree of clusters, each of which is fitted by one of the primitives employed. Initially, each triangle represents a single cluster; at every iteration, all the pairs of adjacent clusters are considered, and the pair that is best approximated by one of the primitives is merged into a new single cluster. The approximation error is evaluated using the same metric for all the primitives, so that it makes sense to choose which is the most suitable primitive to approximate the set of triangles in a cluster. Based on this approach, we have implemented a prototype that uses planes, spheres and cylinders, and found that for meshes made of 100 K faces, the whole binary tree of clusters can be built in about 8 s on a standard PC. The framework described here has natural application in reverse engineering processes, but it has also been tested for surface denoising, feature recovery and character skinning.
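The per-cluster approximation error for the plane primitive reduces to a least-squares fit; a minimal sketch via SVD (sphere and cylinder fits would report the same kind of RMS point-to-primitive distance, which is what makes the metric comparable across primitives):

```python
import numpy as np

def plane_fit_error(points):
    """Best-fit plane for a cluster of points: the plane passes through the
    centroid with normal given by the right singular vector of the smallest
    singular value; the error is the RMS point-to-plane distance."""
    centered = points - points.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                       # direction of least variance
    dists = centered @ normal             # signed point-to-plane distances
    return np.sqrt(np.mean(dists ** 2)), normal
```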
Development and Control of a ‘Soft-Actuated’ Exoskeleton for Use in Physiotherapy and Training Full or partial loss of function in the upper limb is increasingly common due to sports injuries, occupational injuries, spinal cord injuries, and strokes. Typically, treatment for these conditions relies on manipulative physiotherapy procedures which are extremely labour intensive. Although mechanical assistive devices exist for limbs, this is rare for the upper body. In this paper we describe the construction and testing of a seven-degree-of-motion prototype upper arm training/rehabilitation (exoskeleton) system. The total weight of the uncompensated orthosis is less than 2 kg. This low mass is primarily due to the use of a new range of pneumatic Muscle Actuators (pMA) as the power source for the system. This type of actuator, which also has an excellent power/weight ratio, meets the need for safety, simplicity and lightness. The work presented shows how the system takes advantage of the inherent controllable compliance to produce a unit that is extremely powerful, providing a wide range of functionality (motion and forces over an extended range) in a manner that has high safety integrity for the patient. A training control scheme is introduced which is used to control the orthosis when used as an exercise facility. Results demonstrate the potential of the device as an upper limb training, rehabilitation and power assist (exoskeleton) system.
Energy-Efficient Optimization for Wireless Information and Power Transfer in Large-Scale MIMO Systems Employing Energy Beamforming In this letter, we consider a large-scale multiple-input multiple-output (MIMO) system where the receiver should harvest energy from the transmitter by wireless power transfer to support its wireless information transmission. The energy beamforming in the large-scale MIMO system is utilized to address the challenging problem of long-distance wireless power transfer. Furthermore, considering the limitation of the power in such a system, this letter focuses on the maximization of the energy efficiency of information transmission (bit per Joule) while satisfying the quality-of-service (QoS) requirement, i.e. delay constraint, by jointly optimizing transfer duration and transmit power. By solving the optimization problem, we derive an energy-efficient resource allocation scheme. Numerical results validate the effectiveness of the proposed scheme.
Orientation-aware RFID tracking with centimeter-level accuracy. RFID tracking has attracted a lot of research effort in recent years. Most of the existing approaches, however, adopt an orientation-oblivious model. When tracking a target whose orientation changes, those approaches suffer from serious accuracy degradation. In order to achieve target tracking with pervasive applicability in various scenarios, we in this paper propose OmniTrack, an orientation-aware RFID tracking approach. Our study discovers the linear relationship between the tag orientation and the phase change of the backscattered signals. Based on this finding, we propose an orientation-aware phase model to explicitly quantify the respective impact of the read-tag distance and the tag's orientation. OmniTrack addresses practical challenges in tracking the location and orientation of a mobile tag. Our experimental results demonstrate that OmniTrack achieves centimeter-level location accuracy and has significant advantages in tracking targets with varying orientations, compared to the state-of-the-art approaches.
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education is highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets, to compensate for robot’s limitations, however, this study applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. Robot’s behavioral style (social or neutral) hardly differed overall, however, seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
Scores: 1.019237, 0.016667, 0.016667, 0.016667, 0.016667, 0.016667, 0.010046, 0.004782, 0.000034, 0, 0, 0, 0, 0
On the ratio of optimal integral and fractional covers It is shown that the ratio of optimal integral and fractional covers of a hypergraph does not exceed 1 + log d , where d is the maximum degree. This theorem may replace probabilistic methods in certain circumstances. Several applications are shown.
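The 1 + log d bound is matched constructively by the greedy cover, which repeatedly picks the set covering the most still-uncovered elements; a minimal sketch:

```python
def greedy_cover(universe, sets):
    """Greedy integral cover: repeatedly pick the set covering the most
    uncovered elements. Its size is within roughly 1 + ln(d) of the optimal
    fractional cover, where d is the maximum degree."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(uncovered & sets[i]))
        if not uncovered & sets[best]:
            raise ValueError("instance is not coverable")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen
```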
A sub-constant error-probability low-degree test, and a sub-constant error-probability PCP characterization of NP
Minimum interference routing of bandwidth guaranteed tunnels with MPLS traffic engineering applications This paper presents new algorithms for dynamic routing of bandwidth guaranteed tunnels, where tunnel routing requests arrive one by one and there is no a priori knowledge regarding future requests. This problem is motivated by the service provider needs for fast deployment of bandwidth guaranteed services. Offline routing algorithms cannot be used since they require a priori knowledge of all tunnel requests that are to be routed. Instead, on-line algorithms that handle requests arriving one by one and that satisfy as many potential future demands as possible are needed. The newly developed algorithms are on-line algorithms and are based on the idea that a newly routed tunnel must follow a route that does not “interfere too much” with a route that may be critical to satisfy a future demand. We show that this problem is NP-hard. We then develop path selection heuristics which are based on the idea of deferred loading of certain “critical” links. These critical links are identified by the algorithm as links that, if heavily loaded, would make it impossible to satisfy future demands between certain ingress-egress pairs. Like min-hop routing, the presented algorithm uses link-state information and some auxiliary capacity information for path selection. Unlike previous algorithms, the proposed algorithm exploits any available knowledge of the network ingress-egress points of potential future demands, even though the demands themselves are unknown. If all nodes are ingress-egress nodes, the algorithm can still be used, particularly to reduce the rejection rate of requests between a specified subset of important ingress-egress pairs. The algorithm performs well in comparison to previously proposed algorithms on several metrics like the number of rejected demands and successful rerouting of demands upon link failure
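The deferred-loading idea can be caricatured as shortest-path routing under weights that penalize scarce residual capacity. This sketch uses 1/residual-capacity as a crude proxy for link criticality; the paper's actual algorithm identifies critical links per ingress-egress pair via maxflow computations, which is omitted here:

```python
import heapq

def route(graph, residual, src, dst, demand):
    """Dijkstra with weight = 1 / residual capacity, skipping links that
    cannot carry the demand, so scarce links are loaded last. Returns the
    node list of the chosen path, or None if the request must be rejected."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                              # stale queue entry
        for v in graph.get(u, ()):
            cap = residual[(u, v)]
            if cap < demand:
                continue                          # link cannot host the tunnel
            nd = d + 1.0 / cap                    # prefer lightly loaded links
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None                               # request rejected
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]
```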
Pareto-optimal resilient controller placement in SDN-based core networks With the introduction of Software Defined Networking (SDN), the concept of an external and optionally centralized network control plane, i.e. controller, is drawing the attention of researchers and industry. A particularly important task in the SDN context is the placement of such external resources in the network. In this paper, we discuss important aspects of the controller placement problem with a focus on SDN-based core networks, including different types of resilience and failure tolerance. When several performance and resilience metrics are considered, there is usually no single best controller placement solution, but a trade-off between these metrics. We introduce our framework for resilient Pareto-based Optimal COntroller-placement (POCO) that provides the operator of a network with all Pareto-optimal placements. The ideas and mechanisms are illustrated using the Internet2 OS3E topology and further evaluated on more than 140 topologies of the Topology Zoo. In particular, our findings reveal that for most of the topologies more than 20% of all nodes need to be controllers to assure a continuous connection of all nodes to one of the controllers in any arbitrary double link or node failure scenario.
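Returning all Pareto-optimal placements, given a vector of metrics to minimize for each candidate placement, reduces to a dominance filter; a minimal sketch (the metric names in the test are illustrative, standing in for e.g. worst-case latency and imbalance):

```python
def pareto_optimal(placements):
    """Keep every placement not dominated by another: `placements` maps a
    placement name to a tuple of metrics to MINIMIZE. A placement is
    dominated if some other placement is at least as good on all metrics
    and strictly better on at least one."""
    front = []
    for name, m in placements.items():
        dominated = any(
            all(o[i] <= m[i] for i in range(len(m))) and o != m
            for other, o in placements.items()
            if other != name
        )
        if not dominated:
            front.append(name)
    return front
```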
On Placement of Hypervisors and Controllers in Virtualized Software Defined Network. In a virtualized software defined network (VSDN), PACKET_IN messages of switches must pass through the hypervisor in order to reach the corresponding controller. Hence, the latency experienced by a network element is the sum of latency from network element to hypervisor and the latency from the hypervisor to the controller corresponding to the network element. Therefore, the locations of both the ...
Delay-Aware Dynamic Hypervisor Placement and Reconfiguration in Virtualized SDN Software defined networking (SDN) provides different functionality and resource sharing capabilities with the aid of virtualization. In virtualized SDN, multiple SDN tenants can bring their controllers and different functions in the same physical substrate. The SDN hypervisor provides the link between the physical network substrate and its SDN tenants. Distributed hypervisor architecture can handl...
Probabilistic region failure-aware data center network and content placement. Data center network (DCN) and content placement with the consideration of potential large-scale region failure is critical to minimize the DCN loss and disruptions under such catastrophic scenario. This paper considers the optimal placement of DCN and content for DCN failure probability minimization against a region failure. Given a network for DCN placement, a general probabilistic region failure model is adopted to capture the key features of a region failure and to determine the failure probability of a node/link in the network under the region failure. We then propose a general grid partition-based scheme to flexibly define the global nonuniform distribution of potential region failure in terms of its occurring probability and intensity. Such grid partition scheme also helps us to evaluate the vulnerability of a given network under a region failure and thus to create a \"vulnerability map\" for DCN and content placement in the network. With the help of the \"vulnerability map\", we further develop an integer linear program (ILP)-based theoretical framework to identify the optimal placement of DCN and content, which leads to the minimum DCN failure probability against a region failure. A heuristic is also suggested to make the overall placement problem more scalable for large-scale networks. Finally, an example and extensive numerical results are provided to illustrate the proposed DCN and content placement.
Measuring And Mitigating Unintended Bias In Text Classification We introduce and illustrate a new approach to measuring and mitigating unintended bias in machine learning models. Our definition of unintended bias is parameterized by a test set and a subset of input features. We illustrate how this can be used to evaluate text classifiers using a synthetic test set and a public corpus of comments annotated for toxicity from Wikipedia Talk pages. We also demonstrate how imbalances in training data can lead to unintended bias in the resulting models, and therefore potentially unfair applications. We use a set of common demographic identity terms as the subset of input features on which we measure bias. This technique permits analysis in the common scenario where demographic information on authors and readers is unavailable, so that bias mitigation must focus on the content of the text itself. The mitigation method we introduce is an unsupervised approach based on balancing the training dataset. We demonstrate that this approach reduces the unintended bias without compromising overall model quality.
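The per-identity-term evaluation can be sketched with the Mann-Whitney AUC over a synthetic templated test set: a term whose AUC falls well below the others signals unintended bias against sentences mentioning that term. The scores and term names below are illustrative, not from the paper:

```python
def auc(neg, pos):
    """Mann-Whitney AUC: probability a positive example outscores a negative,
    counting ties as half a win."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def per_term_auc(examples):
    """`examples` maps identity term -> (toxic_scores, nontoxic_scores) from
    a synthetic templated test set; a term with a markedly lower AUC than the
    others indicates unintended bias toward that term."""
    return {term: auc(nontox, tox) for term, (tox, nontox) in examples.items()}
```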
Completely derandomized self-adaptation in evolution strategies. This paper puts forward two useful methods for self-adaptation of the mutation distribution - the concepts of derandomization and cumulation. Principle shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary, normal mutation distributions is equivalent to applying a general, linear problem encoding. The underlying objective of mutative strategy parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigorously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation - utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions usually a speed up factor of several orders of magnitude is observed. On moderately mis-scaled functions a speed up factor of three to ten can be expected.
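The cumulation idea, adapting the global step size from an evolution path rather than from single search steps, can be sketched with a stripped-down (1,λ)-ES using cumulative step-size adaptation. The full CMA additionally adapts the covariance matrix, which is omitted here, and the constants below are simplified choices, not the paper's recommended settings:

```python
import numpy as np

def es_cumulation(f, x0, sigma0=0.3, lam=10, iters=400, seed=0):
    """(1,lambda)-ES with cumulative step-size adaptation: the selected steps
    are accumulated into an evolution path, and the step size grows when the
    path is longer than expected under random selection, shrinks otherwise."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    sigma = sigma0
    path = np.zeros(n)
    c = 1.0 / np.sqrt(n)                          # cumulation time constant
    d = np.sqrt(n)                                # step-size damping
    chi_n = np.sqrt(n) * (1 - 1.0 / (4 * n))      # approx E||N(0, I_n)||
    for _ in range(iters):
        z = rng.standard_normal((lam, n))
        cand = x + sigma * z
        best = min(range(lam), key=lambda i: f(cand[i]))
        x = cand[best]
        path = (1 - c) * path + np.sqrt(c * (2 - c)) * z[best]
        sigma *= np.exp((np.linalg.norm(path) / chi_n - 1) / d)
    return x, sigma
```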
Factorizing personalized Markov chains for next-basket recommendation Recommender systems are an important component of many websites. Two of the most popular approaches are based on matrix factorization (MF) and Markov chains (MC). MF methods learn the general taste of a user by factorizing the matrix over observed user-item preferences. On the other hand, MC methods model sequential behavior by learning a transition graph over items that is used to predict the next action based on the recent actions of a user. In this paper, we present a method bringing both approaches together. Our method is based on personalized transition graphs over underlying Markov chains. That means for each user an own transition matrix is learned - thus in total the method uses a transition cube. As the observations for estimating the transitions are usually very limited, our method factorizes the transition cube with a pairwise interaction model which is a special case of the Tucker Decomposition. We show that our factorized personalized MC (FPMC) model subsumes both a common Markov chain and the normal matrix factorization model. For learning the model parameters, we introduce an adaption of the Bayesian Personalized Ranking (BPR) framework for sequential basket data. Empirically, we show that our FPMC model outperforms both the common matrix factorization and the unpersonalized MC model both learned with and without factorization.
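The FPMC score of an item combines the matrix-factorization term with the factorized transition term averaged over the items in the user's last basket; a minimal scoring sketch, assuming the factor matrices are already given (e.g., from BPR training), with illustrative names:

```python
import numpy as np

def fpmc_score(user, item, basket, V_ui, V_iu, V_il, V_li):
    """FPMC score of `item` for `user` given the previous `basket`:
    the MF term <V_ui[user], V_iu[item]> plus the factorized personalized
    Markov-chain term, averaged over the last basket's items."""
    mf = V_ui[user] @ V_iu[item]
    fmc = np.mean([V_il[item] @ V_li[l] for l in basket])
    return mf + fmc
```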
A robust adaptive nonlinear control design An adaptive control design procedure for a class of nonlinear systems with both parametric uncertainty and unknown nonlinearities is presented. The unknown nonlinearities lie within some 'bounding functions', which are assumed to be partially known. The key assumption is that the uncertain terms satisfy a 'triangularity condition'. As illustrated by examples, the proposed design procedure expands the class of nonlinear systems for which global adaptive stabilization methods can be applied. The overall adaptive scheme is shown to guarantee global uniform ultimate boundedness.
Internet of Things: A Survey on Enabling Technologies, Protocols and Applications This paper provides an overview of the Internet of Things (IoT) with emphasis on enabling technologies, protocols and application issues. The IoT is enabled by the latest developments in RFID, smart sensors, communication technologies and Internet protocols. The basic premise is to have smart sensors collaborate directly without human involvement to deliver a new class of applications. The current revolution in Internet, mobile and machine-to-machine (M2M) technologies can be seen as the first phase of the IoT. In the coming years, the IoT is expected to bridge diverse technologies to enable new applications by connecting physical objects together in support of intelligent decision making. This paper starts by providing a horizontal overview of the IoT. Then, we give an overview of some technical details that pertain to the IoT enabling technologies, protocols and applications. Compared to other survey papers in the field, our objective is to provide a more thorough summary of the most relevant protocols and application issues to enable researchers and application developers to get up to speed quickly on how the different protocols fit together to deliver desired functionalities without having to go through RFCs and the standards specifications. We also provide an overview of some of the key IoT challenges presented in the recent literature and provide a summary of related research work. Moreover, we explore the relation between the IoT and other emerging technologies including big data analytics and cloud and fog computing. We also present the need for better horizontal integration among IoT services. Finally, we present detailed service use-cases to illustrate how the different protocols presented in the paper fit together to deliver desired IoT services.
Adaptive fuzzy tracking control for switched uncertain strict-feedback nonlinear systems. • Adaptive tracking control for switched strict-feedback nonlinear systems is proposed. • The generalized fuzzy hyperbolic model is used to approximate nonlinear functions. • The designed controller has fewer design parameters compared with existing methods.
Convert Harm Into Benefit: A Coordination-Learning Based Dynamic Spectrum Anti-Jamming Approach This paper mainly investigates the multi-user anti-jamming spectrum access problem. Using the idea of “converting harm into benefit,” the malicious jamming signals projected by the enemy are utilized by the users as the coordination signals to guide spectrum coordination. An “internal coordination-external confrontation” multi-user anti-jamming access game model is constructed, and the existence of Nash equilibrium (NE) as well as correlated equilibrium (CE) is demonstrated. A coordination-learning based anti-jamming spectrum access algorithm (CLASA) is designed to achieve the CE of the game. Simulation results show the convergence, and effectiveness of the proposed CLASA algorithm, and indicate that our approach can help users confront the malicious jammer, and coordinate internal spectrum access simultaneously without information exchange. Last but not least, the fairness of the proposed approach under different jamming attack patterns is analyzed, which illustrates that this approach provides fair anti-jamming spectrum access opportunities under complicated jamming pattern.
Scores: 1.072709, 0.069064, 0.066667, 0.066667, 0.066667, 0.066667, 0.002667, 0.000002, 0, 0, 0, 0, 0, 0
Impact of Data Loss for Prediction of Traffic Flow on an Urban Road Using Neural Networks The deployment of intelligent transport systems requires efficient means of assessing the traffic situation. This involves gathering real traffic data from the road network and predicting the evolution of traffic parameters, in many cases based on incomplete or false data from vehicle detectors. Traffic flows in the network follow spatiotemporal patterns and this characteristic is used to suppress the impact of missing or erroneous data. The application of multilayer perceptrons and deep learning networks using autoencoders for the prediction task is evaluated. Prediction sensitivity to false data is estimated using traffic data from an urban traffic network.
Knowledge harvesting in the big-data era The proliferation of knowledge-sharing communities such as Wikipedia and the progress in scalable information extraction from Web and text sources have enabled the automatic construction of very large knowledge bases. Endeavors of this kind include projects such as DBpedia, Freebase, KnowItAll, ReadTheWeb, and YAGO. These projects provide automatically constructed knowledge bases of facts about named entities, their semantic classes, and their mutual relationships. They contain millions of entities and hundreds of millions of facts about them. Such world knowledge in turn enables cognitive applications and knowledge-centric services like disambiguating natural-language text, semantic search for entities and relations in Web and enterprise data, and entity-oriented analytics over unstructured contents. Prominent examples of how knowledge bases can be harnessed include the Google Knowledge Graph and the IBM Watson question answering system. This tutorial presents state-of-the-art methods, recent advances, research opportunities, and open challenges along this avenue of knowledge harvesting and its applications. Particular emphasis will be on the twofold role of knowledge bases for big-data analytics: using scalable distributed algorithms for harvesting knowledge from Web and text sources, and leveraging entity-centric knowledge for deeper interpretation of and better intelligence with Big Data.
Reservoir computing approaches to recurrent neural network training Echo State Networks and Liquid State Machines introduced a new paradigm in artificial recurrent neural network (RNN) training, where an RNN (the reservoir) is generated randomly and only a readout is trained. The paradigm, becoming known as reservoir computing, greatly facilitated the practical application of RNNs and outperformed classical fully trained RNNs in many tasks. It has lately become a vivid research field with numerous extensions of the basic idea, including reservoir adaptation, thus broadening the initial paradigm to using different methods for training the reservoir and the readout. This review systematically surveys both current ways of generating/adapting the reservoirs and training different types of readouts. It offers a natural conceptual classification of the techniques, which transcends boundaries of the current “brand-names” of reservoir methods, and thus aims to help in unifying the field and providing the reader with a detailed “map” of it.
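The defining trait of reservoir computing, a fixed random reservoir with only a trained linear readout, fits in a few lines; a minimal echo state network sketch with a ridge-regression readout (hyperparameters are illustrative):

```python
import numpy as np

def train_esn(u, y, n_res=80, rho=0.9, washout=50, ridge=1e-6, seed=0):
    """Minimal echo state network: random input and reservoir weights are
    generated once and left untrained; only the linear readout is fitted by
    ridge regression on the collected reservoir states."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, n_res)
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in * u_t + W @ x)               # leakless state update
        states.append(x)
    X = np.array(states)[washout:]                    # drop initial transient
    Y = np.asarray(y)[washout:]
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
    return W_out, X @ W_out                           # readout and fitted output
```

A simple sanity task is recalling a delayed copy of the input, which exercises the reservoir's short-term memory.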
Comment on "On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes" Comparison of generative and discriminative classifiers is an everlasting topic. As an important contribution to this topic, based on their theoretical and empirical comparisons between the naïve Bayes classifier and linear logistic regression, Ng and Jordan (NIPS, 2001, pp. 841-848) claimed that there exist two distinct regimes of performance between the generative and discriminative classifiers with regard to the training-set size. In this paper, our empirical and simulation studies, as a complement of their work, however, suggest that the existence of the two distinct regimes may not be so reliable. In addition, for real-world datasets, so far there is no theoretically correct, general criterion for choosing between the discriminative and the generative approaches to classification of an observation x into a class y; the choice depends on the relative confidence we have in the correctness of the specification of either p(y|x) or p(x, y) for the data. This can be to some extent a demonstration of why Efron (J Am Stat Assoc 70(352):892-898, 1975) and O'Neill (J Am Stat Assoc 75(369):154-160, 1980) prefer normal-based linear discriminant analysis (LDA) when no model mis-specification occurs but other empirical studies may prefer linear logistic regression instead. Furthermore, we suggest that pairing of either LDA assuming a common diagonal covariance matrix (diagonal LDA) or the naïve Bayes classifier with linear logistic regression may not be perfect, and hence it may not be reliable for any claim that was derived from the comparison between diagonal LDA or the naïve Bayes classifier and linear logistic regression to be generalised to all generative and discriminative classifiers.
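The generative-versus-discriminative comparison discussed above can be reproduced in miniature: a Gaussian naive Bayes classifier fit in closed form against a logistic regression fit by gradient descent, on synthetic two-class data. The data generator, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
X0 = rng.normal(loc=-1.0, scale=1.0, size=(n, 2))   # class 0
X1 = rng.normal(loc=+1.0, scale=1.0, size=(n, 2))   # class 1
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]

# --- Generative: Gaussian naive Bayes, per-class per-feature mean/variance ---
def nb_fit(X, y):
    params = {}
    for c in (0, 1):
        Xc = X[y == c]
        params[c] = (Xc.mean(0), Xc.var(0) + 1e-9, len(Xc) / len(X))
    return params

def nb_predict(params, X):
    scores = []
    for c in (0, 1):
        mu, var, prior = params[c]
        logp = -0.5 * np.sum((X - mu) ** 2 / var + np.log(2 * np.pi * var), axis=1)
        scores.append(logp + np.log(prior))
    return (scores[1] > scores[0]).astype(float)

# --- Discriminative: logistic regression via plain gradient descent ---
def lr_fit(X, y, lr=0.1, steps=500):
    Xb = np.c_[X, np.ones(len(X))]        # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

w = lr_fit(X, y)
acc_lr = np.mean(((np.c_[X, np.ones(len(X))] @ w) > 0) == y)
acc_nb = np.mean(nb_predict(nb_fit(X, y), X) == y)
```

On this well-separated toy problem the two families land close together; the regimes the comment paper examines only emerge when the training-set size is varied.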
Dest-ResNet: A Deep Spatiotemporal Residual Network for Hotspot Traffic Speed Prediction. With the ever-increasing urbanization process, the traffic jam has become a common problem in the metropolises around the world, making traffic speed prediction a crucial and fundamental task. This task is difficult due to the dynamic and intrinsic complexity of the traffic environment in urban cities, yet the emergence of crowd map query data sheds new light on it. In general, a burst of crowd map queries for the same destination in a short duration (called a "hotspot") could lead to traffic congestion. For example, queries of the Capital Gym burst on weekend evenings lead to traffic jams around the gym. However, unleashing the power of crowd map queries is challenging due to the innate spatiotemporal characteristics of the crowd queries. To bridge the gap, this paper first discovers hotspots underlying crowd map queries. These discovered hotspots address the spatiotemporal variations. Then Dest-ResNet (Deep spatiotemporal Residual Network) is proposed for hotspot traffic speed prediction. Dest-ResNet is a sequence learning framework that jointly deals with two sequences in different modalities, i.e., the traffic speed sequence and the query sequence. The main idea of Dest-ResNet is to learn to explain and amend the errors caused when the unimodal information is applied individually. In this way, Dest-ResNet addresses the temporal causal correlation between queries and the traffic speed. As a result, Dest-ResNet shows a 30% relative boost over the state-of-the-art methods on real-world datasets from Baidu Map.
Deep Autoencoder Neural Networks for Short-Term Traffic Congestion Prediction of Transportation Networks. Traffic congestion prediction is critical for implementing intelligent transportation systems that improve the efficiency and capacity of transportation networks. However, despite its importance, traffic congestion prediction is much less investigated than traffic flow prediction, which is partially due to the severe lack of large-scale high-quality traffic congestion data and advanced algorithms. This paper proposes an accessible and general workflow to acquire large-scale traffic congestion data and to create traffic congestion datasets based on image analysis. With this workflow we create a dataset named Seattle Area Traffic Congestion Status (SATCS) based on traffic congestion map snapshots from a publicly available online traffic service provider, the Washington State Department of Transportation. We then propose a deep autoencoder-based neural network model with symmetrical layers for the encoder and the decoder to learn temporal correlations of a transportation network and predict traffic congestion. Our experimental results on the SATCS dataset show that the proposed DCPN model can efficiently and effectively learn temporal relationships of congestion levels of the transportation network for traffic congestion forecasting. Our method outperforms two other state-of-the-art neural network models in prediction performance, generalization capability, and computation efficiency.
Short-Term Traffic Prediction Based on DeepCluster in Large-Scale Road Networks Short-term traffic prediction (STTP) is one of the most critical capabilities in Intelligent Transportation Systems (ITS), which can be used to support driving decisions, alleviate traffic congestion and improve transportation efficiency. However, STTP of large-scale road networks remains challenging due to the difficulties of effectively modeling the diverse traffic patterns by high-dimensional time series. Therefore, this paper proposes a framework that involves a deep clustering method for STTP in large-scale road networks. The deep clustering method is employed to supervise the representation learning in a visualized way from the large unlabeled dataset. More specifically, to fully exploit the traffic periodicity, the raw series is first divided into a number of sub-series for triplet generation. The convolutional neural networks (CNNs) with triplet loss are utilized to extract the features of shape by transforming the series into visual images. The shape-based representations are then used to cluster road segments into groups. Thereafter, a model sharing strategy is further proposed to build recurrent NNs-based predictions through group-based models (GBMs). A GBM is built for a type of traffic patterns, instead of one road segment exclusively or all road segments uniformly. Our framework can not only significantly reduce the number of prediction models, but also improve their generalization by virtue of being trained on more diverse examples. Furthermore, the proposed framework is evaluated over a selected road network in Beijing. Experimental results show that the deep clustering method can effectively cluster the road segments and GBMs can achieve comparable prediction accuracy against the IBM with fewer prediction models.
Discovering spatio-temporal causal interactions in traffic data streams The detection of outliers in spatio-temporal traffic data is an important research problem in the data mining and knowledge discovery community. However to the best of our knowledge, the discovery of relationships, especially causal interactions, among detected traffic outliers has not been investigated before. In this paper we propose algorithms which construct outlier causality trees based on temporal and spatial properties of detected outliers. Frequent substructures of these causality trees reveal not only recurring interactions among spatio-temporal outliers, but potential flaws in the design of existing traffic networks. The effectiveness and strength of our algorithms are validated by experiments on a very large volume of real taxi trajectories in an urban road network.
A new approach for dynamic fuzzy logic parameter tuning in Ant Colony Optimization and its application in fuzzy control of a mobile robot Central idea is to avoid or slow down full convergence through the dynamic variation of parameters. Performance of different ACO variants was observed to choose one as the basis for the proposed approach. A convergence fuzzy controller with the objective of maintaining diversity to avoid premature convergence was created. Ant Colony Optimization is a population-based meta-heuristic that exploits a form of past performance memory that is inspired by the foraging behavior of real ants. The behavior of the Ant Colony Optimization algorithm is highly dependent on the values defined for its parameters. Adaptation and parameter control are recurring themes in the field of bio-inspired optimization algorithms. The present paper explores a new fuzzy approach for diversity control in Ant Colony Optimization. The main idea is to avoid or slow down full convergence through the dynamic variation of a particular parameter. The performance of different variants of the Ant Colony Optimization algorithm is analyzed to choose one as the basis for the proposed approach. A convergence fuzzy logic controller with the objective of maintaining diversity at some level to avoid premature convergence is created. Encouraging results on several traveling salesman problem instances and its application to the design of fuzzy controllers, in particular the optimization of membership functions for a unicycle mobile robot trajectory control, are presented with the proposed method.
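The diversity-control idea above can be sketched without the fuzzy machinery: a basic ant system for a tiny TSP in which the evaporation rate is lowered when iteration diversity drops, as a crude stand-in for the paper's fuzzy convergence controller. The 6-city instance, colony size, and all constants are illustrative assumptions.

```python
import itertools
import random

random.seed(2)
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3), (3, 9)]
n = len(cities)
dist = [[((cities[i][0] - cities[j][0]) ** 2 +
          (cities[i][1] - cities[j][1]) ** 2) ** 0.5
         for j in range(n)] for i in range(n)]

tau = [[1.0] * n for _ in range(n)]          # pheromone levels
alpha, beta = 1.0, 2.0                       # pheromone vs. heuristic weight

def build_tour():
    tour = [0]
    while len(tour) < n:
        i = tour[-1]
        cands = [j for j in range(n) if j not in tour]
        weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in cands]
        tour.append(random.choices(cands, weights=weights)[0])
    return tour

def tour_len(t):
    return sum(dist[t[k]][t[(k + 1) % n]] for k in range(n))

best = None
for it in range(60):
    tours = [build_tour() for _ in range(10)]
    lengths = [tour_len(t) for t in tours]
    # diversity = fraction of distinct tours built this iteration
    diversity = len({tuple(t) for t in tours}) / len(tours)
    # proportional rule replacing the fuzzy controller: when diversity drops,
    # evaporate more slowly so old trails keep competing and convergence slows
    rho = 0.1 + 0.4 * diversity
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)
    for t, L in zip(tours, lengths):
        for k in range(n):
            a, b = t[k], t[(k + 1) % n]
            tau[a][b] += 1.0 / L
            tau[b][a] += 1.0 / L
    cand = min(lengths)
    if best is None or cand < best:
        best = cand

# brute-force optimum for this tiny instance, for reference
opt = min(tour_len([0] + list(p)) for p in itertools.permutations(range(1, n)))
```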
Adaptive Navigation Support Adaptive navigation support is a specific group of technologies that support user navigation in hyperspace, by adapting to the goals, preferences and knowledge of the individual user. These technologies, originally developed in the field of adaptive hypermedia, are becoming increasingly important in several adaptive Web applications, ranging from Web-based adaptive hypermedia to adaptive virtual reality. This chapter provides a brief introduction to adaptive navigation support, reviews major adaptive navigation support technologies and mechanisms, and illustrates these with a range of examples.
Learning to Predict Driver Route and Destination Intent For many people, driving is a routine activity where people drive to the same destinations using the same routes on a regular basis. Many drivers, for example, will drive to and from work along a small set of routes, at about the same time every day of the working week. Similarly, although a person may shop on different days or at different times, they will often visit the same grocery store(s). In this paper, we present a novel approach to predicting driver intent that exploits the predictable nature of everyday driving. Our approach predicts a driver's intended route and destination through the use of a probabilistic model learned from observation of their driving habits. We show that by using a low-cost GPS sensor and a map database, it is possible to build a hidden Markov model (HMM) of the routes and destinations used by the driver. Furthermore, we show that this model can be used to make accurate predictions of the driver's destination and route through on-line observation of their GPS position during the trip. We present a thorough evaluation of our approach using a corpus of almost a month of real, everyday driving. Our results demonstrate the effectiveness of the approach, achieving approximately 98% accuracy in most cases. Such high performance suggests that the method can be harnessed for improved safety monitoring, route planning taking into account traffic density, and better trip duration prediction.
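A scaled-down version of this idea can be sketched with plain counting: per-destination segment statistics stand in for the HMM, and the posterior over destinations sharpens as more of the current trip is observed. The toy trip log, segment names, and add-one smoothing below are illustrative assumptions, not real GPS data.

```python
from collections import Counter, defaultdict

# Past trips: (sequence of road segments, destination).
past_trips = [
    (["home", "a", "b", "work"], "work"),
    (["home", "a", "b", "work"], "work"),
    (["home", "a", "c", "gym"], "gym"),
    (["home", "a", "b", "work"], "work"),
    (["home", "d", "shop"], "shop"),
]

prior = Counter(dest for _, dest in past_trips)
seg_given_dest = defaultdict(Counter)
for segs, dest in past_trips:
    for s in segs:
        seg_given_dest[dest][s] += 1

def posterior(observed):
    """P(dest | observed segments), with add-one smoothing (an assumption)."""
    vocab = len({s for segs, _ in past_trips for s in segs})
    scores = {}
    for dest, count in prior.items():
        score = count / len(past_trips)          # destination prior
        total = sum(seg_given_dest[dest].values())
        for s in observed:
            score *= (seg_given_dest[dest][s] + 1) / (total + vocab)
        scores[dest] = score
    z = sum(scores.values())
    return {d: v / z for d, v in scores.items()}

p1 = posterior(["home", "a"])          # still ambiguous: work vs. gym
p2 = posterior(["home", "a", "b"])     # segment "b" strongly implies work
```

The same update-as-you-observe behavior is what makes the full HMM usable on-line during a trip.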
A Minimal Set Of Coordinates For Describing Humanoid Shoulder Motion The kinematics of the anatomical shoulder are analysed and modelled as a parallel mechanism similar to a Stewart platform. A new method is proposed to describe the shoulder kinematics with minimal coordinates and solve the indeterminacy. The minimal coordinates are defined from bony landmarks and the scapulothoracic kinematic constraints. Independent from one another, they uniquely characterise the shoulder motion. A humanoid mechanism is then proposed with identical kinematic properties. It is then shown how minimal coordinates can be obtained for this mechanism and how the coordinates simplify both the motion-planning task and trajectory-tracking control. Lastly, the coordinates are also shown to have an application in the field of biomechanics where they can be used to model the scapulohumeral rhythm.
Massive MIMO Antenna Selection: Switching Architectures, Capacity Bounds, and Optimal Antenna Selection Algorithms. Antenna selection is a multiple-input multiple-output (MIMO) technology, which uses radio frequency (RF) switches to select a good subset of antennas. Antenna selection can alleviate the requirement on the number of RF transceivers, thus being attractive for massive MIMO systems. In massive MIMO antenna selection systems, RF switching architectures need to be carefully considered. In this paper, w...
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0.028571
0
0
0
0
0
0
Investigating learning outcomes and subjective experiences in 360-degree videos. Virtual Reality experiences, particularly the 360-degree video, have become popular in recent years for creating immersive educational experiences. However, much is still unknown regarding the educational effectiveness of this medium. Here we examined pre-to-post changes in well-being, simulator sickness, and learning outcomes across four devices of varying levels of immersion: a smartphone, Google Cardboard, Oculus Rift DK2, and Oculus CV1 using a space-themed 360° educational video. More immersive devices induced greater induction of place illusion, greater positive affect, and better learning outcomes while demonstrating low prevalence of simulator sickness. Greater immersion was also associated with an increased interest in learning more about the video's subject-matter. On the other hand, less immersive technology led to increased simulation sickness which may have led to suboptimal educational experiences. Overall, we found support for the hypothesis that highly immersive experiences using 360° videos provide positive educational experiences while minimizing simulator sickness.
Game immersion experience: its hierarchical structure and impact on game-based science learning. Many studies have shown the positive impact of serious educational games (SEGs) on learning outcomes. However, there still exists insufficient research that delves into the impact of immersive experience in the process of gaming on SEG-based science learning. The dual purpose of this study was to further explore this impact. One purpose was to develop and validate an innovative measurement, the Game Immersion Questionnaire (GIQ), and to further verify the hierarchical structure of game immersion by construct validity approaches, including exploratory factor analysis (EFA, n=257) and confirmatory factor analysis (CFA, n=1044). The second purpose was to investigate the impact of game immersion on science learning through SEG play (n=260). Overall, the results supported the internal structure of the GIQ with good reliability and validity, and the inter-factor bivariate correlations for each construct indicated a high internal consistency. Players did learn from playing an SEG, and game immersion experience did lead to higher gaming performance. Moreover, players' gaming performance plays a role in mediating the effect of immersion on science learning outcomes through SEG play. However, as players became more emotionally and subjectively attached to the game, the science learning outcomes were not definitively reliable.
A Comparative Study of Distributed Learning Environments on Learning Outcomes Advances in information and communication technologies have fueled rapid growth in the popularity of technology-supported distributed learning (DL). Many educational institutions, both academic and corporate, have undertaken initiatives that leverage the myriad of available DL technologies. Despite their rapid growth in popularity, however, alternative technologies for DL are seldom systematically evaluated for learning efficacy. Considering the increasing range of information and communication technologies available for the development of DL environments, we believe it is paramount for studies to compare the relative learning outcomes of various technologies.In this research, we employed a quasi-experimental field study approach to investigate the relative learning effectiveness of two collaborative DL environments in the context of an executive development program. We also adopted a framework of hierarchical characteristics of group support system (GSS) technologies, outlined by DeSanctis and Gallupe (1987), as the basis for characterizing the two DL environments.One DL environment employed a simple e-mail and listserv capability while the other used a sophisticated GSS (herein referred to as Beta system). Interestingly, the learning outcome of the e-mail environment was higher than the learning outcome of the more sophisticated GSS environment. The post-hoc analysis of the electronic messages indicated that the students in groups using the e-mail system exchanged a higher percentage of messages related to the learning task. The Beta system users exchanged a higher level of technology sense-making messages. No significant difference was observed in the students' satisfaction with the learning process under the two DL environments.
The effects of instructional support and learner interests when learning using computer simulations Within the scope of this study, the effectiveness of two kinds of instructional support was evaluated with regard to the learner's interests. Two versions of a simulation program about the respiratory chain were developed, differing only in the kind of tasks provided for instructional support: One version contained problem-solving tasks, the other one contained worked-out examples. The focus was on the learner's interest in the subject and in computers. The first goal of the study was to find to what extent computer simulations incorporating the different kinds of instructional support have positive effects on situational subject-interest. The second goal was to evaluate the interactions between the learner's interests and the instructional support with regard to the learning results (subdivided into factual knowledge and understanding). Simulations with worked-out examples were shown to have positive effects on the learner's situational interest in the subject. This was not found to be the case in simulations with problem-solving tasks. Regardless of the kind of instructional support, learners with little interest in the subject were able to achieve significant gains in factual knowledge. However, improvement in understanding was dependent on the kind of instructional support.
Creating 360° educational video: a case study. The application of virtual reality (VR) to education has been documented for over half a century. During this time studies investigating its use have demonstrated positive findings ranging from increased time on task, to enjoyment, motivation and retention. Despite this, VR systems have never achieved widespread adoption in education. This is arguably due to both limitations of the VR technologies themselves, and the overhead incurred by both content developers and users. In this paper we describe a case study of an alternative approach to creating educational VR content. Instead of using computer graphics, we used a spherical camera in conjunction with a VR head-mounted display to provide 360° educational lectures. The content creation process, as well as issues we encountered during this study are explained, before we conclude by discussing the viability of this approach.
A gender matching effect in learning with pedagogical agents in an immersive virtual reality science simulation The main objective of this study is to determine whether boys and girls learn better when the characteristics of the pedagogical agent are matched to the gender of the learner while learning in immersive virtual reality (VR). Sixty-six middle school students (33 females) were randomly assigned to learn about laboratory safety with one of two pedagogical agents: Marie or a drone, whom we predicted would serve as role models for females and males, respectively. The results indicated that there were significant interactions for the dependent variables of performance during learning, retention, and transfer, with girls performing better with Marie (d = 0.98, d = 0.67, and d = 1.03; for performance, retention, and transfer, respectively) and boys performing better with the drone (d = -0.41, d = -0.45, d = -0.23, respectively). The results suggest that gender-specific design of pedagogical agents may play an important role in VR learning environments.
A systematic review of immersive virtual reality applications for higher education: Design elements, lessons learned, and research agenda. Researchers have explored the benefits and applications of virtual reality (VR) in different scenarios. VR possesses much potential and its application in education has seen much research interest lately. However, little systematic work currently exists on how researchers have applied immersive VR for higher education purposes that considers the usage of both high-end and budget head-mounted displays (HMDs). Hence, we propose using systematic mapping to identify design elements of existing research dedicated to the application of VR in higher education. The reviewed articles were acquired by extracting key information from documents indexed in four scientific digital libraries, which were filtered systematically using exclusion, inclusion, semi-automatic, and manual methods. Our review emphasizes three key points: the current domain structure in terms of the learning contents, the VR design elements, and the learning theories, as a foundation for successful VR-based learning. The mapping was conducted between application domains and learning contents and between design elements and learning contents. Our analysis has uncovered several gaps in the application of VR in the higher education sphere—for instance, learning theories were not often considered in VR application development to assist and guide toward learning outcomes. Furthermore, the evaluation of educational VR applications has primarily focused on usability of the VR apps instead of learning outcomes and immersive VR has mostly been a part of experimental and development work rather than being applied regularly in actual teaching. Nevertheless, VR seems to be a promising sphere as this study identifies 18 application domains, indicating a better reception of this technology in many disciplines. The identified gaps point toward unexplored regions of VR design for education, which could motivate future work in the field.
A Tutorial On Visual Servo Control This article provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines, our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed in detail. Since any visual servo system must be capable of tracking image features in a sequence of images, we also include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.
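The image-based class of systems surveyed above can be illustrated with the classic control law v = -lambda * L^+ e, here reduced to a single point feature and pure x/y camera translation at a known constant depth Z, so the interaction matrix is just a scaled identity. These reductions and all constants are illustrative assumptions.

```python
import numpy as np

Z, lam, dt = 2.0, 0.5, 0.1
L = -(1.0 / Z) * np.eye(2)                 # interaction (image Jacobian) matrix

s = np.array([0.3, -0.2])                  # current image feature
s_star = np.array([0.0, 0.0])              # desired image feature

errors = []
for _ in range(100):
    e = s - s_star                         # image-plane error
    errors.append(float(np.linalg.norm(e)))
    v = -lam * np.linalg.pinv(L) @ e       # camera velocity command
    s = s + L @ v * dt                     # simulated image-plane response

final_err = float(np.linalg.norm(s - s_star))
```

Substituting the control law into the feature dynamics gives e_dot = -lambda * e, i.e., the exponential decay of the image error that image-based schemes are designed around.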
Energy-Optimized Partial Computation Offloading in Mobile-Edge Computing With Genetic Simulated-Annealing-Based Particle Swarm Optimization Smart mobile devices (SMDs) can meet users' high expectations by executing computational intensive applications but they only have limited resources, including CPU, memory, battery power, and wireless medium. To tackle this limitation, partial computation offloading can be used as a promising method to schedule some tasks of applications from resource-limited SMDs to high-performance edge servers. However, it brings communication overhead issues caused by limited bandwidth and inevitably increases the latency of tasks offloaded to edge servers. Therefore, it is highly challenging to achieve a balance between high-resource consumption in SMDs and high communication cost for providing energy-efficient and latency-low services to users. This work proposes a partial computation offloading method to minimize the total energy consumed by SMDs and edge servers by jointly optimizing the offloading ratio of tasks, CPU speeds of SMDs, allocated bandwidth of available channels, and transmission power of each SMD in each time slot. It jointly considers the execution time of tasks performed in SMDs and edge servers, and transmission time of data. It also jointly considers latency limits, CPU speeds, transmission power limits, available energy of SMDs, and the maximum number of CPU cycles and memories in edge servers. Considering these factors, a nonlinear constrained optimization problem is formulated and solved by a novel hybrid metaheuristic algorithm named genetic simulated annealing-based particle swarm optimization (GSP) to produce a close-to-optimal solution. GSP achieves joint optimization of computation offloading between a cloud data center and the edge, and resource allocation in the data center. Real-life data-based experimental results prove that it achieves lower energy consumption in less convergence time than its three typical peers.
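The hybrid metaheuristic described above can be caricatured as a particle swarm whose personal-best update passes through a simulated-annealing acceptance test, so early iterations occasionally keep worse positions. The 2-D quadratic objective and all constants below are illustrative assumptions; the actual GSP method optimizes offloading ratios, CPU speeds, bandwidth, and transmission power under the stated constraints.

```python
import math
import random

random.seed(3)

def energy(x, y):                           # toy objective, minimum 0 at (1, 2)
    return (x - 1.0) ** 2 + (y - 2.0) ** 2

n_particles, iters = 20, 200
pos = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(n_particles)]
vel = [[0.0, 0.0] for _ in range(n_particles)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=lambda p: energy(*p))[:]
T = 1.0                                     # annealing temperature

for it in range(iters):
    for i in range(n_particles):
        for d in range(2):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                         + 1.5 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        delta = energy(*pos[i]) - energy(*pbest[i])
        # SA-style acceptance: always keep improvements, sometimes keep
        # worse positions while the temperature is still high
        if delta < 0 or random.random() < math.exp(-delta / max(T, 1e-9)):
            pbest[i] = pos[i][:]
        if energy(*pbest[i]) < energy(*gbest):
            gbest = pbest[i][:]
    T *= 0.98                               # cool down

best_val = energy(*gbest)
```

The global best is only ever replaced by strict improvements, so the SA acceptance adds exploration without sacrificing the monotone convergence of the incumbent solution.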
A hierarchical type-2 fuzzy logic control architecture for autonomous mobile robots Autonomous mobile robots navigating in changing and dynamic unstructured environments like the outdoor environments need to cope with large amounts of uncertainties that are inherent of natural environments. The traditional type-1 fuzzy logic controller (FLC) using precise type-1 fuzzy sets cannot fully handle such uncertainties. A type-2 FLC using type-2 fuzzy sets can handle such uncertainties to produce a better performance. In this paper, we present a novel reactive control architecture for autonomous mobile robots that is based on type-2 FLC to implement the basic navigation behaviors and the coordination between these behaviors to produce a type-2 hierarchical FLC. In our experiments, we implemented this type-2 architecture in different types of mobile robots navigating in indoor and outdoor unstructured and challenging environments. The type-2-based control system dealt with the uncertainties facing mobile robots in unstructured environments and resulted in a very good performance that outperformed the type-1-based control system while achieving a significant rule reduction compared to the type-1 system.
IntrospectiveViews: an interface for scrutinizing semantic user models User models are a key component for user-adaptive systems. They represent information about users such as interests, expertise, goals, traits, etc. This information is used to achieve various adaptation effects, e.g., recommending relevant documents or products. To ensure acceptance by users, these models need to be scrutable, i.e., users must be able to view and alter them to understand and if necessary correct the assumptions the system makes about the user. However, in most existing systems, this goal is not met. In this paper, we introduce IntrospectiveViews, an interface that enables the user to view and edit her user model. Furthermore, we present the results of a formative evaluation that show the importance users give in general to different aspects of scrutable user models and also substantiate our claim that IntrospectiveViews is an appropriate realization of an interface to such models.
Placing Virtual Machines to Optimize Cloud Gaming Experience Optimizing cloud gaming experience is no easy task due to the complex tradeoff between gamer quality of experience (QoE) and provider net profit. We tackle the challenge and study an optimization problem to maximize the cloud gaming provider's total profit while achieving just-good-enough QoE. We conduct measurement studies to derive the QoE and performance models. We formulate and optimally solve the problem. The optimization problem has exponential running time, and we develop an efficient heuristic algorithm. We also present an alternative formulation and algorithms for closed cloud gaming services with dedicated infrastructures, where the profit is not a concern and overall gaming QoE needs to be maximized. We present a prototype system and testbed using off-the-shelf virtualization software, to demonstrate the practicality and efficiency of our algorithms. Our experience on realizing the testbed sheds some lights on how cloud gaming providers may build up their own profitable services. Last, we conduct extensive trace-driven simulations to evaluate our proposed algorithms. The simulation results show that the proposed heuristic algorithms: (i) produce close-to-optimal solutions, (ii) scale to large cloud gaming services with 20,000 servers and 40,000 gamers, and (iii) outperform the state-of-the-art placement heuristic, e.g., by up to 3.5 times in terms of net profits.
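A greatly simplified version of the placement problem above can be sketched as a greedy heuristic: sessions are admitted in revenue order, a server is opened only when the batch it would host covers its cost, and just-good-enough QoE is modeled as a per-server load cap. Capacities, costs, revenues, and the QoE model are all illustrative assumptions.

```python
# Three identical servers; opening one costs money, and QoE is modeled
# as a hard cap on concurrent sessions per server.
servers = [{"cap": 4, "cost": 5.0, "load": 0} for _ in range(3)]
sessions = [{"revenue": r} for r in (3.0, 2.5, 2.0, 4.0, 1.0, 3.5, 2.2, 0.5)]

def profit(placed):
    """Total session revenue minus the cost of every server actually used."""
    used = {id(s) for _, s in placed}
    revenue = sum(g["revenue"] for g, _ in placed)
    cost = sum(s["cost"] for s in servers if id(s) in used)
    return revenue - cost

placement = []
remaining = sorted(sessions, key=lambda g: -g["revenue"])  # high revenue first
for s in servers:
    batch = remaining[:s["cap"]]
    if sum(g["revenue"] for g in batch) <= s["cost"]:
        break                              # opening this server loses money
    for g in batch:
        placement.append((g, s))
        s["load"] += 1
    remaining = remaining[s["cap"]:]

total_profit = profit(placement)
```

Filling each opened server to its QoE cap before opening the next mirrors the paper's tension between maximizing provider profit and keeping per-gamer QoE just good enough.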
Gender Bias in Coreference Resolution. We present an empirical study of gender bias in coreference resolution systems. We first introduce a novel, Winograd schema-style set of minimal pair sentences that differ only by pronoun gender. With these Winogender schemas, we evaluate and confirm systematic gender bias in three publicly-available coreference resolution systems, and correlate this bias with real-world and textual gender statistics.
Learning Feature Recovery Transformer for Occluded Person Re-Identification One major issue that challenges person re-identification (Re-ID) is the ubiquitous occlusion over the captured persons. There are two main challenges for the occluded person Re-ID problem, i.e., the interference of noise during feature matching and the loss of pedestrian information brought by the occlusions. In this paper, we propose a new approach called Feature Recovery Transformer (FRT) to address the two challenges simultaneously, which mainly consists of visibility graph matching and feature recovery transformer. To reduce the interference of the noise during feature matching, we mainly focus on visible regions that appear in both images and develop a visibility graph to calculate the similarity. In terms of the second challenge, based on the developed graph similarity, for each query image, we propose a recovery transformer that exploits the feature sets of its k-nearest neighbors in the gallery to recover the complete features. Extensive experiments across different person Re-ID datasets, including occluded, partial and holistic datasets, demonstrate the effectiveness of FRT. Specifically, FRT significantly outperforms state-of-the-art results by at least 6.2% Rank-1 accuracy and 7.2% mAP scores on the challenging Occluded-Duke dataset.
Flight Delay Prediction Based on Aviation Big Data and Machine Learning Accurate flight delay prediction is fundamental to building a more efficient airline business. Recent studies have focused on applying machine learning methods to predict flight delay. Most of the previous prediction methods are conducted on a single route or airport. This paper explores a broader scope of factors which may potentially influence flight delay, and compares several machine learning-based models on designed generalized flight delay prediction tasks. To build a dataset for the proposed scheme, automatic dependent surveillance-broadcast (ADS-B) messages are received, pre-processed, and integrated with other information such as weather conditions, flight schedules, and airport information. The designed prediction tasks contain different classification tasks and a regression task. Experimental results show that long short-term memory (LSTM) is capable of handling the obtained aviation sequence data, but an overfitting problem occurs on our limited dataset. Compared with the previous schemes, the proposed random forest-based model can obtain higher prediction accuracy (90.2% for the binary classification) and can overcome the overfitting problem.
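The ensemble idea behind the random-forest classifier above can be illustrated with a miniature bagged ensemble of decision stumps on synthetic features; the data, labels, and stump learner here are illustrative stand-ins, not the paper's ADS-B/weather pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))       # synthetic "flight" features
y = (X[:, 0] > 0).astype(int)        # synthetic delayed / on-time rule

def fit_stump(X, y):
    """Pick the (feature, threshold, flip) with the best training accuracy."""
    best = None
    for f in range(X.shape[1]):
        for t in np.percentile(X[:, f], [25, 50, 75]):
            acc = ((X[:, f] > t).astype(int) == y).mean()
            score, flip = max(acc, 1 - acc), acc < 0.5
            if best is None or score > best[0]:
                best = (score, f, t, flip)
    return best[1:]

def forest_predict(stumps, X):
    """Majority vote over the bagged stumps."""
    votes = np.mean([((X[:, f] > t).astype(int) ^ flip)
                     for f, t, flip in stumps], axis=0)
    return (votes > 0.5).astype(int)

# Bagging: each stump sees a bootstrap sample of the first 1500 rows.
stumps = [fit_stump(X[idx], y[idx])
          for idx in (rng.integers(0, 1500, 1500) for _ in range(25))]
acc = (forest_predict(stumps, X[1500:]) == y[1500:]).mean()
```

In practice one would use a library implementation (e.g. scikit-learn's `RandomForestClassifier`) rather than hand-rolled stumps; the sketch only shows why bootstrap aggregation stabilizes the prediction.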
A Private and Efficient Mechanism for Data Uploading in Smart Cyber-Physical Systems. To provide fine-grained access to different dimensions of the physical world, the data uploading in smart cyber-physical systems suffers novel challenges on both energy conservation and privacy preservation. It is always critical for participants to consume as little energy as possible for data uploading. However, simply pursuing energy efficiency may lead to extreme disclosure of private informat...
Exploring Data Validity in Transportation Systems for Smart Cities. Efficient urban transportation systems are widely accepted as essential infrastructure for smart cities, and they can greatly increase a city's vitality and convenience for residents. The three core pillars of smart cities can be considered to be data mining technology, IoT, and mobile wireless networks. Enormous data from IoT is stimulating our cities to become smarter than ever before. In transportation systems, data-driven management can dramatically enhance the operating efficiency by providing a clear and insightful image of passengers' transportation behavior. In this article, we focus on the data validity problem in a cellular network based transportation data collection system from two aspects: internal time discrepancy and data loss. First, the essence of time discrepancy was analyzed for both automated fare collection (AFC) and automated vehicular location (AVL) systems, and it was found that time discrepancies can be identified and rectified by analyzing passenger origin inference success rate using different time shift values and evolutionary algorithms. Second, the algorithmic framework to handle location data loss and time discrepancy was provided. Third, the spatial distribution characteristics of location data loss events were analyzed, and we discovered that they have a strong and positive relationship with both high passenger volume and shadowing effects in urbanized areas, which can cause severe biases on passenger traffic analysis. Our research has proposed some data-driven methodologies to increase data validity and provided some insights into the influence of IoT level data loss on public transportation systems for smart cities.
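The time-discrepancy rectification above (search over time-shift values for the one maximizing inference success) can be sketched on synthetic timestamps; the match-rate criterion and all parameters here are illustrative assumptions, and the paper additionally uses evolutionary algorithms rather than a plain grid search.

```python
import numpy as np

rng = np.random.default_rng(1)
avl = np.sort(rng.uniform(0, 10_000, 500))               # AVL event timestamps
true_offset = 37.0
afc = avl + true_offset + rng.normal(0, 0.5, avl.size)   # AFC events, skewed clock

def match_rate(afc, avl, shift, tol=1.0):
    """Fraction of shifted AFC events within `tol` seconds of some AVL event."""
    shifted = afc - shift
    idx = np.clip(np.searchsorted(avl, shifted), 1, avl.size - 1)
    nearest = np.minimum(np.abs(avl[idx] - shifted),
                         np.abs(avl[idx - 1] - shifted))
    return (nearest <= tol).mean()

shifts = np.arange(0.0, 100.0, 1.0)
best_shift = shifts[np.argmax([match_rate(afc, avl, s) for s in shifts])]
```

The recovered `best_shift` approximates the internal clock offset between the two logging systems, after which the AFC timestamps can be rectified.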
TGNet: Learning to Rank Nodes in Temporal Graphs. Node ranking in temporal networks is often impacted by heterogeneous context from node content, temporal, and structural dimensions. This paper introduces TGNet , a deep learning framework for node ranking in heterogeneous temporal graphs. TGNet utilizes a variant of Recurrent Neural Network to adapt context evolution and extract context features for nodes. It incorporates a novel influence network to dynamically estimate temporal and structural influence among nodes over time. To cope with label sparsity, it integrates graph smoothness constraints as a weak form of supervision. We show that the application of TGNet is feasible for large-scale networks by developing efficient learning and inference algorithms with optimization techniques. Using real-life data, we experimentally verify the effectiveness and efficiency of TGNet techniques. We also show that TGNet yields intuitive explanations for applications such as alert detection and academic impact ranking, as verified by our case study.
Seed-free Graph De-anonymization with Adversarial Learning Huge amounts of graph data are published and shared for research and business purposes, which brings great benefit to our society. However, user privacy is badly undermined even when user identities are anonymized. Graph de-anonymization, which identifies nodes from an anonymized graph, is widely adopted to evaluate users' privacy risks. Most existing de-anonymization methods rely heavily on side information (e.g., seeds, user profiles, community labels) and are unrealistic due to the difficulty of collecting this side information. A few graph de-anonymization methods using only structural information, called seed-free methods, have been proposed recently; they mainly take advantage of local, hand-crafted node features while overlooking the global structural information of the graph. In this paper, a seed-free graph de-anonymization method is proposed, where a deep neural network is adopted to learn features and an adversarial framework is employed for node matching. To be specific, the latent representation of each node is obtained by a graph autoencoder. Furthermore, an adversarial learning model is proposed to transform the embedding of the anonymized graph into the latent space of the auxiliary graph embedding, such that a linear mapping can be derived from a global perspective. Finally, the most similar node pairs in the latent space are utilized as anchor nodes to launch propagation that de-anonymizes all the remaining nodes. Extensive experiments on real datasets demonstrate that our method is comparable with seed-based approaches and significantly outperforms the state-of-the-art seed-free method.
GraphSleepNet: Adaptive Spatial-Temporal Graph Convolutional Networks for Sleep Stage Classification
Wireless sensor networks: a survey This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
Microsoft Coco: Common Objects In Context We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
Markov games as a framework for multi-agent reinforcement learning In the Markov decision process (MDP) formalization of reinforcement learning, a single adaptive agent interacts with an environment defined by a probabilistic transition function. In this solipsis-tic view, secondary agents can only be part of the environment and are therefore fixed in their behavior. The framework of Markov games allows us to widen this view to include multiple adaptive agents with interacting or competing goals. This paper considers a step in this direction in which exactly two agents with diametrically opposed goals share an environment. It describes a Q-learning-like algorithm for finding optimal policies and demonstrates its application to a simple two-player game in which the optimal policy is probabilistic.
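The abstract above notes that optimal policies in two-player zero-sum games are probabilistic. A minimal illustration of that point, computing the mixed equilibrium of a one-state matrix game (the degenerate case of a Markov game) via fictitious play; this is not the paper's Q-learning-like algorithm, just a sketch of why randomization is optimal.

```python
import numpy as np

# Matching pennies: payoff matrix for the row player (zero-sum game).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def fictitious_play(A, iters=20000):
    """Each player best-responds to the opponent's empirical mixture;
    for zero-sum games the empirical frequencies converge to a minimax
    (mixed) equilibrium."""
    m, n = A.shape
    row_counts, col_counts = np.ones(m), np.ones(n)
    for _ in range(iters):
        r = int(np.argmax(A @ (col_counts / col_counts.sum())))
        c = int(np.argmin((row_counts / row_counts.sum()) @ A))
        row_counts[r] += 1
        col_counts[c] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

p, q = fictitious_play(A)
value = float(p @ A @ q)   # game value: 0 for matching pennies
```

No deterministic policy achieves the game value here; only the uniform mixture does, which is the core motivation for probabilistic policies in the Markov-games framework.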
Scalable and efficient provable data possession. Storage outsourcing is a rising trend which prompts a number of interesting security issues, many of which have been extensively investigated in the past. However, Provable Data Possession (PDP) is a topic that has only recently appeared in the research literature. The main issue is how to frequently, efficiently and securely verify that a storage server is faithfully storing its client's (potentially very large) outsourced data. The storage server is assumed to be untrusted in terms of both security and reliability. (In other words, it might maliciously or accidentally erase hosted data; it might also relegate it to slow or off-line storage.) The problem is exacerbated by the client being a small computing device with limited resources. Prior work has addressed this problem using either public key cryptography or requiring the client to outsource its data in encrypted form. In this paper, we construct a highly efficient and provably secure PDP technique based entirely on symmetric key cryptography, while not requiring any bulk encryption. Also, in contrast with its predecessors, our PDP technique allows outsourcing of dynamic data, i.e, it efficiently supports operations, such as block modification, deletion and append.
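The symmetric-key PDP idea above can be sketched as a precomputed challenge-response check; the token construction below is an illustrative simplification (in the actual scheme the client stores tokens in encrypted form and challenges cover selected blocks, not the whole file).

```python
import hashlib, os

def make_tokens(data, num_challenges):
    """Client side, before outsourcing: precompute (nonce, expected-proof)
    pairs over the data. Only the client keeps these tokens."""
    tokens = []
    for _ in range(num_challenges):
        nonce = os.urandom(16)
        tokens.append((nonce, hashlib.sha256(nonce + data).digest()))
    return tokens

def server_prove(nonce, stored_data):
    """Server side, at audit time: recompute the proof over what it stores.
    The nonce prevents the server from caching a single precomputed answer."""
    return hashlib.sha256(nonce + stored_data).digest()

data = b"outsourced file contents"
tokens = make_tokens(data, 3)          # each token is usable for one audit
nonce, expected = tokens[0]
ok = server_prove(nonce, data) == expected
bad = server_prove(nonce, b"tampered contents") == expected
```

Because no public-key operations or bulk encryption are involved, each audit costs one hash on both sides, which is the efficiency argument the abstract makes.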
Cognitive Cars: A New Frontier for ADAS Research This paper provides a survey of recent works on cognitive cars with a focus on driver-oriented intelligent vehicle motion control. The main objective here is to clarify the goals and guidelines for future development in the area of advanced driver-assistance systems (ADASs). Two major research directions are investigated and discussed in detail: 1) stimuli–decisions–actions, which focuses on the driver side, and 2) perception enhancement–action-suggestion–function-delegation, which emphasizes the ADAS side. This paper addresses the important achievements and major difficulties of each direction and discusses how to combine the two directions into a single integrated system to obtain safety and comfort while driving. Other related topics, including driver training and infrastructure design, are also studied.
Completely Pinpointing the Missing RFID Tags in a Time-Efficient Way Radio Frequency Identification (RFID) technology has been widely used in inventory management in many scenarios, e.g., warehouses, retail stores, hospitals, etc. This paper investigates a challenging problem of complete identification of missing tags in large-scale RFID systems. Although this problem has attracted extensive attention from academia and industry, the existing work can hardly satisfy the stringent real-time requirements. In this paper, a Slot Filter-based Missing Tag Identification (SFMTI) protocol is proposed to reconcile some expected collision slots into singleton slots and filter out the expected empty slots as well as the irreconcilable collision slots, thereby achieving the improved time-efficiency. The theoretical analysis is conducted to minimize the execution time of the proposed SFMTI. We then propose a cost-effective method to extend SFMTI to the multi-reader scenarios. The extensive simulation experiments and performance results demonstrate that the proposed SFMTI protocol outperforms the most promising Iterative ID-free Protocol (IIP) by reducing nearly 45% of the required execution time, and is just within a factor of 1.18 from the lower bound of the minimum execution time.
A blind medical image watermarking: DWT-SVD based robust and secure approach for telemedicine applications. In this paper, a blind image watermarking scheme based on discrete wavelet transform (DWT) and singular value decomposition (SVD) is proposed. In this scheme, DWT is applied on ROI (region of interest) of the medical image to get different frequency subbands of its wavelet decomposition. On the low frequency subband LL of the ROI, block-SVD is applied to get different singular matrices. A pair of elements with similar values is identified from the left singular value matrix of these selected blocks. The values of these pairs are modified using a certain threshold to embed a bit of watermark content. An appropriate threshold is chosen to achieve the imperceptibility and robustness of the medical image and watermark contents, respectively. For authentication and identification of the original medical image, one watermark image (logo) and a text watermark have been used. The watermark image provides authentication, whereas the text data represents the electronic patient record (EPR) for identification. At the receiving end, blind recovery of both watermark contents is performed by a comparison scheme similar to the one used during the embedding process. The proposed algorithm is applied on various groups of medical images like X-ray, CT scan and mammography. This scheme offers better visibility of the watermarked image and recovery of watermark content due to the DWT-SVD combination. Moreover, use of Hamming error correcting code (ECC) on EPR text bits reduces the BER and thus provides better recovery of EPR. The performance of the proposed algorithm with EPR data coded by the Hamming code is compared with the BCH error correcting code, and the latter is found to perform better. Result analysis shows that the imperceptibility of the watermarked image is good, as PSNR is above 43 dB and WPSNR is above 52 dB for all sets of images. In addition, the robustness of the scheme is better than an existing scheme for a similar set of medical images in terms of normalized correlation coefficient (NCC) and bit-error-rate (BER). An analysis is also carried out to verify the performance of the proposed scheme for different sizes of watermark contents (image and EPR data). It is observed from the analysis that the proposed scheme is also appropriate for watermarking of color images. Using the proposed scheme, watermark contents are extracted successfully under various noise attacks like JPEG compression, filtering, Gaussian noise, salt and pepper noise, cropping, and rotation. Performance comparison with existing schemes shows that the proposed scheme has better robustness against different types of attacks. Moreover, the proposed scheme is also robust under the set of benchmark attacks known as checkmark attacks.
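The blind SVD-embedding idea above can be sketched with a deliberately simplified rule: encode a bit in the ordering of the largest singular values of a pair of blocks, so extraction needs no side information. This is NOT the paper's exact DWT-domain pair rule; the margin and block sizes are illustrative.

```python
import numpy as np

def embed_bit(b1, b2, bit, margin=1.05):
    """Scale one block so the ordering of the pair's largest singular
    values encodes `bit` (scaling a block scales its singular values)."""
    s1 = np.linalg.svd(b1, compute_uv=False)[0]
    s2 = np.linalg.svd(b2, compute_uv=False)[0]
    if bit and s1 <= s2 * margin:
        b1 = b1 * (s2 * margin / s1)
    elif not bit and s2 <= s1 * margin:
        b2 = b2 * (s1 * margin / s2)
    return b1, b2

def extract_bit(b1, b2):
    """Blind extraction: only the watermarked blocks are needed."""
    s1 = np.linalg.svd(b1, compute_uv=False)[0]
    s2 = np.linalg.svd(b2, compute_uv=False)[0]
    return int(s1 > s2)

rng = np.random.default_rng(0)
blocks = rng.uniform(0, 255, (4, 8, 8))   # stand-ins for LL-subband blocks
bits_in = [1, 0]
w0 = embed_bit(blocks[0], blocks[1], bits_in[0])
w1 = embed_bit(blocks[2], blocks[3], bits_in[1])
bits_out = [extract_bit(*w0), extract_bit(*w1)]
```

The `margin` plays the role of the threshold in the abstract: a larger margin survives stronger attacks at the cost of more visible distortion.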
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
In the light of feature distributions: moment matching for Neural Style Transfer Style transfer aims to render the content of a given image in the graphical/artistic style of another image. The fundamental concept underlying Neural Style Transfer (NST) is to interpret style as a distribution in the feature space of a Convolutional Neural Network, such that a desired style can be achieved by matching its feature distribution. We show that most current implementations of that concept have important theoretical and practical limitations, as they only partially align the feature distributions. We propose a novel approach that matches the distributions more precisely, thus reproducing the desired style more faithfully, while still being computationally efficient. Specifically, we adapt the dual form of Central Moment Discrepancy (CMD), as recently proposed for domain adaptation, to minimize the difference between the target style and the feature distribution of the output image. The dual interpretation of this metric explicitly matches all higher-order centralized moments and is therefore a natural extension of existing NST methods that only take into account the first and second moments. Our experiments confirm that the strong theoretical properties also translate to visually better style transfer, and better disentangle style from semantic image content.
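The moment-matching idea above can be sketched as a plain (unbounded) Central Moment Discrepancy between two feature samples: compare the means, then each higher-order centralized moment per channel. The paper's dual-form CMD is a bounded, weighted version; this sketch drops those refinements.

```python
import numpy as np

def cmd(x, y, k=5):
    """Sum of per-channel differences between the means and the
    centralized moments of orders 2..k of two samples (rows = samples)."""
    mx, my = x.mean(axis=0), y.mean(axis=0)
    d = np.linalg.norm(mx - my)                     # first moment
    for order in range(2, k + 1):
        cx = ((x - mx) ** order).mean(axis=0)       # centralized moment of x
        cy = ((y - my) ** order).mean(axis=0)
        d += np.linalg.norm(cx - cy)
    return d

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(1000, 8))   # "style" feature distribution
b = rng.normal(0.5, 2.0, size=(1000, 8))   # distribution with shifted mean/variance
same = cmd(a, a)
diff = cmd(a, b)
```

Truncating the loop at `k = 2` recovers the mean-and-variance matching of earlier NST methods, which is the sense in which CMD is their natural higher-order extension.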
Space-time super-resolution. We propose a method for constructing a video sequence of high space-time resolution by combining information from multiple low-resolution video sequences of the same dynamic scene. Super-resolution is performed simultaneously in time and in space. By "temporal super-resolution," we mean recovering rapid dynamic events that occur faster than regular frame-rate. Such dynamic events are not visible (or else are observed incorrectly) in any of the input sequences, even if these are played in "slow-motion." The spatial and temporal dimensions are very different in nature, yet are interrelated. This leads to interesting visual trade-offs in time and space and to new video applications. These include: 1) treatment of spatial artifacts (e.g., motion-blur) by increasing the temporal resolution and 2) combination of input sequences of different space-time resolutions (e.g., NTSC, PAL, and even high quality still images) to generate a high quality video sequence. We further analyze and compare characteristics of temporal super-resolution to those of spatial super-resolution. These include: How many video cameras are needed to obtain increased resolution? What is the upper bound on resolution improvement via super-resolution? What is the temporal analogue to the spatial "ringing" effect?
Transient attributes for high-level understanding and editing of outdoor scenes We live in a dynamic visual world where the appearance of scenes changes dramatically from hour to hour or season to season. In this work we study "transient scene attributes" -- high level properties which affect scene appearance, such as "snow", "autumn", "dusk", "fog". We define 40 transient attributes and use crowdsourcing to annotate thousands of images from 101 webcams. We use this "transient attribute database" to train regressors that can predict the presence of attributes in novel images. We demonstrate a photo organization method based on predicted attributes. Finally we propose a high-level image editing method which allows a user to adjust the attributes of a scene, e.g. change a scene to be "snowy" or "sunset". To support attribute manipulation we introduce a novel appearance transfer technique which is simple and fast yet competitive with the state-of-the-art. We show that we can convincingly modify many transient attributes in outdoor scenes.
Semantic Understanding of Scenes through the ADE20K Dataset. Semantic understanding of visual scenes is one of the holy grails of computer vision. Despite efforts of the community in data collection, there are still few image datasets covering a wide range of scenes and object categories with pixel-wise annotations for scene understanding. In this work, we present a densely annotated dataset ADE20K, which spans diverse annotations of scenes, objects, parts of objects, and in some cases even parts of parts. Totally there are 25k images of the complex everyday scenes containing a variety of objects in their natural spatial context. On average there are 19.5 instances and 10.5 object classes per image. Based on ADE20K, we construct benchmarks for scene parsing and instance segmentation. We provide baseline performances on both of the benchmarks and re-implement state-of-the-art models for open source. We further evaluate the effect of synchronized batch normalization and find that a reasonably large batch size is crucial for the semantic segmentation performance. We show that the networks trained on ADE20K are able to segment a wide variety of scenes and objects.
Sync-DRAW: Automatic Video Generation using Deep Recurrent Attentive Architectures. This paper introduces a novel approach for generating videos called Synchronized Deep Recurrent Attentive Writer (Sync-DRAW). Sync-DRAW can also perform text-to-video generation which, to the best of our knowledge, makes it the first approach of its kind. It combines a Variational Autoencoder(VAE) with a Recurrent Attention Mechanism in a novel manner to create a temporally dependent sequence of frames that are gradually formed over time. The recurrent attention mechanism in Sync-DRAW attends to each individual frame of the video in sychronization, while the VAE learns a latent distribution for the entire video at the global level. Our experiments with Bouncing MNIST, KTH and UCF-101 suggest that Sync-DRAW is efficient in learning the spatial and temporal information of the videos and generates frames with high structural integrity, and can generate videos from simple captions on these datasets.
Dynamic Facial Expression Generation on Hilbert Hypersphere With Conditional Wasserstein Generative Adversarial Nets In this work, we propose a novel approach for generating videos of the six basic facial expressions given a neutral face image. We propose to exploit the face geometry by modeling the facial landmarks motion as curves encoded as points on a hypersphere. By proposing a conditional version of manifold-valued Wasserstein generative adversarial network (GAN) for motion generation on the hypersphere, w...
Cross-MPI: Cross-scale Stereo for Image Super-Resolution using Multiplane Images Various combinations of cameras enrich computational photography, among which reference-based superresolution (RefSR) plays a critical role in multiscale imaging systems. However, existing RefSR approaches fail to accomplish high-fidelity super-resolution under a large resolution gap, e.g., 8x upscaling, due to the lower consideration of the underlying scene structure. In this paper, we aim to solve the RefSR problem in actual multiscale camera systems inspired by multiplane image (MPI) representation. Specifically, we propose Cross-MPI, an end-to-end RefSR network composed of a novel plane-aware attention-based MPI mechanism, a multiscale guided upsampling module as well as a super-resolution (SR) synthesis and fusion module. Instead of using a direct and exhaustive matching between the cross-scale stereo, the proposed plane-aware attention mechanism fully utilizes the concealed scene structure for efficient attention-based correspondence searching. Further combined with a gentle coarse-to-fine guided upsampling strategy, the proposed Cross-MPI can achieve a robust and accurate detail transmission. Experimental results on both digitally synthesized and optical zoom cross-scale data show that the Cross-MPI framework can achieve superior performance against the existing RefSR methods and is a real fit for actual multiscale camera systems even with large-scale differences.
End-To-End Time-Lapse Video Synthesis From A Single Outdoor Image Time-lapse videos usually contain visually appealing content but are often difficult and costly to create. In this paper, we present an end-to-end solution to synthesize a time-lapse video from a single outdoor image using deep neural networks. Our key idea is to train a conditional generative adversarial network based on existing datasets of time-lapse videos and image sequences. We propose a multi-frame joint conditional generation framework to effectively learn the correlation between the illumination change of an outdoor scene and the time of the day. We further present a multi-domain training scheme for robust training of our generative models from two datasets with different distributions and missing timestamp labels. Compared to alternative time-lapse video synthesis algorithms, our method uses the timestamp as the control variable and does not require a reference video to guide the synthesis of the final output. We conduct ablation studies to validate our algorithm and compare with state-of-the-art techniques both qualitatively and quantitatively.
Sequence to Sequence Learning with Neural Networks. Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.
A General Equilibrium Model for Industries with Price and Service Competition This paper develops a stochastic general equilibrium inventory model for an oligopoly, in which all inventory constraint parameters are endogenously determined. We propose several systems of demand processes whose distributions are functions of all retailers' prices and all retailers' service levels. We proceed with the investigation of the equilibrium behavior of infinite-horizon models for industries facing this type of generalized competition, under demand uncertainty. We systematically consider the following three competition scenarios. (1) Price competition only: Here, we assume that the firms' service levels are exogenously chosen, but characterize how the price and inventory strategy equilibrium vary with the chosen service levels. (2) Simultaneous price and service-level competition: Here, each of the firms simultaneously chooses a service level and a combined price and inventory strategy. (3) Two-stage competition: The firms make their competitive choices sequentially. In a first stage, all firms simultaneously choose a service level; in a second stage, the firms simultaneously choose a combined pricing and inventory strategy with full knowledge of the service levels selected by all competitors. We show that in all of the above settings a Nash equilibrium of infinite-horizon stationary strategies exists and that it is of a simple structure, provided a Nash equilibrium exists in a so-called reduced game. We pay particular attention to the question of whether a firm can choose its service level on the basis of its own (input) characteristics (i.e., its cost parameters and demand function) only. We also investigate under which of the demand models a firm, under simultaneous competition, responds to a change in the exogenously specified characteristics of the various competitors by either: (i) adjusting its service level and price in the same direction, thereby compensating for price increases (decreases) by offering improved (inferior) service, or (ii) adjusting them in opposite directions, thereby simultaneously offering better or worse prices and service.
Mobile cloud computing: A survey Despite increasing usage of mobile computing, exploiting its full potential is difficult due to its inherent problems such as resource scarcity, frequent disconnections, and mobility. Mobile cloud computing can address these problems by executing mobile applications on resource providers external to the mobile device. In this paper, we provide an extensive survey of mobile cloud computing research, while highlighting the specific concerns in mobile cloud computing. We present a taxonomy based on the key issues in this area, and discuss the different approaches taken to tackle these issues. We conclude the paper with a critical analysis of challenges that have not yet been fully met, and highlight directions for future work.
Eye-vergence visual servoing enhancing Lyapunov-stable trackability Visual servoing methods for the hand-eye configuration are vulnerable to the hand's dynamical oscillation, since nonlinear dynamical effects of the whole manipulator work against stable tracking ability (trackability). Our proposal to solve this problem is to decouple the controller for visual servoing of the hand from the one for eye-vergence, with the trackability verified by Lyapunov analysis. The effectiveness of the decoupled hand and eye-vergence visual servoing method is then evaluated through simulations incorporating the actual dynamics of a 7-DoF robot with an additional 3-DoF eye-vergence mechanism, using amplitude and phase frequency analysis.
An improved E-DRM scheme for mobile environments. With the rapid development of information science and network technology, the Internet has become an important platform for the dissemination of digital content, which can be easily copied and distributed through the Internet. Although convenience is increased, this causes significant damage to authors of digital content. A digital rights management system (DRM system) is an access control system that is designed to protect digital content and prevent illegal users from maliciously spreading digital content. An Enterprise Digital Rights Management system (E-DRM system) is a DRM system that prevents unauthorized users from stealing the enterprise's confidential data. User authentication is the most important method to ensure digital rights management. In order to verify the validity of a user, biometrics-based authentication protocols are widely used because the biometric characteristics of each user are unique. By using biometric identification, the correctness of the user's identity can be ensured. In addition, due to the popularity of mobile devices and the Internet, users can access digital content and network information at anytime and anywhere. Recently, Mishra et al. proposed an anonymous and secure biometric-based enterprise digital rights management system for mobile environments. Although biometrics-based authentication is used to prevent user identities from being forged, the anonymity of users and the preservation of digital content are not ensured in their proposed system. Therefore, in this paper, we propose a more efficient and secure biometric-based enterprise digital rights management system with user anonymity for mobile environments.
Intention-detection strategies for upper limb exosuits: model-based myoelectric vs dynamic-based control The cognitive human-robot interaction between an exosuit and its wearer plays a key role in determining both the biomechanical effects of the device on movements and its perceived effectiveness. There is a lack of evidence, however, on the comparative performance of different control methods implemented on the same device. Here, we compare two control approaches on the same robotic suit: a model-based myoelectric control (myoprocessor), which estimates the joint torque from the activation of target muscles, and a dynamic-based control that provides support against gravity using an inverse dynamic model. Tested on a cohort of four healthy participants, assistance from the exosuit results in a marked reduction in the effort of muscles working against gravity with both control approaches (peak reduction of 68.6±18.8% for the dynamic arm model and 62.4±25.1% for the myoprocessor), when compared to an unpowered condition. Neither of the two controllers had an effect on the performance of their users in a joint-angle tracking task (peak errors of 15.4° and 16.4° for the dynamic arm model and myoprocessor, respectively, compared to 13.1° in the unpowered condition). However, our results highlight the remarkable ability of the myoprocessor to seamlessly adapt to changing external dynamics.
Computation Rate Maximization for Wireless Powered Mobile-Edge Computing with Binary Computation Offloading. Finite battery lifetime and low computing capability of size-constrained wireless devices (WDs) have been longstanding performance limitations of many low-power wireless networks, e.g., wireless sensor networks and Internet of Things. The recent development of radio frequency-based wireless power transfer (WPT) and mobile edge computing (MEC) technologies provide a promising solution to fully remo...
BeCome: Blockchain-Enabled Computation Offloading for IoT in Mobile Edge Computing Benefiting from the real-time processing ability of edge computing, computing tasks requested by smart devices in the Internet of Things are offloaded to edge computing devices (ECDs) for implementation. However, ECDs are often overloaded or underloaded with disproportionate resource requests. In addition, during the process of task offloading, the transmitted information is vulnerable, which can result in data incompleteness. In view of this challenge, a blockchain-enabled computation offloading method, named BeCome, is proposed in this article. Blockchain technology is employed in edge computing to ensure data integrity. Then, the nondominated sorting genetic algorithm III is adopted to generate strategies for balanced resource allocation. Furthermore, simple additive weighting and multicriteria decision making are utilized to identify the optimal offloading strategy. Finally, performance evaluations of BeCome are given through simulation experiments.
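The simple additive weighting (SAW) step that BeCome uses to pick the final offloading strategy can be illustrated with a short sketch. The criteria values and weights below are hypothetical; in the paper the candidates come from the NSGA-III Pareto front, and cost-type criteria would be normalised inversely.

```python
def saw_rank(candidates, weights):
    """Simple additive weighting sketch: normalise each benefit-type
    criterion to [0, 1] and pick the candidate with the highest
    weighted sum. Each candidate is a tuple of criterion values."""
    n_crit = len(weights)
    lo = [min(c[k] for c in candidates) for k in range(n_crit)]
    hi = [max(c[k] for c in candidates) for k in range(n_crit)]

    def score(c):
        s = 0.0
        for k in range(n_crit):
            span = hi[k] - lo[k]
            norm = (c[k] - lo[k]) / span if span else 1.0
            s += weights[k] * norm
        return s

    return max(candidates, key=score)
```

For example, with two criteria weighted 0.8/0.2, `saw_rank([(1, 5), (3, 3), (5, 1)], (0.8, 0.2))` selects the strategy that maximises the heavily weighted first criterion.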
Mobile cloud computing [Guest Editorial] Mobile cloud computing refers to an infrastructure where both data storage and data processing occur outside of the mobile device. Mobile cloud applications move the computing power and data storage away from mobile devices and into the cloud, bringing applications and mobile computing not only to smartphone users but also to a much broader range of mobile subscribers.
Multiuser Joint Task Offloading and Resource Optimization in Proximate Clouds Proximate cloud computing enables computationally intensive applications on mobile devices, providing a rich user experience. However, remote resource bottlenecks limit the scalability of offloading, requiring optimization of the offloading decision and resource utilization. To this end, in this paper, we leverage the variability in capabilities of mobile devices and user preferences. Our system u...
Deep Reinforcement Learning for Collaborative Edge Computing in Vehicular Networks Mobile edge computing (MEC) is a promising technology to support mission-critical vehicular applications, such as intelligent path planning and safety applications. In this paper, a collaborative edge computing framework is developed to reduce the computing service latency and improve service reliability for vehicular networks. First, a task partition and scheduling algorithm (TPSA) is proposed to decide the workload allocation and schedule the execution order of the tasks offloaded to the edge servers given a computation offloading strategy. Second, an artificial intelligence (AI) based collaborative computing approach is developed to determine the task offloading, computing, and result delivery policy for vehicles. Specifically, the offloading and computing problem is formulated as a Markov decision process. A deep reinforcement learning technique, i.e., deep deterministic policy gradient, is adopted to find the optimal solution in a complex urban transportation network. By our approach, the service cost, which includes computing service latency and service failure penalty, can be minimized via the optimal workload assignment and server selection in collaborative computing. Simulation results show that the proposed AI-based collaborative computing approach can adapt to a highly dynamic environment with outstanding performance.
Latency-Aware Application Module Management for Fog Computing Environments. The fog computing paradigm has drawn significant research interest as it focuses on bringing cloud-based services closer to Internet of Things (IoT) users in an efficient and timely manner. Most of the physical devices in the fog computing environment, commonly named fog nodes, are geographically distributed, resource constrained, and heterogeneous. To fully leverage the capabilities of the fog nodes, large-scale applications that are decomposed into interdependent Application Modules can be deployed in an orderly way over the nodes based on their latency sensitivity. In this article, we propose a latency-aware Application Module management policy for the fog environment that meets the diverse service delivery latency and amount of data signals to be processed in per unit of time for different applications. The policy aims to ensure applications’ Quality of Service (QoS) in satisfying service delivery deadlines and to optimize resource usage in the fog environment. We model and evaluate our proposed policy in an iFogSim-simulated fog environment. Results of the simulation studies demonstrate significant improvement in performance over alternative latency-aware strategies.
Efficient Multi-User Computation Offloading for Mobile-Edge Cloud Computing Mobile-edge cloud computing is a new paradigm to provide cloud computing capabilities at the edge of pervasive radio access networks in close proximity to mobile users. In this paper, we first study the multi-user computation offloading problem for mobile-edge cloud computing in a multi-channel wireless interference environment. We show that it is NP-hard to compute a centralized optimal solution, and hence adopt a game theoretic approach for achieving efficient computation offloading in a distributed manner. We formulate the distributed computation offloading decision making problem among mobile device users as a multi-user computation offloading game. We analyze the structural property of the game and show that the game admits a Nash equilibrium and possesses the finite improvement property. We then design a distributed computation offloading algorithm that can achieve a Nash equilibrium, derive the upper bound of the convergence time, and quantify its efficiency ratio over the centralized optimal solutions in terms of two important performance metrics. We further extend our study to the scenario of multi-user computation offloading in the multi-channel wireless contention environment. Numerical results corroborate that the proposed algorithm can achieve superior computation offloading performance and scale well as the user size increases.
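The finite improvement property the authors rely on can be illustrated with a minimal best-response loop. The linear congestion cost below is a hypothetical stand-in for the paper's wireless interference model: each user either computes locally at a fixed cost or offloads on one of K channels, where a channel's cost grows with the number of users sharing it.

```python
def best_response_offloading(local_cost, channel_base, n_channels, n_users,
                             max_rounds=100):
    """Each user picks local execution (choice -1) or a channel 0..K-1.
    Users take turns playing a best response; the loop stops when no
    user can improve unilaterally, i.e. at a Nash equilibrium."""
    choice = [-1] * n_users  # start with everyone computing locally
    for _ in range(max_rounds):
        changed = False
        for u in range(n_users):
            # count how many *other* users occupy each channel
            load = [0] * n_channels
            for v, c in enumerate(choice):
                if v != u and c >= 0:
                    load[c] += 1
            # compare local execution against every channel
            best, best_cost = -1, local_cost[u]
            for k in range(n_channels):
                cost = channel_base[u] * (load[k] + 1)  # congestion cost
                if cost < best_cost:
                    best, best_cost = k, cost
            if best != choice[u]:
                choice[u] = best
                changed = True
        if not changed:
            return choice  # equilibrium reached
    return choice
```

With three identical users, local cost 10, base channel cost 3 and two channels, the loop settles on a split where two users share one channel and one user takes the other.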
Crowd sensing of traffic anomalies based on human mobility and social media The advances in mobile computing and social networking services enable people to probe the dynamics of a city. In this paper, we address the problem of detecting and describing traffic anomalies using crowd sensing with two forms of data, human mobility and social media. Traffic anomalies are caused by accidents, control, protests, sport events, celebrations, disasters and other events. Unlike existing traffic-anomaly-detection methods, we identify anomalies according to drivers' routing behavior on an urban road network. Here, a detected anomaly is represented by a sub-graph of a road network where drivers' routing behaviors significantly differ from their original patterns. We then try to describe the detected anomaly by mining representative terms from the social media that people posted when the anomaly happened. The system for detecting such traffic anomalies can benefit both drivers and transportation authorities, e.g., by notifying drivers approaching an anomaly and suggesting alternative routes, as well as supporting traffic jam diagnosis and dispersal. We evaluate our system with a GPS trajectory dataset generated by over 30,000 taxicabs over a period of 3 months in Beijing, and a dataset of tweets collected from WeiBo, a Twitter-like social site in China. The results demonstrate the effectiveness and efficiency of our system.
Untangling Blockchain: A Data Processing View of Blockchain Systems. Blockchain technologies are gaining massive momentum in the last few years. Blockchains are distributed ledgers that enable parties who do not fully trust each other to maintain a set of global states. The parties agree on the existence, values, and histories of the states. As the technology landscape is expanding rapidly, it is both important and challenging to have a firm grasp of what the core ...
Multi-column Deep Neural Networks for Image Classification Traditional methods of computer vision and machine learning cannot match human performance on tasks such as the recognition of handwritten digits or traffic signs. Our biologically plausible deep artificial neural network architectures can. Small (often minimal) receptive fields of convolutional winner-take-all neurons yield large network depth, resulting in roughly as many sparsely connected neural layers as found in mammals between retina and visual cortex. Only winner neurons are trained. Several deep neural columns become experts on inputs preprocessed in different ways; their predictions are averaged. Graphics cards allow for fast training. On the very competitive MNIST handwriting benchmark, our method is the first to achieve near-human performance. On a traffic sign recognition benchmark it outperforms humans by a factor of two. We also improve the state-of-the-art on a plethora of common image classification benchmarks.
State resetting for bumpless switching in supervisory control In this paper the realization and implementation of a multi-controller scheme made of a finite set of linear single-input-single-output controllers, possibly having different state dimensions, is studied. The supervisory control framework is considered, namely a minimal parameter dependent realization of the set of controllers such that all controllers share the same state space is used. A specific state resetting strategy based on the behavioral approach to system theory is developed in order to master the transient upon controller switching.
NETWRAP: An NDN Based Real-Time Wireless Recharging Framework for Wireless Sensor Networks Using vehicles equipped with wireless energy transmission technology to recharge sensor nodes over the air is a game-changer for traditional wireless sensor networks. The recharging policy regarding when to recharge which sensor nodes critically impacts the network performance. So far only a few works have studied such a recharging policy for the case of a single vehicle. In this paper, we propose NETWRAP, an NDN based Real-Time Wireless Recharging Protocol for dynamic wireless recharging in sensor networks. The real-time recharging framework supports single or multiple mobile vehicles. Employing multiple mobile vehicles provides more scalability and robustness. To efficiently deliver sensor energy status information to vehicles in real time, we leverage concepts and mechanisms from named data networking (NDN) and design energy monitoring and reporting protocols. We derive theoretical results on the energy neutral condition and the minimum number of mobile vehicles required for perpetual network operations. We then study how to minimize the total traveling cost of vehicles while guaranteeing that all the sensor nodes can be recharged before their batteries deplete. We formulate the recharge optimization problem as a Multiple Traveling Salesman Problem with Deadlines (m-TSP with Deadlines), which is NP-hard. To accommodate the dynamic nature of node energy conditions with low overhead, we present an algorithm that selects the node with the minimum weighted sum of traveling time and residual lifetime. Our scheme not only improves network scalability but also ensures the perpetual operation of networks. Extensive simulation results demonstrate the effectiveness and efficiency of the proposed design. The results also validate the correctness of the theoretical analysis and show significant improvements that cut the number of nonfunctional nodes by half compared to the static scheme while maintaining the network overhead at the same level.
Multiple switching-time-dependent discretized Lyapunov functions/functionals methods for stability analysis of switched time-delay stochastic systems. This paper presents novel approaches for stability analysis of switched linear time-delay stochastic systems under dwell time constraint. Instead of using comparison principle, piecewise switching-time-dependent discretized Lyapunov functions/functionals are introduced to analyze the stability of switched stochastic systems with constant or time-varying delays. These Lyapunov functions/functionals are decreasing during the dwell time and non-increasing at switching instants, which lead to two mode-dependent dwell-time-based delay-independent stability criteria for the switched systems without restricting the stability of the subsystems. Comparison and numerical examples are provided to show the efficiency of the proposed results.
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
A lightweight soft exosuit for gait assistance In this paper we present a soft lower-extremity robotic exosuit intended to augment normal muscle function in healthy individuals. Compared to previous exoskeletons, the device is ultra-lightweight, resulting in low mechanical impedance and inertia. The exosuit has custom McKibben style pneumatic actuators that can assist the hip, knee and ankle. The actuators attach to the exosuit through a network of soft, inextensible webbing triangulated to attachment points utilizing a novel approach we call the virtual anchor technique. This approach is designed to transfer forces to locations on the body that can best accept load. Pneumatic actuation was chosen for this initial prototype because the McKibben actuators are soft and can be easily driven by an off-board compressor. The exosuit itself (human interface and actuators) has a mass of 3500 g and, with peripherals (excluding the air supply), 7144 g. To examine the exosuit's performance, a pilot study with one subject was performed, investigating the effect of the ankle plantar-flexion timing on the wearer's hip, knee and ankle joint kinematics and metabolic power when walking. Wearing the suit in a passive unpowered mode had little effect on hip, knee and ankle joint kinematics as compared to baseline walking when not wearing the suit. Engaging the actuators at the ankles at 30% of the gait cycle for 250 ms altered joint kinematics the least and also minimized metabolic power. The subject's average metabolic power was 386.7 W, almost identical to the average power when wearing no suit (381.8 W), and substantially less than walking with the unpowered suit (430.6 W). This preliminary work demonstrates that the exosuit can comfortably transmit joint torques to the user while not restricting mobility and that, with further optimization, it has the potential to reduce the wearer's metabolic cost during walking.
Exoskeletons for human power augmentation The first load-bearing and energetically autonomous exoskeleton, called the Berkeley Lower Extremity Exoskeleton (BLEEX) walks at the average speed of two miles per hour while carrying 75 pounds of load. The project, funded in 2000 by the Defense Advanced Research Project Agency (DARPA) tackled four fundamental technologies: the exoskeleton architectural design, a control algorithm, a body LAN to host the control algorithm, and an on-board power unit to power the actuators, sensors and the computers. This article gives an overview of the BLEEX project.
Sensing pressure distribution on a lower-limb exoskeleton physical human-machine interface. A sensory apparatus to monitor pressure distribution on the physical human-robot interface of lower-limb exoskeletons is presented. We propose a distributed measure of the interaction pressure over the whole contact area between the user and the machine as an alternative measurement method of human-robot interaction. To obtain this measure, an array of newly-developed soft silicone pressure sensors is inserted between the limb and the mechanical interface that connects the robot to the user, in direct contact with the wearer's skin. Compared to state-of-the-art measures, the advantage of this approach is that it allows for a distributed measure of the interaction pressure, which could be useful for the assessment of safety and comfort of human-robot interaction. This paper presents the new sensor and its characterization, and the development of an interaction measurement apparatus, which is applied to a lower-limb rehabilitation robot. The system is calibrated, and an example of its use during a prototypical gait training task is presented.
A soft wearable robotic device for active knee motions using flat pneumatic artificial muscles We present the design of a soft wearable robotic device composed of elastomeric artificial muscle actuators and soft fabric sleeves, for active assistance of knee motions. A key feature of the device is the two-dimensional design of the elastomer muscles that not only allows the compactness of the device, but also significantly simplifies the manufacturing process. In addition, the fabric sleeves make the device lightweight and easily wearable. The elastomer muscles were characterized and demonstrated an initial contraction force of approximately 38 N and a maximum contraction of 18 mm at 104 kPa input pressure. Four elastomer muscles were employed for assisted knee extension and flexion. The robotic device was tested on a 3D printed leg model with an articulated knee joint. Experiments were conducted to examine the relation between systematic change in air pressure and knee extension-flexion. The results showed maximum extension and flexion angles of 95° and 37°, respectively. However, these angles are highly dependent on underlying leg mechanics and positions. The device was also able to generate maximum extension and flexion forces of 3.5 N and 7 N, respectively.
Robotic Artificial Muscles: Current Progress and Future Perspectives Robotic artificial muscles are a subset of artificial muscles that are capable of producing biologically inspired motions useful for robot systems, i.e., large power-to-weight ratios, inherent compliance, and large range of motions. These actuators, ranging from shape memory alloys to dielectric elastomers, are increasingly popular for biomimetic robots as they may operate without using complex linkage designs or other cumbersome mechanisms. Recent achievements in fabrication, modeling, and control methods have significantly contributed to their potential utilization in a wide range of applications. However, no survey paper has gone into depth regarding considerations pertaining to their selection, design, and usage in generating biomimetic motions. In this paper, we discuss important characteristics and considerations in the selection, design, and implementation of various prominent and unique robotic artificial muscles for biomimetic robots, and provide perspectives on next-generation muscle-powered robots.
Improving the energy economy of human running with powered and unpowered ankle exoskeleton assistance. Exoskeletons that reduce energetic cost could make recreational running more enjoyable and improve running performance. Although there are many ways to assist runners, the best approaches remain unclear. In our study, we used a tethered ankle exoskeleton emulator to optimize both powered and spring-like exoskeleton characteristics while participants ran on a treadmill. We expected powered conditions to provide large improvements in energy economy and for spring-like patterns to provide smaller benefits achievable with simpler devices. We used human-in-the-loop optimization to attempt to identify the best exoskeleton characteristics for each device type and individual user, allowing for a well-controlled comparison. We found that optimized powered assistance improved energy economy by 24.7 +/- 6.9% compared with zero torque and 14.6 +/- 7.7% compared with running in normal shoes. Optimized powered torque patterns for individuals varied substantially, but all resulted in relatively high mechanical work input (0.36 +/- 0.09 joule kilogram(-1) per step) and late timing of peak torque (75.7 +/- 5.0% stance). Unexpectedly, spring-like assistance was ineffective, improving energy economy by only 2.1 +/- 2.4% compared with zero torque and increasing metabolic rate by 11.1 +/- 2.8% compared with control shoes. The energy savings we observed imply that running velocity could be increased by as much as 10% with no added effort for the user and could influence the design of future products.
Power assist method for HAL-3 using EMG-based feedback controller We have developed the exoskeletal robotics suite HAL (Hybrid Assistive Leg), which is integrated with the human body and provides suitable power assistance to the lower limbs of people with gait disorder. This study proposes methods of assist motion and assist torque to realize a power assist corresponding to the operator's intention. For assist motion, we adopted Phase Sequence control, which generates a series of assist motions by transiting between simple basic motions called Phases. We used a feedback controller to adjust the assist torque so as to maintain the myoelectricity signals generated during power-assisted walking. The experimental results showed effective power assist according to the operator's intention when using these control methods.
Simple model of human arm reachable workspace The paper introduces a simplified mathematical model of the human arm kinematics which is used to determine the workspace related to the reachability of the wrist. The model contains six revolute degrees of freedom, five in the shoulder complex and one in the elbow joint. It is not directly associated to the anatomical structure of the arm, but represents the spatial motion of two characteristic points, epicondylus lateralis and proc. styloideus. Use of this simplified model for the determination of reachable workspace offers several advantages versus direct measurement: (i) the workspace can be obtained in few minutes on a micro VAX II computer, (ii) patients with various injuries in various stages of recovery can be treated since only a few brief and simple measurements of the model's parameters are needed, and (iii) the calculated workspace includes complete information of the envelope, as well as inside characteristics
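The reachable-workspace computation described above can be sketched by Monte-Carlo sampling of joint angles. The arm model in the paper has six revolute DoF; the sketch below uses a planar two-link arm with hypothetical link lengths and joint limits to keep the illustration short.

```python
import math
import random

def sample_workspace(l1=0.3, l2=0.25, n=2000, seed=0):
    """Sample wrist positions of a planar two-link arm by drawing
    random joint angles within (hypothetical) limits. The set of
    returned points approximates the reachable workspace."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        q1 = rng.uniform(-math.pi, math.pi)  # shoulder rotation
        q2 = rng.uniform(0.0, 2.5)           # elbow flexion limit
        # forward kinematics of the wrist point
        x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
        y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
        pts.append((x, y))
    return pts
```

Every sampled point necessarily lies within the outer envelope of radius l1 + l2, and the envelope of the point cloud approximates the workspace boundary the paper computes analytically.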
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
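The two-direction training idea can be made concrete with a minimal scalar forward pass. The weights `w_f`, `w_b`, `u`, `v` below are hypothetical; the paper's networks are trained multi-unit models, while this sketch only shows how each output combines a forward (past-context) and a backward (future-context) hidden state.

```python
import math

def brnn_forward(xs, w_f=0.5, w_b=0.5, u=0.8, v=1.0):
    """Minimal scalar bidirectional RNN forward pass: one forward and
    one backward tanh hidden unit, summed to produce each output."""
    T = len(xs)
    h_f = [0.0] * T
    h_b = [0.0] * T
    # forward pass: positive time direction
    prev = 0.0
    for t in range(T):
        prev = math.tanh(w_f * xs[t] + u * prev)
        h_f[t] = prev
    # backward pass: negative time direction
    nxt = 0.0
    for t in reversed(range(T)):
        nxt = math.tanh(w_b * xs[t] + u * nxt)
        h_b[t] = nxt
    # each output sees both past (h_f) and future (h_b) context
    return [v * (h_f[t] + h_b[t]) for t in range(T)]
```

On an antisymmetric input such as `[1.0, 0.0, -1.0]` the symmetric weights make the outputs antisymmetric as well, which is a quick sanity check that both time directions contribute equally.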
Metaheuristics in combinatorial optimization: Overview and conceptual comparison The field of metaheuristics for the application to combinatorial optimization problems is a rapidly growing field of research. This is due to the importance of combinatorial optimization problems for the scientific as well as the industrial world. We give a survey of the nowadays most important metaheuristics from a conceptual point of view. We outline the different components and concepts that are used in the different metaheuristics in order to analyze their similarities and differences. Two very important concepts in metaheuristics are intensification and diversification. These are the two forces that largely determine the behavior of a metaheuristic. They are in some way contrary but also complementary to each other. We introduce a framework, that we call the I&D frame, in order to put different intensification and diversification components into relation with each other. Outlining the advantages and disadvantages of different metaheuristic approaches we conclude by pointing out the importance of hybridization of metaheuristics as well as the integration of metaheuristics and other methods for optimization.
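The intensification/diversification balance discussed above can be seen in a single metaheuristic. The simulated-annealing sketch below (hypothetical parameters, integer search space) intensifies by always accepting improvements and diversifies by accepting worse moves with probability exp(-delta/T); cooling gradually shifts the balance toward intensification.

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=10.0, cooling=0.95,
                        iters=500, seed=1):
    """Minimise `cost` from start point `x0`. `neighbour(x, rng)`
    proposes a random move; worse moves are accepted with the
    Metropolis probability exp(-delta / T)."""
    rng = random.Random(seed)
    x, t = x0, t0
    best, best_cost = x0, cost(x0)
    for _ in range(iters):
        y = neighbour(x, rng)
        delta = cost(y) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y  # accept: improvement, or a diversifying uphill move
            if cost(x) < best_cost:
                best, best_cost = x, cost(x)
        t *= cooling  # lower temperature -> stronger intensification
    return best
```

Minimising (x - 3)^2 over the integers with a ±1 neighbourhood, the search reliably moves the incumbent well below the starting cost.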
An efficient ear recognition technique invariant to illumination and pose This paper presents an efficient ear recognition technique which derives benefits from the local features of the ear and attempts to handle the problems due to pose, poor contrast, change in illumination and lack of registration. It uses (1) three image enhancement techniques in parallel to neutralize the effect of poor contrast, noise and illumination, and (2) a local feature extraction technique (SURF) on enhanced images to minimize the effect of pose variations and poor image registration. SURF feature extraction is carried out on the enhanced images to obtain three sets of local features, one for each enhanced image. Three nearest neighbor classifiers are trained on these three sets of features. Matching scores generated by all three classifiers are fused for the final decision. The technique has been evaluated on two public databases, namely the IIT Kanpur ear database and the University of Notre Dame ear database (Collection E). Experimental results confirm that the use of the proposed fusion significantly improves the recognition accuracy.
Solving the data sparsity problem in destination prediction Destination prediction is an essential task for many emerging location-based applications such as recommending sightseeing places and targeted advertising according to destinations. A common approach to destination prediction is to derive the probability of a location being the destination based on historical trajectories. However, almost all the existing techniques use various kinds of extra information such as road network, proprietary travel planner, statistics requested from government, and personal driving habits. Such extra information, in most circumstances, is unavailable or very costly to obtain. Thereby we approach the task of destination prediction by using only historical trajectory dataset. However, this approach encounters the \"data sparsity problem\", i.e., the available historical trajectories are far from enough to cover all possible query trajectories, which considerably limits the number of query trajectories that can obtain predicted destinations. We propose a novel method named Sub-Trajectory Synthesis (SubSyn) to address the data sparsity problem. SubSyn first decomposes historical trajectories into sub-trajectories comprising two adjacent locations, and then connects the sub-trajectories into \"synthesised\" trajectories. This process effectively expands the historical trajectory dataset to contain much more trajectories. Experiments based on real datasets show that SubSyn can predict destinations for up to ten times more query trajectories than a baseline prediction algorithm. Furthermore, the running time of the SubSyn-training algorithm is almost negligible for a large set of 1.9 million trajectories, and the SubSyn-prediction algorithm runs over two orders of magnitude faster than the baseline prediction algorithm constantly.
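The SubSyn idea of decomposing and reconnecting trajectories can be sketched in a few lines. The sketch below (hypothetical location labels, plain depth-first search) only shows the expansion effect: a destination becomes reachable through synthesised trajectories even when no single historical trajectory covers the full query route. The paper's actual method additionally assigns probabilities to the synthesised routes.

```python
from collections import defaultdict

def build_subtrajectory_graph(trajectories):
    """Decompose historical trajectories into sub-trajectories of two
    adjacent locations, stored as a successor graph."""
    graph = defaultdict(set)
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            graph[a].add(b)
    return graph

def synthesise(graph, start, end, max_len=6):
    """Connect sub-trajectories into 'synthesised' trajectories from
    start to end via depth-first search, avoiding repeated locations."""
    results, stack = [], [[start]]
    while stack:
        path = stack.pop()
        if path[-1] == end:
            results.append(path)
            continue
        if len(path) >= max_len:
            continue
        for nxt in graph[path[-1]]:
            if nxt not in path:
                stack.append(path + [nxt])
    return results
```

Given histories A→B→C and B→D, the synthesised trajectory A→B→D is produced even though no driver ever travelled from A to D, which is exactly how SubSyn mitigates data sparsity.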
A blind medical image watermarking: DWT-SVD based robust and secure approach for telemedicine applications. In this paper, a blind image watermarking scheme based on discrete wavelet transform (DWT) and singular value decomposition (SVD) is proposed. In this scheme, DWT is applied on the ROI (region of interest) of the medical image to get the different frequency subbands of its wavelet decomposition. On the low-frequency subband LL of the ROI, block-SVD is applied to get different singular matrices. A pair of elements with similar values is identified from the left singular value matrix of these selected blocks. The values of these pairs are modified using a certain threshold to embed a bit of watermark content. An appropriate threshold is chosen to achieve imperceptibility of the medical image and robustness of the watermark contents. For authentication and identification of the original medical image, one watermark image (logo) and one text watermark have been used. The watermark image provides authentication, whereas the text data represents the electronic patient record (EPR) for identification. At the receiving end, blind recovery of both watermark contents is performed by a comparison scheme similar to the one used during embedding. The proposed algorithm is applied to various groups of medical images such as X-ray, CT scan and mammography. This scheme offers better visibility of the watermarked image and recovery of the watermark content due to the DWT-SVD combination. Moreover, the use of a Hamming error correcting code (ECC) on the EPR text bits reduces the BER and thus provides better recovery of the EPR. The performance of the proposed algorithm with EPR data coded by the Hamming code is compared with the BCH error correcting code, and it is found that the latter performs better. A result analysis shows that the imperceptibility of the watermarked image is good, as the PSNR is above 43 dB and the WPSNR is above 52 dB for all sets of images. In addition, the robustness of the scheme is better than that of existing schemes for a similar set of medical images in terms of normalized correlation coefficient (NCC) and bit error rate (BER). An analysis is also carried out to verify the performance of the proposed scheme for different sizes of watermark contents (image and EPR data). It is observed from this analysis that the proposed scheme is also appropriate for watermarking of color images. Using the proposed scheme, watermark contents are extracted successfully under various noise attacks such as JPEG compression, filtering, Gaussian noise, salt and pepper noise, cropping and rotation. A performance comparison with existing schemes shows that the proposed scheme has better robustness against different types of attacks. Moreover, the proposed scheme is also robust under a set of benchmark attacks known as checkmark attacks.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
scores: 1.03772, 0.035642, 0.035642, 0.035642, 0.035642, 0.035642, 0.00893, 0, 0, 0, 0, 0, 0, 0
A comprehensive survey on trajectory schemes for data collection using mobile elements in WSNs Mobile elements trajectory optimization is one of the most important and efficient ways to enhance the performance of wireless sensor networks (WSNs). In the last 15 years, extensive research has been done in this area, but less effort has been devoted to providing a concise review of the broader area. This article surveys the role of mobile elements trajectory optimization in the performance improvement of WSNs. The complete survey has been done based on three major aspects: applications, trajectory techniques and domains used to formulate the trajectory. Under these three aspects, large numbers of schemes are discussed, along with their sub-aspects. A comparative analysis using eight important parameters, like trajectory pattern, number of mobile elements, speed, mobile element type, etc., is presented in a chronological fashion. The paper also points out the merits and demerits of each scheme described. Based on the current research, we have identified some research domains in this area that need more attention and further exploration.
On the History of the Minimum Spanning Tree Problem It is standard practice among authors discussing the minimum spanning tree problem to refer to the work of Kruskal(1956) and Prim (1957) as the sources of the problem and its first efficient solutions, despite the citation by both of Boruvka (1926) as a predecessor. In fact, there are several apparently independent sources and algorithmic solutions of the problem. They have appeared in Czechoslovakia, France, and Poland, going back to the beginning of this century. We shall explore and compare these works and their motivations, and relate them to the most recent advances on the minimum spanning tree problem.
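All the historical works compared in this survey compute the same object, the minimum spanning tree. Kruskal's 1956 greedy method, for instance, can be sketched in a few lines; this is a minimal modern illustration with union-find, not a reconstruction of any particular historical formulation.

```python
def kruskal(n, edges):
    """Kruskal's greedy MST: scan edges in order of increasing weight,
    keeping an edge only when it joins two different components.
    edges is a list of (weight, u, v) tuples over vertices 0..n-1."""
    parent = list(range(n))
    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst, total = [], 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                  # edge connects two components
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return total, mst
```

Borůvka's and Prim's methods differ in how they grow the components, but return the same total weight on graphs with distinct edge weights.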
Smart home energy management system using IEEE 802.15.4 and ZigBee Wireless personal area networks and wireless sensor networks are rapidly gaining popularity, and the IEEE 802.15 Wireless Personal Area Network Working Group has defined a number of different standards so as to cater to the requirements of different applications. The ubiquitous home network has gained widespread attention due to its seamless integration into everyday life. This innovative system transparently unifies various home appliances, smart sensors and energy technologies. The smart energy market requires two types of ZigBee networks, for device control and for energy management. Today, organizations use IEEE 802.15.4 and ZigBee to effectively deliver solutions for a variety of areas including consumer electronic device control, energy management and efficiency, home and commercial building automation, as well as industrial plant management. We present the design of a multi-sensing, heating and air-conditioning actuation application for home users: a sensor network-based smart light control system for smart home and energy control production. This paper designs smart home device descriptions and standard practices for demand-response and load-management "Smart Energy" applications needed in a smart-energy-based residential or light commercial environment. The control application domains included in this initial version are sensing device control, pricing, and demand-response and load-control applications. This paper introduces smart home interfaces and device definitions to allow interoperability among ZigBee devices produced by various manufacturers of electrical equipment, meters, and smart energy enabling products. We introduce the proposed home energy control system design that provides intelligent services for users, and we demonstrate its implementation using a real testbed.
Bee life-based multi constraints multicast routing optimization for vehicular ad hoc networks. A vehicular ad hoc network (VANET) is a subclass of mobile ad hoc networks, considered one of the most important approaches of intelligent transportation systems (ITS). It allows inter-vehicle communication in which vehicle movement is restricted by a VANET mobility model and supported by some roadside base stations as fixed infrastructure. Multicasting provides different traffic information to a limited number of vehicle drivers by a parallel transmission. However, it represents a very important challenge in the application of vehicular ad hoc networks, especially in the case of network scalability. In the applications of this sensitive field, it is essential to transmit correct data anywhere and at any time. Consequently, the VANET routing protocols should be adapted appropriately and effectively meet the quality of service (QoS) requirements in an optimized multicast routing. In this paper, we propose a novel bee colony optimization algorithm called the bees life algorithm (BLA), applied to solve the quality of service multicast routing problem (QoS-MRP) for vehicular ad hoc networks as an NP-complete problem with multiple constraints. It is a swarm-based algorithm that closely imitates the life of the colony. It follows the two important behaviors in the nature of bees, which are reproduction and food foraging. BLA is applied to solve QoS-MRP with four objectives, which are cost, delay, jitter, and bandwidth. It is also subject to three constraints, which are maximum allowed delay, maximum allowed jitter and minimum requested bandwidth. In order to evaluate the performance and effectiveness of this proposal, implemented in C++ and integrated at the routing protocol level, a simulation study has been performed using the network simulator (NS2) based on a mobility model of VANET. The comparisons of the experimental results show that the proposed algorithm efficiently outperforms the genetic algorithm (GA), the bees algorithm (BA) and the marriage in honey bees optimization (MBO) algorithm, state-of-the-art conventional metaheuristics applied to the QoS-MRP problem with the same simulation parameters.
On the Spatiotemporal Traffic Variation in Vehicle Mobility Modeling Several studies have shown the importance of realistic micromobility and macromobility modeling in vehicular ad hoc networks (VANETs). At the macroscopic level, most researchers focus on a detailed and accurate description of road topology. However, a key factor often overlooked is a spatiotemporal configuration of vehicular traffic. This factor greatly influences network topology and topology variations. Indeed, vehicle distribution has high spatial and temporal diversity that depends on the time of the day and place attraction. This diversity impacts the quality of radio links and, thus, network topology. In this paper, we propose a new mobility model for vehicular networks in urban and suburban environments. To reproduce realistic network topology and topological changes, the model uses real static and dynamic data on the environment. The data concern particularly the topographic and socioeconomic characteristics of infrastructures and the spatiotemporal population distribution. We validate our model by comparing the simulation results with real data derived from individual displacement survey. We also present statistics on network topology, which show the interest of taking into account the spatiotemporal mobility variation.
A bio-inspired clustering in mobile adhoc networks for internet of things based on honey bee and genetic algorithm In mobile ad hoc networks for the internet of things, the size of the routing table can be reduced with the help of a clustering structure. The dynamic nature of MANETs and their complexity make them a type of network with high topology changes. To reduce the topology maintenance overhead, a cluster-based structure may be used. Hence, it is highly desirable to design an algorithm that adapts quickly to topology dynamics and forms balanced and stable clusters. In this article, the clustering problem is formulated first. Then, an algorithm based on the honey bee algorithm, the genetic algorithm and tabu search (GBTC) for the internet of things is proposed. In this algorithm, an individual (bee) represents a possible clustering structure and its fitness is evaluated on the basis of its stability and load balancing. A method is presented that merges the properties of the honey bee and genetic algorithms to help the population cope with topology dynamics and produce high-quality solutions that are closely related to each other. The simulation results conducted for validation show that the proposed work forms balanced and stable clusters. The simulation results are compared with algorithms that do not consider the dynamic optimization requirements. GBTC outperforms existing algorithms in terms of network lifetime, clustering overhead, etc.
Exploitation whale optimization based optimal offloading approach and topology optimization in a mobile ad hoc cloud environment With the widespread availability of network technologies, mobile user requests increase day by day. The large amount of energy utilization and the question of resource sufficiency make it troublesome for cloud computing to reach its maximum exploration and exploitation capacity. In this paper, we propose the formation of a topology based on mobile user behavior and its optimization. Minimizing the response time and the energy consumption during the offloading process is the major goal of this paper. The topology nodes are formed via an improved TextRank algorithm (ITRA) and neural network (NN) classifiers with Euclidean distance. We introduce an effective optimization algorithm, the exploitation whale optimization algorithm (EWOA), which is a combination of differential evolution (DE) and the whale optimization algorithm (WOA). The offloading process of the proposed EWOA produces an optimal outcome with minimized energy consumption and response time. The implementation of the proposed EWOA is carried out on the VMware platform. The performance of the proposed method is evaluated using puzzles of different sizes, face detection applications, and state-of-the-art methods. Ultimately, our proposed method produces optimal accuracy and convergence speed with a minimized offloading process.
Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
Adaptive Learning-Based Task Offloading for Vehicular Edge Computing Systems. The vehicular edge computing system integrates the computing resources of vehicles, and provides computing services for other vehicles and pedestrians with task offloading. However, the vehicular task offloading environment is dynamic and uncertain, with fast varying network topologies, wireless channel states, and computing workloads. These uncertainties bring extra challenges to task offloading. In this paper, we consider the task offloading among vehicles, and propose a solution that enables vehicles to learn the offloading delay performance of their neighboring vehicles while offloading computation tasks. We design an adaptive learning based task offloading (ALTO) algorithm based on the multi-armed bandit theory, in order to minimize the average offloading delay. ALTO works in a distributed manner without requiring frequent state exchange, and is augmented with input-awareness and occurrence-awareness to adapt to the dynamic environment. The proposed algorithm is proved to have a sublinear learning regret. Extensive simulations are carried out under both synthetic scenario and realistic highway scenario, and results illustrate that the proposed algorithm achieves low delay performance, and decreases the average delay up to <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">$30\%$</tex-math></inline-formula> compared with the existing upper confidence bound based learning algorithm.
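The multi-armed bandit machinery behind ALTO can be illustrated with the classical UCB1 rule that the abstract compares against. This is a generic sketch, not the ALTO algorithm itself: `pull` is a hypothetical reward callback (for offloading delays the reward would be the negated delay), and the input-awareness and occurrence-awareness augmentations are omitted.

```python
import math

def ucb1(n_arms, pull, rounds):
    """UCB1 sketch: play each arm once, then repeatedly pick the arm
    maximising empirical mean reward plus the exploration bonus
    sqrt(2 ln t / n_i), where n_i is how often arm i was played."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, rounds + 1):
        if t <= n_arms:
            arm = t - 1                       # initial round-robin
        else:
            arm = max(range(n_arms),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = pull(arm)                    # e.g. -observed_delay
        counts[arm] += 1
        sums[arm] += reward
    return counts
```

The bonus term shrinks as an arm (here, a neighboring vehicle) is tried more often, so the learner concentrates on low-delay neighbors while still occasionally re-probing the others, which is what yields the sublinear regret mentioned in the abstract.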
Visual cryptography for general access structures A visual cryptography scheme for a set P of n participants is a method of encoding a secret image SI into n shadow images called shares, where each participant in P receives one share. Certain qualified subsets of participants can “visually” recover the secret image, but other, forbidden, sets of participants have no information (in an information-theoretic sense) on SI . A “visual” recovery for a set X ⊆ P consists of xeroxing the shares given to the participants in X onto transparencies, and then stacking them. The participants in a qualified set X will be able to see the secret image without any knowledge of cryptography and without performing any cryptographic computation. In this paper we propose two techniques for constructing visual cryptography schemes for general access structures. We analyze the structure of visual cryptography schemes and we prove bounds on the size of the shares distributed to the participants in the scheme. We provide a novel technique for realizing k out of n threshold visual cryptography schemes. Our construction for k out of n visual cryptography schemes is better with respect to pixel expansion than the one proposed by M. Naor and A. Shamir (Visual cryptography, in “Advances in Cryptology—Eurocrypt '94” CA. De Santis, Ed.), Lecture Notes in Computer Science, Vol. 950, pp. 1–12, Springer-Verlag, Berlin, 1995) and for the case of 2 out of n is the best possible. Finally, we consider graph-based access structures, i.e., access structures in which any qualified set of participants contains at least an edge of a given graph whose vertices represent the participants of the scheme.
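The classical 2-out-of-2 construction by Naor and Shamir, which this paper generalizes to arbitrary access structures, can be sketched on a one-dimensional pixel list. This is a toy illustration of the stacking principle only, not the authors' general scheme; pixel expansion here is 2 and contrast is 1/2.

```python
import random

# Each secret pixel expands to two subpixels per share (1 = black).
PATTERNS = [(0, 1), (1, 0)]

def make_shares(secret, rng=random):
    """secret is a list of bits, 0 = white, 1 = black. For a white
    pixel both shares get the same random pattern; for a black pixel
    they get complementary patterns."""
    share1, share2 = [], []
    for pixel in secret:
        p = rng.choice(PATTERNS)
        share1.append(p)
        share2.append(p if pixel == 0 else (1 - p[0], 1 - p[1]))
    return share1, share2

def stack(share1, share2):
    """Xeroxing onto transparencies and stacking = subpixel-wise OR:
    white pixels come out half-black, black pixels fully black."""
    return [(a[0] | b[0], a[1] | b[1]) for a, b in zip(share1, share2)]
```

Each share in isolation shows exactly one black subpixel per pixel regardless of the secret, which is the information-theoretic security property; only stacking a qualified set reveals the image.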
Secure and privacy preserving keyword searching for cloud storage services Cloud storage services enable users to remotely access data in a cloud anytime and anywhere, using any device, in a pay-as-you-go manner. Moving data into a cloud offers great convenience to users since they do not have to care about the large capital investment in both the deployment and management of the hardware infrastructures. However, allowing a cloud service provider (CSP), whose purpose is mainly for making a profit, to take the custody of sensitive data, raises underlying security and privacy issues. To keep user data confidential against an untrusted CSP, a natural way is to apply cryptographic approaches, by disclosing the data decryption key only to authorized users. However, when a user wants to retrieve files containing certain keywords using a thin client, the adopted encryption system should not only support keyword searching over encrypted data, but also provide high performance. In this paper, we investigate the characteristics of cloud storage services and propose a secure and privacy preserving keyword searching (SPKS) scheme, which allows the CSP to participate in the decipherment, and to return only files containing certain keywords specified by the users, so as to reduce both the computational and communication overhead in decryption for users, on the condition of preserving user data privacy and user querying privacy. Performance analysis shows that the SPKS scheme is applicable to a cloud environment.
Stable fuzzy logic control of a general class of chaotic systems This paper proposes a new approach to the stable design of fuzzy logic control systems that deal with a general class of chaotic processes. The stable design is carried out on the basis of a stability analysis theorem, which employs Lyapunov's direct method and the separate stability analysis of each rule in the fuzzy logic controller (FLC). The stability analysis theorem offers sufficient conditions for the stability of a general class of chaotic processes controlled by Takagi---Sugeno---Kang FLCs. The approach suggested in this paper is advantageous because inserting a new rule requires the fulfillment of only one of the conditions of the stability analysis theorem. Two case studies concerning the fuzzy logic control of representative chaotic systems that belong to the general class of chaotic systems are included in order to illustrate our stable design approach. A set of simulation results is given to validate the theoretical results.
Adversarial Example Generation with Syntactically Controlled Paraphrase Networks. We propose syntactically controlled paraphrase networks (SCPNs) and use them to generate adversarial examples. Given a sentence and a target syntactic form (e.g., a constituency parse), SCPNs are trained to produce a paraphrase of the sentence with the desired syntax. We show it is possible to create training data for this task by first doing backtranslation at a very large scale, and then using a parser to label the syntactic transformations that naturally occur during this process. Such data allows us to train a neural encoder-decoder model with extra inputs to specify the target syntax. A combination of automated and human evaluations show that SCPNs generate paraphrases that follow their target specifications without decreasing paraphrase quality when compared to baseline (uncontrolled) paraphrase systems. Furthermore, they are more capable of generating syntactically adversarial examples that both (1) fool pretrained models and (2) improve the robustness of these models to syntactic variation when used to augment their training data.
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices with good potential in walking rehabilitation and augmentation. While a few studies have focused on the structure design and assistance force optimization of soft LLEs, little work has been conducted on the hardware circuit design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and to introduce its hardware circuit design. A soft LLE for hip flexion assistance and a hardware circuit system with scalability are proposed. To assess the efficacy of the soft LLE, experimental tests evaluating the sensor data acquisition, force tracking performance, lower limb muscle activity and metabolic cost were conducted. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
scores: 1.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0, 0, 0, 0, 0, 0, 0
A Passive Blind Approach for Image Splicing Detection Based on DWT and LBP Histograms. Splicing is the most generic kind of forgery found in digital images. Blind detection of such operations has become significant in determining the integrity of digital content. The current paper proposes a passive-blind technique for detecting image splicing using the Discrete Wavelet Transform and histograms of Local Binary Patterns (LBP). The splicing operation introduces sharp transitions in the form of lines, edges and corners, which are represented by high frequency components. Wavelet analysis characterizes these short-time transients by measuring local sharpness or smoothness from wavelet coefficients. After first-level wavelet decomposition of the image, texture variation is studied along the detail and approximation coefficients using the local binary pattern (LBP), since tampering operations disrupt the textural microstructure of an image. The feature vector is formed by concatenating the LBP histograms from the four wavelet subbands. The classification accuracy of the algorithm was determined using an SVM classifier with 10-fold cross-validation. The method gives maximum accuracy for the chrominance channel of the YCbCr color space, which is weak at hiding tampering traces. It is tested on four different kinds of standard spliced image datasets and its performance is compared with some of the latest methods. The method offers accuracy up to 97 % for JPEG images present in the spliced image dataset.
An Effective Method for Detecting Double JPEG Compression With the Same Quantization Matrix Detection of double JPEG compression plays an important role in digital image forensics. Some successful approaches have been proposed to detect double JPEG compression when the primary and secondary compressions have different quantization matrices. However, detecting double JPEG compression with the same quantization matrix is still a challenging problem. In this paper, an effective error-based statistical feature extraction scheme is presented to solve this problem. First, a given JPEG file is decompressed to form a reconstructed image. An error image is obtained by computing the differences between the inverse discrete cosine transform coefficients and pixel values in the reconstructed image. Two classes of blocks in the error image, namely, rounding error block and truncation error block, are analyzed. Then, a set of features is proposed to characterize the statistical differences of the error blocks between single and double JPEG compressions. Finally, the support vector machine classifier is employed to identify whether a given JPEG image is doubly compressed or not. Experimental results on three image databases with various quality factors have demonstrated that the proposed method can significantly outperform the state-of-the-art method.
Combining spatial and DCT based Markov features for enhanced blind detection of image splicing. Nowadays, it is extremely simple to manipulate the content of digital images without leaving perceptual clues due to the availability of powerful image editing tools. Image tampering can easily devastate the credibility of images as a medium for personal authentication and a record of events. With the daily upload of millions of pictures to the Internet and the move towards paperless workplaces and e-government services, it becomes essential to develop automatic tampering detection techniques with reliable results. This paper proposes an enhanced technique for blind detection of image splicing. It extracts and combines Markov features in spatial and Discrete Cosine Transform domains to detect the artifacts introduced by the tampering operation. To reduce the computational complexity due to high dimensionality, Principal Component Analysis is used to select the most relevant features. Then, an optimized support vector machine with radial-basis function kernel is built to classify the image as being tampered or authentic. The proposed technique is evaluated on a publicly available image splicing dataset using cross validation. The results showed that the proposed technique outperforms the state-of-the-art splicing detection methods.
Exposing Splicing Forgery in Realistic Scenes Using Deep Fusion Network Creating fake pictures becomes more accessible than ever, but tampered images are more harmful because the Internet propagates misleading information so rapidly. Reliable digital forensic tools are therefore strongly needed. Traditional methods based on hand-crafted features are only useful when tampered images meet specific requirements, and the low detection accuracy prevents them from using in realistic scenes. Recently proposed learning-based methods improve the accuracy, but neural networks usually require to be trained on large labeled databases. This is because commonly used deep and narrow neural networks extract high-level visual features and neglect low-level features where there are abundant forensic cues. To solve the problem, we propose a novel neural network which concentrates on learning low-level forensic features and consequently can detect splicing forgery although the network is trained on a small automatically generated splicing dataset. Furthermore, our fusion network can be easily extended to support new forensic hypotheses without any changes in the network structure. The experimental results show that our method achieves state-of-the-art performance on several benchmark datasets and shows superior generalization capability: our fusion network can work very well even it never sees any pictures in test databases. Therefore, our method can detect splicing forgery in realistic scenes.
Image splicing detection based on convolutional neural network with weight combination strategy With the rapid development of splicing manipulation, more and more negative effects have been brought. Therefore, the demand for image splicing detection algorithms is growing dramatically. In this paper, a new image splicing detection method is proposed which is based on convolutional neural network (CNN) with weight combination strategy. In the proposed method, three types of features are selected to distinguish splicing manipulation including YCbCr features, edge features and photo response non-uniformity (PRNU) features, which are combined according to weight by the combination strategy. Different from the other methods, these weight parameters are automatically adjusted during the CNN training process, until the best ratio is obtained. Experiments show that the proposed method has higher accuracy than the other methods using CNN, and the depth of the CNN in the method proposed is much less than the compared methods.
Adversarial Learning for Constrained Image Splicing Detection and Localization based on Atrous Convolution Constrained image splicing detection and localization (CISDL), which investigates two input suspected images and identifies whether one image has suspected regions pasted from the other, is a newly proposed challenging task for image forensics. In this paper, we propose a novel adversarial learning framework to learn a deep matching network for CISDL. Our framework mainly consists of three building blocks. First, a deep matching network based on atrous convolution (DMAC) aims to generate two high-quality candidate masks, which indicate suspected regions of the two input images. In DMAC, atrous convolution is adopted to extract features with rich spatial information, a correlation layer based on a skip architecture is proposed to capture hierarchical features, and atrous spatial pyramid pooling is constructed to localize tampered regions at multiple scales. Second, a detection network is designed to rectify inconsistencies between the two corresponding candidate masks. Finally, a discriminative network drives the DMAC network to produce masks that are hard to distinguish from ground-truth ones. The detection network and the discriminative network collaboratively supervise the training of DMAC in an adversarial way. Besides, a sliding window-based matching strategy is investigated for high-resolution images matching. Extensive experiments, conducted on five groups of datasets, demonstrate the effectiveness of the proposed framework and the superior performance of DMAC.
Training Strategies and Data Augmentations in CNN-based DeepFake Video Detection The fast and continuous growth in number and quality of deepfake videos calls for the development of reliable detection systems capable of automatically warning users on social media and on the Internet about the potential untruthfulness of such contents. While algorithms, software, and smartphone apps are getting better every day in generating manipulated videos and swapping faces, the accuracy of automated systems for face forgery detection in videos is still quite limited and generally biased toward the dataset used to design and train a specific detection system. In this paper we analyze how different training strategies and data augmentation techniques affect CNN-based deepfake detectors when training and testing on the same dataset or across different datasets.
Probabilistic encryption A new probabilistic model of data encryption is introduced. For this model, under suitable complexity assumptions, it is proved that extracting any information about the cleartext from the ciphertext is hard on the average for an adversary with polynomially bounded computational resources. The proof holds for any message space with any probability distribution. The first implementation of this model is presented. The security of this implementation is proved under the intractability assumption of deciding Quadratic Residuosity modulo composite numbers whose factorization is unknown.
A Privacy-Preserving and Copy-Deterrence Content-Based Image Retrieval Scheme in Cloud Computing. With the increasing importance of images in people’s daily life, content-based image retrieval (CBIR) has been widely studied. Compared with text documents, images consume much more storage space. Hence, its maintenance is considered to be a typical example for cloud storage outsourcing. For privacy-preserving purposes, sensitive images, such as medical and personal images, need to be encrypted before outsourcing, which makes the CBIR technologies in plaintext domain to be unusable. In this paper, we propose a scheme that supports CBIR over encrypted images without leaking the sensitive information to the cloud server. First, feature vectors are extracted to represent the corresponding images. After that, the pre-filter tables are constructed by locality-sensitive hashing to increase search efficiency. Moreover, the feature vectors are protected by the secure kNN algorithm, and image pixels are encrypted by a standard stream cipher. In addition, considering the case that the authorized query users may illegally copy and distribute the retrieved images to someone unauthorized, we propose a watermark-based protocol to deter such illegal distributions. In our watermark-based protocol, a unique watermark is directly embedded into the encrypted images by the cloud server before images are sent to the query user. Hence, when image copy is found, the unlawful query user who distributed the image can be traced by the watermark extraction. The security analysis and the experiments show the security and efficiency of the proposed scheme.
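The pre-filter tables built by locality-sensitive hashing can be illustrated with the random-hyperplane LSH family. This is one common choice sketched for intuition only: the abstract does not specify which LSH family the scheme uses, and all names and parameters here are hypothetical.

```python
import random

def lsh_signature(vec, planes):
    """Random-hyperplane LSH: each plane contributes one sign bit of
    the dot product, so nearby feature vectors share a signature (and
    hence a pre-filter bucket) with high probability."""
    bits = 0
    for plane in planes:
        dot = sum(p * v for p, v in zip(plane, vec))
        bits = (bits << 1) | (1 if dot >= 0 else 0)
    return bits

def build_prefilter(features, n_planes=8, dim=4, seed=1):
    """Hash every image's feature vector into a bucket table; a query
    then only compares against vectors in its own bucket."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]
    table = {}
    for img_id, vec in features.items():
        table.setdefault(lsh_signature(vec, planes), []).append(img_id)
    return table, planes
```

The efficiency gain comes from only running the (more expensive) secure kNN comparison on the candidates surviving this pre-filter, rather than on the whole encrypted collection.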
Reciprocal N-body Collision Avoidance In this paper, we present a formal approach to reciprocal n-body collision avoidance, where multiple mobile robots need to avoid collisions with each other while moving in a common workspace. In our formulation, each robot acts fully independently, and does not communicate with other robots. Based on the definition of velocity obstacles (5), we derive sufficient conditions for collision-free motion by reducing the problem to solving a low-dimensional linear program. We test our approach on several dense and complex simulation scenarios involving thousands of robots and compute collision-free actions for all of them in only a few milliseconds. To the best of our knowledge, this method is the first that can guarantee local collision-free motion for a large number of robots in a cluttered workspace.
Secure and privacy preserving keyword searching for cloud storage services Cloud storage services enable users to remotely access data in a cloud anytime and anywhere, using any device, in a pay-as-you-go manner. Moving data into a cloud offers great convenience to users since they do not have to care about the large capital investment in both the deployment and management of the hardware infrastructures. However, allowing a cloud service provider (CSP), whose purpose is mainly for making a profit, to take the custody of sensitive data, raises underlying security and privacy issues. To keep user data confidential against an untrusted CSP, a natural way is to apply cryptographic approaches, by disclosing the data decryption key only to authorized users. However, when a user wants to retrieve files containing certain keywords using a thin client, the adopted encryption system should not only support keyword searching over encrypted data, but also provide high performance. In this paper, we investigate the characteristics of cloud storage services and propose a secure and privacy preserving keyword searching (SPKS) scheme, which allows the CSP to participate in the decipherment, and to return only files containing certain keywords specified by the users, so as to reduce both the computational and communication overhead in decryption for users, on the condition of preserving user data privacy and user querying privacy. Performance analysis shows that the SPKS scheme is applicable to a cloud environment.
Collaborative Mobile Charging The limited battery capacity of sensor nodes has become one of the most critical impediments that stunt the deployment of wireless sensor networks (WSNs). Recent breakthroughs in wireless energy transfer and rechargeable lithium batteries provide a promising alternative to power WSNs: mobile vehicles/robots carrying high volume batteries serve as mobile chargers to periodically deliver energy to sensor nodes. In this paper, we consider how to schedule multiple mobile chargers to optimize energy usage effectiveness, such that every sensor will not run out of energy. We introduce a novel charging paradigm, collaborative mobile charging, where mobile chargers are allowed to intentionally transfer energy between themselves. To provide some intuitive insights into the problem structure, we first consider a scenario that satisfies three conditions, and propose a scheduling algorithm, PushWait, which is proven to be optimal and can cover a one-dimensional WSN of infinite length. Then, we remove the conditions one by one, investigating chargers' scheduling in a series of scenarios ranging from the most restricted one to a general 2D WSN. Through theoretical analysis and simulations, we demonstrate the advantages of the proposed algorithms in energy usage effectiveness and charging coverage.
Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network. In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (including data from 63 patients with 34,281 events) and testing (including data from 19 patients with 8571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F-score of 0.99 were attained with the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
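As a minimal illustration of the three layer types the abstract names (1D convolution, ReLU activation, max pooling), here are pure-Python versions; the actual six-layer model, its kernel sizes, and its training procedure are not reproduced.

```python
def conv1d(signal, kernel):
    # valid-mode 1D convolution (cross-correlation, as in most CNN libraries)
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    # rectified linear unit: clamp negative activations to zero
    return [max(0.0, x) for x in xs]

def max_pool(xs, size):
    # non-overlapping max pooling with the given window size
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]
```

A feature map for one ECG window would then be computed as `max_pool(relu(conv1d(ecg, kernel)), 2)`, stacked six layers deep in the paper's model.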
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education is highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets, to compensate for robot’s limitations, however, this study applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. Robot’s behavioral style (social or neutral) hardly differed overall, however, seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
Scores: 1.24, 0.24, 0.24, 0.24, 0.24, 0.08, 0.02, 0, 0, 0, 0, 0, 0, 0
Demand Side Management: Demand Response, Intelligent Energy Systems, and Smart Loads Energy management means to optimize one of the most complex and important technical creations that we know: the energy system. While there is plenty of experience in optimizing energy generation and distribution, it is the demand side that receives increasing attention by research and industry. Demand Side Management (DSM) is a portfolio of measures to improve the energy system at the side of consumption. It ranges from improving energy efficiency by using better materials, over smart energy tariffs with incentives for certain consumption patterns, up to sophisticated real-time control of distributed energy resources. This paper gives an overview and a taxonomy for DSM, analyzes the various types of DSM, and gives an outlook on the latest demonstration projects in this domain.
Parallel Multi-Block ADMM with o(1/k) Convergence This paper introduces a parallel and distributed algorithm for solving the following minimization problem with linear constraints: minimize f_1(x_1) + ... + f_N(x_N) subject to A_1 x_1 + ... + A_N x_N = c, with x_1 ∈ X_1, ..., x_N ∈ X_N, where N ≥ 2, the f_i are convex functions, the A_i are matrices, and the X_i are feasible sets for the variables x_i. Our algorithm extends the alternating direction method of multipliers (ADMM): it decomposes the original problem into N smaller subproblems and solves them in parallel at each iteration. This paper shows that the classic ADMM can be extended to the N-block Jacobi fashion and preserve convergence in the following two cases: (i) the matrices A_i are mutually near-orthogonal and have full column-rank, or (ii) proximal terms are added to the N subproblems (but without any assumption on the matrices A_i). In the latter case, certain proximal terms can let the subproblems be solved in more flexible and efficient ways. We show that ||x^{k+1} - x^k||_M^2 converges at a rate of o(1/k), where M is a symmetric positive semi-definite matrix. Since the parameters used in the convergence analysis are conservative, we introduce a strategy for automatically tuning the parameters to substantially accelerate our algorithm in practice. We implemented our algorithm (for case (ii) above) on Amazon EC2 and tested it on basis pursuit problems with 300 GB of distributed data. This is the first time that successfully solving a compressive sensing problem of such a large scale is reported.
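A scalar instance of case (ii) above (Jacobi-style parallel block updates with added proximal terms) can be sketched as follows. The problem, the penalty rho, and the proximal weight tau are illustrative choices of ours, not the tuned parameters from the paper.

```python
def jacobi_admm(rho=1.0, tau=1.0, iters=200):
    # minimize (x1 - 1)^2 + (x2 - 3)^2  subject to  x1 + x2 = 2
    # (optimum: x1 = 0, x2 = 2). Both blocks are updated in parallel from the
    # previous iterate (Jacobi fashion), each with a proximal term
    # (tau/2)*(xi - xi_old)^2; u is the scaled dual variable.
    x1, x2, u = 0.0, 0.0, 0.0
    for _ in range(iters):
        # closed-form minimizers of the augmented Lagrangian plus proximal term
        new_x1 = (2 + rho * (2 - x2 - u) + tau * x1) / (2 + rho + tau)
        new_x2 = (6 + rho * (2 - x1 - u) + tau * x2) / (2 + rho + tau)
        x1, x2 = new_x1, new_x2
        u += x1 + x2 - 2  # dual ascent on the coupling constraint
    return x1, x2
```

Because the objective is quadratic, each block update has a closed form; in the general N-block setting each update is an independent subproblem, which is what makes the iteration parallelizable.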
Design of Modern Supply Chain Networks Using Fuzzy Bargaining Game and Data Envelopment Analysis This article proposes a novel methodology for multistage, multiproduct, multi-item, and closed-loop Supply Chain Network (SCN) design under uncertainty. The method considers that multiple products are manufactured by the SCN, each composed by multiple items, and that some of the sold products may require repair, refurbishing, or remanufacturing activities. We solve the two main decisions that take place in the medium-/short-term planning horizon, namely partners’ selection and allocation of the received orders among them. The partners’ selection problem is solved by a cross-efficiency fuzzy Data Envelopment Analysis technique, which allows evaluating the efficiency of each SCN member and ranking them against multiple conflicting objectives under uncertain data on their performance. Then, according to the estimated customers’ demand, the order allocation problem is solved by a fuzzy bargaining game problem, where each SCN actor behaves to simultaneously maximize both its own profit and the service level of the overall SCN in terms of efficiency, costs, and lead time. An illustrative example from the literature is finally presented. Note to Practitioners: We present a decision tool to address the optimal design, performance evaluation, and continuous improvement of modern cooperative SCNs. We propose an effective method to jointly solve the members’ selection and the orders’ allocation, considering the complex structure of modern SCNs, the multiobjective nature of the problems, and the uncertainty characterizing economic markets. Competition within SCNs stages and cooperation along the chain are considered, with the aim to improve both financial and environmental sustainability, while ensuring the highest service levels to customers.
Distributed Control Of Electric Vehicle Fleets Considering Grid Congestion And Battery Degradation Nowadays, developing coordinated optimal charging strategies for large-scale electric vehicle (EV) fleets is crucial to ensure the reliability and efficiency of power grids. This paper presents a novel fully distributed control strategy for the optimal charging of large-scale EV fleets aiming at the minimization of the aggregated charging cost and battery degradation, while satisfying the EVs' individual load requirements and the overall grid congestion limits. We formulate the optimization problem as a convex quadratic programming problem where all the EVs' decisions are coupled both via the objective function and some grid resource sharing constraints. Based on the distributed waterfilling approach, the proposed resolution algorithm requires a minimal shared information between EVs that communicate only with their neighbors without relying on a central aggregator, thus guaranteeing the EV users' privacy. The performance of the proposed approach is evaluated through numerical experiments to validate its effectiveness in achieving a global optimum while respecting the grid constraints with a favorable computational efficiency.
Automated Control of Transactive HVACs in Energy Distribution Systems Heating, Ventilation, and Air Conditioning (HVAC) systems contribute significantly to a building’s energy consumption. In the recent years, there is an increased interest in developing transactive approaches which could enable automated and flexible scheduling of HVAC systems based on the customer demand and the electricity prices decided by the suppliers. Flexible and automated scheduling of the ...
Image quality assessment: from error visibility to structural similarity. Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
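The Structural Similarity Index combines luminance, contrast, and structure comparisons into one score. A single-window version over flat pixel lists looks like this; the full method applies it over local sliding windows and averages the per-window values.

```python
def ssim(x, y, L=255, k1=0.01, k2=0.03):
    # single-window SSIM over two equal-length lists of pixel values;
    # L is the dynamic range, k1/k2 the standard stabilizing constants
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

Identical signals score exactly 1; any mean, variance, or structure mismatch pulls the score below 1, which is what lets SSIM track perceived degradation rather than raw pixel error.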
Vision meets robotics: The KITTI dataset We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS/IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.
A tutorial on support vector regression In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective.
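Central to SV regression is the epsilon-insensitive loss: errors inside an eps-tube around the target cost nothing. A tiny subgradient-descent linear SVR in one variable illustrates it; the hyperparameters are arbitrary, and real trainers solve the convex QP (or its dual) that the tutorial describes rather than this crude descent.

```python
def eps_loss(pred, target, eps):
    # epsilon-insensitive loss: zero inside the tube, linear outside
    return max(0.0, abs(pred - target) - eps)

def train_svr(xs, ys, eps=0.1, C=10.0, lr=0.002, epochs=2000):
    # linear model f(x) = w*x + b, minimizing
    # 0.5*w^2 + C * sum_i eps_loss(f(x_i), y_i, eps) by subgradient descent
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw, gb = w, 0.0  # gradient of the 0.5*w^2 regularizer
        for x, y in zip(xs, ys):
            err = (w * x + b) - y
            if abs(err) > eps:  # only points outside the tube contribute
                s = 1.0 if err > 0 else -1.0
                gw += C * s * x
                gb += C * s
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b
```

Points strictly inside the tube exert no pull on the model, which is why the fitted function depends only on the support vectors at or outside the tube boundary.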
Online Palmprint Identification Biometrics-based personal identification is regarded as an effective method for automatically recognizing, with a high confidence, a person's identity. This paper presents a new biometric approach to online personal identification using palmprint technology. In contrast to the existing methods, our online palmprint identification system employs low-resolution palmprint images to achieve effective personal identification. The system consists of two parts: a novel device for online palmprint image acquisition and an efficient algorithm for fast palmprint recognition. A robust image coordinate system is defined to facilitate image alignment for feature extraction. In addition, a 2D Gabor phase encoding scheme is proposed for palmprint feature extraction and representation. The experimental results demonstrate the feasibility of the proposed system.
Theory of Mind for a Humanoid Robot If we are to build human-like robots that can interact naturally with people, our robots must know not only about the properties of objects but also the properties of animate agents in the world. One of the fundamental social skills for humans is the attribution of beliefs, goals, and desires to other people. This set of skills has often been called a “theory of mind.” This paper presents the theories of Leslie (1994) and Baron-Cohen (1995) on the development of theory of mind in human children and discusses the potential application of both of these theories to building robots with similar capabilities. Initial implementation details and basic skills (such as finding faces and eyes and distinguishing animate from inanimate stimuli) are introduced. I further speculate on the usefulness of a robotic implementation in evaluating and comparing these two models.
Gravity-Balancing Leg Orthosis and Its Performance Evaluation In this paper, we propose a device to assist persons with hemiparesis to walk by reducing or eliminating the effects of gravity. The design of the device includes the following features: 1) it is passive, i.e., it does not include motors or actuators, but is only composed of links and springs; 2) it is safe and has a simple patient-machine interface to accommodate variability in geometry and inertia of the subjects. A number of methods have been proposed in the literature to gravity-balance a machine. Here, we use a hybrid method to achieve gravity balancing of a human leg over its range of motion. In the hybrid method, a mechanism is used to first locate the center of mass of the human limb and the orthosis. Springs are then added so that the system is gravity-balanced in every configuration. For a quantitative evaluation of the performance of the device, electromyographic (EMG) data of the key muscles, involved in the motion of the leg, were collected and analyzed. Further experiments involving leg-raising and walking tasks were performed, where data from encoders and force-torque sensors were used to compute joint torques. These experiments were performed on five healthy subjects and a stroke patient. The results showed that the EMG activity from the rectus femoris and hamstring muscles with the device was reduced by 75%, during static hip and knee flexion, respectively. For leg-raising tasks, the average torque for static positioning was reduced by 66.8% at the hip joint and 47.3% at the knee joint; however, if we include the transient portion of the leg-raising task, the average torque was reduced by 61.3% at the hip joint and increased by 2.7% at the knee joint. In the walking experiment, there was a positive impact on the range of movement at the hip and knee joints, especially for the stroke patient: the range of movement increased by 45% at the hip joint and by 85% at the knee joint.
We believe that this orthosis can potentially be used to design rehabilitation protocols for patients with stroke.
Biologically-inspired soft exosuit. In this paper, we present the design and evaluation of a novel soft cable-driven exosuit that can apply forces to the body to assist walking. Unlike traditional exoskeletons which contain rigid framing elements, the soft exosuit is worn like clothing, yet can generate moments at the ankle and hip with magnitudes of 18% and 30% of those naturally generated by the body during walking, respectively. Our design uses geared motors to pull on Bowden cables connected to the suit near the ankle. The suit has the advantages over a traditional exoskeleton in that the wearer's joints are unconstrained by external rigid structures, and the worn part of the suit is extremely light, which minimizes the suit's unintentional interference with the body's natural biomechanics. However, a soft suit presents challenges related to actuation force transfer and control, since the body is compliant and cannot support large pressures comfortably. We discuss the design of the suit and actuation system, including principles by which soft suits can transfer force to the body effectively and the biological inspiration for the design. For a soft exosuit, an important design parameter is the combined effective stiffness of the suit and its interface to the wearer. We characterize the exosuit's effective stiffness, and present preliminary results from it generating assistive torques to a subject during walking. We envision such an exosuit having broad applicability for assisting healthy individuals as well as those with muscle weakness.
An efficient scheduling scheme for mobile charger in on-demand wireless rechargeable sensor networks. Existing studies on wireless sensor networks (WSNs) have revealed that the limited battery capacity of sensor nodes (SNs) hinders their perpetual operation. Recent findings in the domain of wireless energy transfer (WET) have attracted a lot of attention of academia and industry to cater the lack of energy in the WSNs. The main idea of WET is to restore the energy of SNs using one or more wireless mobile chargers (MCs), which leads to a new paradigm of wireless rechargeable sensor networks (WRSNs). The determination of an optimal order of charging the SNs (i.e., charging schedule) in an on-demand WRSN is a well-known NP-hard problem. Moreover, care must be taken while designing the charging schedule of an MC as requesting SNs introduce both spatial and temporal constraints. In this paper, we first present a Linear Programming (LP) formulation for the problem of scheduling an MC and then propose an efficient solution based on gravitational search algorithm (GSA). Our method is presented with a novel agent representation scheme and an efficient fitness function. We perform extensive simulations on the proposed scheme to demonstrate its effectiveness over two state-of-the-art algorithms, namely first come first serve (FCFS) and nearest job next with preemption (NJNP). The simulation results reveal that the proposed scheme outperforms both the existing algorithms in terms of charging latency. The virtue of our scheme is also proved by the well-known statistical test, analysis of variance (ANOVA), followed by post hoc analysis.
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education is highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets, to compensate for robot’s limitations, however, this study applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. Robot’s behavioral style (social or neutral) hardly differed overall, however, seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
Scores: 1.1, 0.1, 0.1, 0.1, 0.033333, 0, 0, 0, 0, 0, 0, 0, 0, 0
GameFlow: a model for evaluating player enjoyment in games Although player enjoyment is central to computer games, there is currently no accepted model of player enjoyment in games. There are many heuristics in the literature, based on elements such as the game interface, mechanics, gameplay, and narrative. However, there is a need to integrate these heuristics into a validated model that can be used to design, evaluate, and understand enjoyment in games. We have drawn together the various heuristics into a concise model of enjoyment in games that is structured by flow. Flow, a widely accepted model of enjoyment, includes eight elements that, we found, encompass the various heuristics from the literature. Our new model, GameFlow, consists of eight elements -- concentration, challenge, skills, control, clear goals, feedback, immersion, and social interaction. Each element includes a set of criteria for achieving enjoyment in games. An initial investigation and validation of the GameFlow model was carried out by conducting expert reviews of two real-time strategy games, one high-rating and one low-rating, using the GameFlow criteria. The result was a deeper understanding of enjoyment in real-time strategy games and the identification of the strengths and weaknesses of the GameFlow model as an evaluation tool. The GameFlow criteria were able to successfully distinguish between the high-rated and low-rated games and identify why one succeeded and the other failed. We concluded that the GameFlow model can be used in its current form to review games; further work will provide tools for designing and evaluating enjoyment in games.
Motivations for Play in Online Games. An empirical model of player motivations in online games provides the foundation to understand and assess how players differ from one another and how motivations of play relate to age, gender, usage patterns, and in-game behaviors. In the current study, a factor analytic approach was used to create an empirical model of player motivations. The analysis revealed 10 motivation subcomponents that grouped into three overarching components (achievement, social, and immersion). Relationships between motivations and demographic variables (age, gender, and usage patterns) are also presented.
Acceptance of game-based learning by secondary school teachers The adoption and the effectiveness of game-based learning depend largely on the acceptance by classroom teachers, as they can be considered the true change agents of the schools. Therefore, we need to understand teachers' perceptions and beliefs that underlie their decision-making processes. The present study focuses on the factors that influence the acceptance of commercial video games as learning tools in the classroom. A model for describing the acceptance and predicting the uptake of commercial games by secondary school teachers is suggested. Based on data gathered from 505 teachers, the model is tested and evaluated. The results are then linked to previous research in the domains of technology acceptance and game-based learning. Highlights: • We examine 505 secondary school teachers' acceptance of game-based learning. • We propose, test and evaluate a model for understanding and predicting acceptance. • Teacher beliefs about the use of commercial games appear to be rather complex. • The proposed model explains 57% of the variance in teachers' behavioral intention. • Complexity and experience do not affect behavioral intention in the model.
Influence Of Gamification On Students' Motivation In Using E-Learning Applications Based On The Motivational Design Model Students' motivation is an important factor in ensuring the success of e-learning implementation. In order to ensure students are motivated to use e-learning, motivational design has been used during the development process of e-learning applications. The use of gamification in a learning context can help to increase student motivation. The ARCS+G model of motivational design is used as a guide for the gamification of learning. This study focuses on the influence of gamification on students' motivation in using e-learning applications based on the ARCS+G model. Data from the Instructional Materials Motivation Scale (IMMS) questionnaire were gathered and analyzed for comparison of two groups (one control and one experimental) in the attention, relevance, confidence, and satisfaction categories. Based on the result of the analysis, students from the experimental group were more motivated to use e-learning applications than those in the control group. This shows that gamification affects students' motivation when used in e-learning applications.
Design and Development of a Social, Educational and Affective Robot In this paper we describe the approach and the initial results obtained in the design and implementation of a social and educational robot called Wolly. We involved kids as co-designer helping us in shaping form and behavior of the robot, then we proceeded with the design and implementation of the hardware and software components, characterizing the robot with interactive, adaptive and affective features.
What Hinders Teachers in Using Computer and Video Games in the Classroom? Exploring Factors Inhibiting the Uptake of Computer and Video Games. The purpose of this study is to identify factors inhibiting teachers' use of computer and video games in the classroom setting and to examine the degree to which teaching experience and gender affect attitudes toward using games. Six factors that hinder teachers' use of games in the classroom were discovered: Inflexibility of curriculum, Negative effects of gaming, Students' lack of readiness, Lack of supporting materials, Fixed class schedules, and Limited budgets. Lack of supporting material, Fixed class schedules, and Limited budgets were factors that female teachers believed to be more serious obstacles to game use in the classroom than male teachers did. Experienced teachers, more so than inexperienced teachers, believed that adopting games in teaching was hindered by Inflexibility of curriculum and Negative effects of gaming. On the other hand, inexperienced teachers, more so than experienced teachers, believed that adopting games in teaching is less hindered by Lack of supporting materials and Fixed class schedules.
Response time in man-computer conversational transactions The literature concerning man-computer transactions abounds in controversy about the limits of "system response time" to a user's command or inquiry at a terminal. Two major semantic issues prohibit resolving this controversy. One issue centers around the question of "Response time to what?" The implication is that different human purposes and actions will have different acceptable or useful response times.
EDUCO - A Collaborative Learning Environment Based on Social Navigation Web-based learning is primarily a lonesome activity, even when it involves working in groups. This is due to the fact that the majority of web-based learning relies on asynchronous forms of interacting with other people. In most of the cases, the chat discussion is the only form of synchronous interaction that adds to the feeling that there are other people present in the environment. EDUCO is a system that tries to bring in the sense of other users in a collaborative learning environment by making the other users and their the navigation visible to everyone else in the environment in real-time. The paper describes EDUCO and presents the first empirical evaluation as EDUCO was used in a university course.
On Multi-Access Edge Computing: A Survey of the Emerging 5G Network Edge Cloud Architecture and Orchestration. Multi-access edge computing (MEC) is an emerging ecosystem, which aims at converging telecommunication and IT services, providing a cloud computing platform at the edge of the radio access network. MEC offers storage and computational resources at the edge, reducing latency for mobile end users and utilizing more efficiently the mobile backhaul and core networks. This paper introduces a survey on ...
Reinforcement learning of motor skills with policy gradients. Autonomous learning is one of the hallmarks of human and animal behavior, and understanding the principles of learning will be crucial in order to achieve true autonomy in advanced machines like humanoid robots. In this paper, we examine learning of complex motor skills with human-like limbs. While supervised learning can offer useful tools for bootstrapping behavior, e.g., by learning from demonstration, it is only reinforcement learning that offers a general approach to the final trial-and-error improvement that is needed by each individual acquiring a skill. Neither neurobiological nor machine learning studies have, so far, offered compelling results on how reinforcement learning can be scaled to the high-dimensional continuous state and action spaces of humans or humanoids. Here, we combine two recent research developments on learning motor control in order to achieve this scaling. First, we interpret the idea of modular motor control by means of motor primitives as a suitable way to generate parameterized control policies for reinforcement learning. Second, we combine motor primitives with the theory of stochastic policy gradient learning, which currently seems to be the only feasible framework for reinforcement learning for humanoids. We evaluate different policy gradient methods with a focus on their applicability to parameterized motor primitives. We compare these algorithms in the context of motor primitive learning, and show that our most modern algorithm, the Episodic Natural Actor-Critic outperforms previous algorithms by at least an order of magnitude. We demonstrate the efficiency of this reinforcement learning method in the application of learning to hit a baseball with an anthropomorphic robot arm.
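The policy-gradient idea underlying the paper can be shown in its simplest setting: a two-armed bandit with a softmax policy and the REINFORCE (likelihood-ratio) update. This is far from the motor-primitive and Natural Actor-Critic setting the paper studies; it only illustrates the basic gradient estimator those methods build on.

```python
import math
import random

def softmax(prefs):
    # numerically stable softmax over action preferences
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce_bandit(rewards, episodes=2000, lr=0.1, seed=0):
    # REINFORCE on a deterministic multi-armed bandit: sample an action from
    # the softmax policy, then move preferences along r * grad log pi(a),
    # where d/d pref_k log pi(a) = 1{k == a} - pi(k).
    rng = random.Random(seed)
    prefs = [0.0] * len(rewards)
    for _ in range(episodes):
        probs = softmax(prefs)
        a = rng.choices(range(len(rewards)), weights=probs)[0]
        r = rewards[a]
        for k in range(len(prefs)):
            prefs[k] += lr * r * ((1.0 if k == a else 0.0) - probs[k])
    return softmax(prefs)
```

In expectation the update follows the gradient of the expected reward, so the policy shifts probability mass toward the better-paying arm; subtracting a baseline (as in actor-critic methods) reduces the variance of this estimator.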
Development of a UAV-LiDAR System with Application to Forest Inventory We present the development of a low-cost Unmanned Aerial Vehicle-Light Detection and Ranging (UAV-LiDAR) system and an accompanying workflow to produce 3D point clouds. UAV systems provide an unrivalled combination of high temporal and spatial resolution datasets. The TerraLuma UAV-LiDAR system has been developed to take advantage of these properties and in doing so overcome some of the current limitations of the use of this technology within the forestry industry. A modified processing workflow including a novel trajectory determination algorithm fusing observations from a GPS receiver, an Inertial Measurement Unit (IMU) and a High Definition (HD) video camera is presented. The advantages of this workflow are demonstrated using a rigorous assessment of the spatial accuracy of the final point clouds. It is shown that due to the inclusion of video the horizontal accuracy of the final point cloud improves from 0.61 m to 0.34 m (RMS error assessed against ground control). The effect of the very high density point clouds (up to 62 points per m²) produced by the UAV-LiDAR system on the measurement of tree location, height and crown width is also assessed by performing repeat surveys over individual isolated trees. The standard deviation of tree height is shown to reduce from 0.26 m, when using data with a density of 8 points per m², to 0.15 m when the higher density data was used. Improvements in the uncertainty of the measurement of tree location, 0.80 m to 0.53 m, and crown width, 0.69 m to 0.61 m, are also shown.
Online Prediction of Driver Distraction Based on Brain Activity Patterns This paper presents a new computational framework for early detection of driver distractions (map viewing) using brain activity measured by electroencephalographic (EEG) signals. Compared with most studies in the literature, which are mainly focused on the classification of distracted and nondistracted periods, this study proposes a new framework to prospectively predict the start and end of a distraction period, defined by map viewing. The proposed prediction algorithm was tested on a data set of continuous EEG signals recorded from 24 subjects. During the EEG recordings, the subjects were asked to drive from an initial position to a destination using a city map in a simulated driving environment. The overall accuracy values for the prediction of the start and the end of map viewing were 81% and 70%, respectively. The experimental results demonstrated that the proposed algorithm can predict the start and end of map viewing with relatively high accuracy and can be generalized to individual subjects. The outcome of this study has a high potential to improve the design of future intelligent navigation systems. Prediction of the start of map viewing can be used to provide route information based on a driver's needs and consequently avoid map-viewing activities. Prediction of the end of map viewing can be used to provide warnings for potential long map-viewing durations. Further development of the proposed framework and its applications in driver-distraction predictions are also discussed.
Inter-class sparsity based discriminative least square regression Least square regression is a very popular supervised classification method. However, two main issues greatly limit its performance. The first one is that it only focuses on fitting the input features to the corresponding output labels while ignoring the correlations among samples. The second one is that the used label matrix, i.e., zero–one label matrix is inappropriate for classification. To solve these problems and improve the performance, this paper presents a novel method, i.e., inter-class sparsity based discriminative least square regression (ICS_DLSR), for multi-class classification. Different from other methods, the proposed method pursues that the transformed samples have a common sparsity structure in each class. For this goal, an inter-class sparsity constraint is introduced to the least square regression model such that the margins of samples from the same class can be greatly reduced while those of samples from different classes can be enlarged. In addition, an error term with row-sparsity constraint is introduced to relax the strict zero–one label matrix, which allows the method to be more flexible in learning the discriminative transformation matrix. These factors encourage the method to learn a more compact and discriminative transformation for regression and thus has the potential to perform better than other methods. Extensive experimental results show that the proposed method achieves the best performance in comparison with other methods for multi-class classification.
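The plain least-squares-regression baseline that ICS_DLSR builds on can be sketched in one dimension with a closed-form normal-equation fit. The data here is made up for illustration, and the paper's inter-class sparsity constraint and row-sparse error term are not implemented; this shows only the label-fitting step the method starts from.

```python
# Tiny 1-D least-squares classifier: fit y ~ w*x + b by the normal equations,
# using +/-1 labels, then classify by the sign of the fitted value.
xs = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]
ys = [-1.0, -1.0, -1.0, 1.0, 1.0, 1.0]

n = len(xs)
sx = sum(xs)
sy = sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

# Closed-form solution of the 2x2 normal equations for (w, b)
det = n * sxx - sx * sx
w = (n * sxy - sx * sy) / det
b = (sxx * sy - sx * sxy) / det

predict = lambda x: 1 if w * x + b >= 0 else -1
```

The paper's point is that this rigid fit to a fixed label matrix ignores structure among samples; its relaxed label matrix and inter-class sparsity regularizer sit on top of this same regression objective.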
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores (score_0–score_13): 1.055548, 0.05, 0.05, 0.05, 0.05, 0.023042, 0.00125, 0.000058, 0, 0, 0, 0, 0, 0
Recent Trends in Deep Learning Based Natural Language Processing [Review Article]. Deep learning methods employ multiple processing layers to learn hierarchical representations of data, and have produced state-of-the-art results in many domains. Recently, a variety of model designs and methods have blossomed in the context of natural language processing (NLP). In this paper, we review significant deep learning related models and methods that have been employed for numerous NLP t...
Performance of Massive MIMO Uplink with Zero-Forcing receivers under Delayed Channels. In this paper, we analyze the performance of the uplink communication of massive multicell multiple-input multiple-output (MIMO) systems under the effects of pilot contamination and delayed channels because of terminal mobility. The base stations (BSs) estimate the channels through the uplink training and then use zero-forcing (ZF) processing to decode the transmit signals from the users. The prob...
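The core zero-forcing step the abstract refers to can be illustrated on a toy 2×2 real-valued channel, worked by hand. This is only a sketch of the ZF detector itself; the paper's setting of pilot contamination, delayed channels, and complex massive-MIMO dimensions is not modeled, and the channel matrix here is invented.

```python
# Zero-forcing detection for a toy 2x2 real channel: x_hat = inv(H) @ y.
H = [[2.0, 1.0],
     [1.0, 3.0]]
x = [1.0, -1.0]                        # transmitted symbols

# Received signal y = H @ x (noiseless here, so ZF recovers x exactly)
y = [H[0][0] * x[0] + H[0][1] * x[1],
     H[1][0] * x[0] + H[1][1] * x[1]]

# Explicit 2x2 inverse of H
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
Hinv = [[ H[1][1] / det, -H[0][1] / det],
        [-H[1][0] / det,  H[0][0] / det]]

x_hat = [Hinv[0][0] * y[0] + Hinv[0][1] * y[1],
         Hinv[1][0] * y[0] + Hinv[1][1] * y[1]]
```

With noise, the same inverse (or pseudo-inverse in the tall-matrix case) amplifies noise in poorly conditioned channels, which is exactly why the paper's analysis of delayed, contaminated channel estimates matters for ZF receivers.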
HybridAlpha: An Efficient Approach for Privacy-Preserving Federated Learning Federated learning has emerged as a promising approach for collaborative and privacy-preserving learning. Participants in a federated learning process cooperatively train a model by exchanging model parameters instead of the actual training data, which they might want to keep private. However, parameter interaction and the resulting model still might disclose information about the training data used. To address these privacy concerns, several approaches have been proposed based on differential privacy and secure multiparty computation (SMC), among others. They often result in large communication overhead and slow training time. In this paper, we propose HybridAlpha, an approach for privacy-preserving federated learning employing an SMC protocol based on functional encryption. This protocol is simple, efficient and resilient to participants dropping out. We evaluate our approach regarding the training time and data volume exchanged using a federated learning process to train a CNN on the MNIST data set. Evaluation against existing crypto-based SMC solutions shows that HybridAlpha can reduce the training time by 68% and data transfer volume by 92% on average while providing the same model performance and privacy guarantees as the existing solutions.
Federated Learning Over Noisy Channels: Convergence Analysis and Design Examples Does Federated Learning (FL) work when both uplink and downlink communications have errors? How much communication noise can FL handle and what is its impact on the learning performance? This work is devoted to answering these practically important questions by explicitly incorporating both uplink and downlink noisy channels in the FL pipeline. We present several novel convergence analyses of FL over simultaneous uplink and downlink noisy communication channels, which encompass full and partial clients participation, direct model and model differential transmissions, and non-independent and identically distributed (IID) local datasets. These analyses characterize the sufficient conditions for FL over noisy channels to have the same convergence behavior as the ideal case of no communication error. More specifically, in order to maintain the $\mathcal{O}(1/T)$ convergence rate of FedAvg with perfect communications, the uplink and downlink signal-to-noise ratio (SNR) for direct model transmissions should be controlled such that they scale as $\mathcal{O}(t^2)$, where $t$ is the index of communication rounds, but can stay $\mathcal{O}(1)$ (i.e., constant) for model differential transmissions. The key insight of these theoretical results is a "flying under the radar" principle: stochastic gradient descent (SGD) is an inherently noisy process, and uplink/downlink communication noises can be tolerated as long as they do not dominate the time-varying SGD noise. We exemplify these theoretical findings with two widely adopted communication techniques, transmit power control and receive diversity combining, and further validate their performance advantages over the standard methods via numerical experiments using several real-world FL tasks.
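The paper's SNR-scaling condition can be mimicked in a scalar FedAvg toy: four clients with quadratic local losses, and uplink noise whose power decays like 1/t² so it never dominates the SGD dynamics. All numbers here are invented for illustration; this is not the paper's analysis, only a picture of the "flying under the radar" principle.

```python
import random

random.seed(1)

clients = [1.0, 2.0, 3.0, 4.0]   # local optima of f_i(w) = (w - c_i)^2
w = 0.0                          # global model
lr = 0.1

for t in range(1, 201):
    updates = []
    for c in clients:
        w_local = w - lr * 2.0 * (w - c)           # one local gradient step
        noise = random.gauss(0.0, 1.0) / (t * t)   # uplink noise, power ~ 1/t^2
        updates.append(w_local + noise)
    w = sum(updates) / len(updates)                # FedAvg aggregation

# w should approach the average of the client optima, i.e. 2.5
```

Because the per-round noise standard deviation shrinks as 1/t² (SNR growing as t⁴ here, even faster than the paper's O(t²) requirement for direct model transmission), the iterates converge to the same fixed point as noiseless FedAvg.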
Resource-Constrained Federated Edge Learning With Heterogeneous Data: Formulation and Analysis Efficient collaboration between collaborative machine learning and wireless communication technology, forming a Federated Edge Learning (FEEL), has spawned a series of next-generation intelligent applications. However, due to the openness of network connections, the FEEL framework generally involves hundreds of remote devices (or clients), resulting in expensive communication costs, which is not friendly to resource-constrained FEEL. To address this issue, we propose a distributed approximate Newton-type algorithm with fast convergence speed to alleviate the problem of FEEL resource (in terms of communication resources) constraints. Specifically, the proposed algorithm is improved based on distributed L-BFGS algorithm and allows each client to approximate the high-cost Hessian matrix by computing the low-cost Fisher matrix in a distributed manner to find a “better” descent direction, thereby speeding up convergence. Second, we prove that the proposed algorithm has linear convergence in strongly convex and non-convex cases and analyze its computational and communication complexity. Similarly, due to the heterogeneity of the connected remote devices, FEEL faces the challenge of heterogeneous data and non-IID (Independent and Identically Distributed) data. To this end, we design a simple but elegant training scheme, namely FedOVA (Federated One-vs-All), to solve the heterogeneous statistical challenge brought by heterogeneous data. In this way, FedOVA first decomposes a multi-class classification problem into more straightforward binary classification problems and then combines their respective outputs using ensemble learning. In particular, the scheme can be well integrated with our communication efficient algorithm to serve FEEL. Numerical results verify the effectiveness and superiority of the proposed algorithm.
Secure Federated Learning in 5G Mobile Networks Machine Learning (ML) is an important enabler for optimizing, securing and managing mobile networks. This leads to increased collection and processing of data from network functions, which in turn may increase threats to sensitive end-user information. Consequently, mechanisms to reduce threats to end-user privacy are needed to take full advantage of ML. We seamlessly integrate Federated Learning (FL) into the 3GPP5G Network Data Analytics (NWDA) architecture, and add a Multi-Party Computation (MPC) protocol for protecting the confidentiality of local updates. We evaluate the protocol and find that it has much lower communication overhead than previous work, without affecting ML performance.
Federated Learning via Over-the-Air Computation The stringent requirements for low-latency and privacy of the emerging high-stake applications with intelligent devices such as drones and smart vehicles make the cloud computing inapplicable in these scenarios. Instead, edge machine learning becomes increasingly attractive for performing training and inference directly at network edges without sending data to a centralized data center. This stimulates a nascent field termed as federated learning for training a machine learning model on computation, storage, energy and bandwidth limited mobile devices in a distributed manner. To preserve data privacy and address the issues of unbalanced and non-IID data points across different devices, the federated averaging algorithm has been proposed for global model aggregation by computing the weighted average of locally updated model at each selected device. However, the limited communication bandwidth becomes the main bottleneck for aggregating the locally computed updates. We thus propose a novel over-the-air computation based approach for fast global model aggregation via exploring the superposition property of a wireless multiple-access channel. This is achieved by joint device selection and beamforming design, which is modeled as a sparse and low-rank optimization problem to support efficient algorithms design. To achieve this goal, we provide a difference-of-convex-functions (DC) representation for the sparse and low-rank function to enhance sparsity and accurately detect the fixed-rank constraint in the procedure of device selection.
A DC algorithm is further developed to solve the resulting DC program with global convergence guarantees. The algorithmic advantages and admirable performance of the proposed methodologies are demonstrated through extensive numerical results.
A survey on sensor networks The advancement in wireless communications and electronics has enabled the development of low-cost sensor networks. The sensor networks can be used for various application areas (e.g., health, military, home). For different application areas, there are different technical issues that researchers are currently resolving. The current state of the art of sensor networks is captured in this article, where solutions are discussed under their related protocol stack layer sections. This article also points out the open research issues and intends to spark new interests and developments in this field.
Toward Integrating Vehicular Clouds with IoT for Smart City Services Vehicular ad hoc networks, cloud computing, and the Internet of Things are among the emerging technology enablers offering a wide array of new application possibilities in smart urban spaces. These applications consist of smart building automation systems, healthcare monitoring systems, and intelligent and connected transportation, among others. The integration of IoT-based vehicular technologies will enrich services that are eventually going to ignite the proliferation of exciting and even more advanced technological marvels. However, depending on different requirements and design models for networking and architecture, such integration needs the development of newer communication architectures and frameworks. This work proposes a novel framework for architectural and communication design to effectively integrate vehicular networking clouds with IoT, referred to as VCoT, to materialize new applications that provision various IoT services through vehicular clouds. In this article, we particularly put emphasis on smart city applications deployed, operated, and controlled through LoRaWAN-based vehicular networks. LoraWAN, being a new technology, provides efficient and long-range communication possibilities. The article also discusses possible research issues in such an integration including data aggregation, security, privacy, data quality, and network coverage. These issues must be addressed in order to realize the VCoT paradigm deployment, and to provide insights for investors and key stakeholders in VCoT service provisioning. The article presents deep insights for different real-world application scenarios (i.e., smart homes, intelligent traffic light, and smart city) using VCoT for general control and automation along with their associated challenges. 
It also presents initial insights, through preliminary results, regarding data and resource management in IoT-based resource constrained environments through vehicular clouds.
Distributed multirobot localization In this paper, we present a new approach to the problem of simultaneously localizing a group of mobile robots capable of sensing one another. Each of the robots collects sensor data regarding its own motion and shares this information with the rest of the team during the update cycles. A single estimator, in the form of a Kalman filter, processes the available positioning information from all the members of the team and produces a pose estimate for every one of them. The equations for this centralized estimator can be written in a decentralized form, therefore allowing this single Kalman filter to be decomposed into a number of smaller communicating filters. Each of these filters processes the sensor data collected by its host robot. Exchange of information between the individual filters is necessary only when two robots detect each other and measure their relative pose. The resulting decentralized estimation schema, which we call collective localization, constitutes a unique means for fusing measurements collected from a variety of sensors with minimal communication and processing requirements. The distributed localization algorithm is applied to a group of three robots and the improvement in localization accuracy is presented. Finally, a comparison to the equivalent decentralized information filter is provided.
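The pose fusion at the heart of this approach can be illustrated with a 1-D Kalman filter for a single robot: predict from odometry, then correct with a position measurement. This is a scalar toy with invented noise values, not the decentralized multi-robot filter the paper derives.

```python
# 1-D Kalman filter: fuse noisy odometry predictions with position measurements.
x_est, p = 0.0, 1.0      # state estimate and its variance
q, r = 0.01, 0.25        # process and measurement noise variances (made up)
odometry = [1.0] * 10    # commanded step each cycle (true position ends at 10)
measured = [1.1, 2.0, 2.9, 4.2, 5.0, 5.9, 7.1, 8.0, 9.0, 10.1]

for u, z in zip(odometry, measured):
    # Predict: move by u, uncertainty grows by the process noise q
    x_est, p = x_est + u, p + q
    # Update: blend the prediction with measurement z via the Kalman gain
    k = p / (p + r)
    x_est = x_est + k * (z - x_est)
    p = (1.0 - k) * p
```

In collective localization, the "measurement" step is a relative-pose observation between two robots, which is why information only needs to be exchanged between the robots' filters at the moment they detect each other.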
Teaching-Learning-Based Optimization: An optimization method for continuous non-linear large scale problems An efficient optimization method called 'Teaching-Learning-Based Optimization (TLBO)' is proposed in this paper for large scale non-linear optimization problems for finding the global solutions. The proposed method is based on the effect of the influence of a teacher on the output of learners in a class. The basic philosophy of the method is explained in detail. The effectiveness of the method is tested on many benchmark problems with different characteristics and the results are compared with other population based methods.
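A minimal sketch of the two TLBO phases, minimizing the sphere function: the teacher phase pulls learners toward the best solution and away from the class mean, and the learner phase lets each learner move relative to a random peer. Population size, iteration count, and the greedy acceptance rule are chosen here for illustration; the paper evaluates many benchmark problems.

```python
import random

random.seed(2)

def f(x):
    # Objective: sphere function, global minimum 0 at the origin
    return sum(v * v for v in x)

dim, pop_size, iters = 3, 10, 60
pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]

for _ in range(iters):
    teacher = min(pop, key=f)
    mean = [sum(p[d] for p in pop) / pop_size for d in range(dim)]
    # Teacher phase: move each learner toward the teacher, away from the mean
    for i, p in enumerate(pop):
        tf = random.choice([1, 2])   # teaching factor
        cand = [p[d] + random.random() * (teacher[d] - tf * mean[d])
                for d in range(dim)]
        if f(cand) < f(p):           # greedy acceptance
            pop[i] = cand
    # Learner phase: each learner interacts with a random peer
    for i, p in enumerate(pop):
        j = random.randrange(pop_size)
        if j == i:
            continue
        q = pop[j]
        sign = 1 if f(p) < f(q) else -1
        cand = [p[d] + sign * random.random() * (p[d] - q[d])
                for d in range(dim)]
        if f(cand) < f(p):
            pop[i] = cand

best = min(pop, key=f)
```

Note that TLBO needs no algorithm-specific tuning parameters beyond population size and iteration count, which is the property the paper emphasizes.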
Mobile Data Gathering with Load Balanced Clustering and Dual Data Uploading in Wireless Sensor Networks In this paper, a three-layer framework is proposed for mobile data collection in wireless sensor networks, which includes the sensor layer, cluster head layer, and mobile collector (called SenCar) layer. The framework employs distributed load balanced clustering and dual data uploading, which is referred to as LBC-DDU. The objective is to achieve good scalability, long network lifetime and low data collection latency. At the sensor layer, a distributed load balanced clustering (LBC) algorithm is proposed for sensors to self-organize themselves into clusters. In contrast to existing clustering methods, our scheme generates multiple cluster heads in each cluster to balance the work load and facilitate dual data uploading. At the cluster head layer, the inter-cluster transmission range is carefully chosen to guarantee the connectivity among the clusters. Multiple cluster heads within a cluster cooperate with each other to perform energy-saving inter-cluster communications. Through inter-cluster transmissions, cluster head information is forwarded to SenCar for its moving trajectory planning. At the mobile collector layer, SenCar is equipped with two antennas, which enables two cluster heads to simultaneously upload data to SenCar in each time by utilizing multi-user multiple-input and multiple-output (MU-MIMO) technique. The trajectory planning for SenCar is optimized to fully utilize dual data uploading capability by properly selecting polling points in each cluster. By visiting each selected polling point, SenCar can efficiently gather data from cluster heads and transport the data to the static data sink. Extensive simulations are conducted to evaluate the effectiveness of the proposed LBC-DDU scheme. 
The results show that when each cluster has at most two cluster heads, LBC-DDU achieves over 50 percent energy saving per node and 60 percent energy saving on cluster heads compared with data collection through multi-hop relay to the static data sink, and 20 percent shorter data collection time compared with traditional mobile data gathering.
Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods. We introduce a new benchmark, WinoBias, for coreference resolution focused on gender bias. Our corpus contains Winograd-schema style sentences with entities corresponding to people referred by their occupation (e.g. the nurse, the doctor, the carpenter). We demonstrate that a rule-based, a feature-rich, and a neural coreference system all link gendered pronouns to pro-stereotypical entities with higher accuracy than anti-stereotypical entities, by an average difference of 21.1 in F1 score. Finally, we demonstrate a data-augmentation approach that, in combination with existing word-embedding debiasing techniques, removes the bias demonstrated by these systems in WinoBias without significantly affecting their performance on existing coreference benchmark datasets. Our dataset and code are available at this http URL
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education is highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets, to compensate for robot’s limitations, however, this study applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. Robot’s behavioral style (social or neutral) hardly differed overall, however, seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
Scores (score_0–score_13): 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0.00625, 0, 0, 0, 0, 0, 0, 0
Energy and Distance Optimization in Rechargeable Wireless Sensor Networks The aim of a mobile recharger operating in a wireless sensor network (WSN) is to keep the network's average consumed energy and covered distance low. As shown in the literature, the covered distance is minimized when the mobile recharger's base is located as per the solution of a median problem, while the network's average energy consumption is minimized as per the solution of a different median problem. In this work, the first problem is analytically investigated, showing that its solution depends on the traffic load and the topology characteristics. Furthermore, it is shown that, under certain conditions, the solution for both problems is identical. These analytical results motivate the introduction of a new on-demand recharging policy, simple to be implemented and depending on local information. The simulation results confirm the analytical findings, showing that the solutions of both median problems are identical under certain conditions in WSN environments. Additionally, the proposed recharging policy is evaluated against a well-known policy that exploits global knowledge, demonstrating its advantage for prolonging network lifetime. For both recharging policies, it is shown that energy consumption and covered distance are minimized when the mobile recharger is initially located at the solution of the said median problems.
Mobility in wireless sensor networks - Survey and proposal. Targeting an increasing number of potential application domains, wireless sensor networks (WSN) have been the subject of intense research, in an attempt to optimize their performance while guaranteeing reliability in highly demanding scenarios. However, hardware constraints have limited their application, and real deployments have demonstrated that WSNs have difficulties in coping with complex communication tasks – such as mobility – in addition to application-related tasks. Mobility support in WSNs is crucial for a very high percentage of application scenarios and, most notably, for the Internet of Things. It is, thus, important to know the existing solutions for mobility in WSNs, identifying their main characteristics and limitations. With this in mind, we firstly present a survey of models for mobility support in WSNs. We then present the Network of Proxies (NoP) assisted mobility proposal, which relieves resource-constrained WSN nodes from the heavy procedures inherent to mobility management. The presented proposal was implemented and evaluated in a real platform, demonstrating not only its advantages over conventional solutions, but also its very good performance in the simultaneous handling of several mobile nodes, leading to high handoff success rate and low handoff time.
Tag-based cooperative data gathering and energy recharging in wide area RFID sensor networks The Wireless Identification and Sensing Platform (WISP) conjugates the identification potential of the RFID technology and the sensing and computing capability of the wireless sensors. Practical issues, such as the need of periodically recharging WISPs, challenge the effective deployment of large-scale RFID sensor networks (RSNs) consisting of RFID readers and WISP nodes. In this view, the paper proposes cooperative solutions to energize the WISP devices in a wide-area sensing network while reducing the data collection delay. The main novelty is the fact that both data transmissions and energy transfer are based on the RFID technology only: RFID mobile readers gather data from the WISP devices, wirelessly recharge them, and mutually cooperate to reduce the data delivery delay to the sink. Communication between mobile readers relies on two proposed solutions: a tag-based relay scheme, where RFID tags are exploited to temporarily store sensed data at pre-determined contact points between the readers; and a tag-based data channel scheme, where the WISPs are used as a virtual communication channel for real time data transfer between the readers. Both solutions require: (i) clustering the WISP nodes; (ii) dimensioning the number of required RFID mobile readers; (iii) planning the tour of the readers under the energy and time constraints of the nodes. A simulative analysis demonstrates the effectiveness of the proposed solutions when compared to non-cooperative approaches. Differently from classic schemes in the literature, the solutions proposed in this paper better cope with scalability issues, which is of utmost importance for wide area networks.
Improving charging capacity for wireless sensor networks by deploying one mobile vehicle with multiple removable chargers. Wireless energy transfer is a promising technology to prolong the lifetime of wireless sensor networks (WSNs), by employing charging vehicles to replenish energy to lifetime-critical sensors. Existing studies on sensor charging assumed that one or multiple charging vehicles being deployed. Such an assumption may have its limitation for a real sensor network. On one hand, it usually is insufficient to employ just one vehicle to charge many sensors in a large-scale sensor network due to the limited charging capacity of the vehicle or energy expirations of some sensors prior to the arrival of the charging vehicle. On the other hand, although the employment of multiple vehicles can significantly improve the charging capability, it is too costly in terms of the initial investment and maintenance costs on these vehicles. In this paper, we propose a novel charging model that a charging vehicle can carry multiple low-cost removable chargers and each charger is powered by a portable high-volume battery. When there are energy-critical sensors to be charged, the vehicle can carry the chargers to charge multiple sensors simultaneously, by placing one portable charger in the vicinity of one sensor. Under this novel charging model, we study the scheduling problem of the charging vehicle so that both the dead duration of sensors and the total travel distance of the mobile vehicle per tour are minimized. Since this problem is NP-hard, we instead propose a (3+ϵ)-approximation algorithm if the residual lifetime of each sensor can be ignored; otherwise, we devise a novel heuristic algorithm, where ϵ is a given constant with 0 < ϵ ≤ 1. Finally, we evaluate the performance of the proposed algorithms through experimental simulations. Experimental results show that the performance of the proposed algorithms are very promising.
Speed control of mobile chargers serving wireless rechargeable networks. Wireless rechargeable networks have attracted increasing research attention in recent years. For charging service, a mobile charger is often employed to move across the network and charge all network nodes. To reduce the charging completion time, most existing works have used the “move-then-charge” model where the charger first moves to specific spots and then starts charging nodes nearby. As a result, these works often aim to reduce the moving delay or charging delay at the spots. However, the charging opportunity on the move is largely overlooked because the charger can charge network nodes while moving, which as we analyze in this paper, has the potential to greatly reduce the charging completion time. The major challenge to exploit the charging opportunity is the setting of the moving speed of the charger. When the charger moves slow, the charging delay will be reduced (more energy will be charged during the movement) but the moving delay will increase. To deal with this challenge, we formulate the problem of delay minimization as a Traveling Salesman Problem with Speed Variations (TSP-SV) which jointly considers both charging and moving delay. We further solve the problem using linear programming to generate (1) the moving path of the charger, (2) the moving speed variations on the path and (3) the stay time at each charging spot. We also discuss possible ways to reduce the calculation complexity. Extensive simulation experiments are conducted to study the delay performance under various scenarios. The results demonstrate that our proposed method achieves much less completion time compared to the state-of-the-art work.
A Prediction-Based Charging Policy and Interference Mitigation Approach in the Wireless Powered Internet of Things The Internet of Things (IoT) technology has recently drawn more attention due to its ability to achieve the interconnections of massive physical devices. However, how to provide a reliable power supply to energy-constrained devices and improve the energy efficiency in the wireless powered IoT (WP-IoT) is a twofold challenge. In this paper, we develop a novel wireless power transmission (WPT) system, where an unmanned aerial vehicle (UAV) equipped with a radio frequency energy transmitter charges the IoT devices. A machine learning framework of echo state networks together with an improved k-means clustering algorithm is used to predict the energy consumption and cluster all the sensor nodes at the next period, thus automatically determining the charging strategy. The energy obtained from the UAV by WPT supports the IoT devices to communicate with each other. In order to improve the energy efficiency of the WP-IoT system, the interference mitigation problem is modeled as a mean field game, where an optimal power control policy is presented to adapt to and analyze the large number of sensor nodes randomly deployed in WP-IoT. The numerical results verify that our proposed dynamic charging policy effectively reduces the data packet loss rate, and that the optimal power control policy greatly mitigates the interference and improves the energy efficiency of the whole network.
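The clustering step described above can be sketched with plain k-means (Lloyd's iterations) over sensor coordinates. This is only an illustrative baseline, not the paper's improved k-means variant or its echo-state-network predictor:

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means over 2-D sensor positions (illustrative)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: nearest center for every point
        labels = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        # update step: move each center to the mean of its members
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels
```

Each cluster would then be served by one charging pass of the UAV, with the predicted per-cluster energy demand deciding the visiting order.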
Coverage and Connectivity Aware Energy Charging Mechanism Using Mobile Charger for WRSNs Wireless recharging using a mobile charger has been widely discussed in recent years. Most of them considered that all sensors were equally important and aimed to maximize the number of recharged sensors. The purpose of energy recharging is to extend the lifetime of sensors whose major work is to maximize the surveillance quality. In a randomly deployed wireless rechargeable sensor network, the surveillance quality highly depends on the contributions of coverage and network connectivity of each sensor. Instead of considering maximizing the number of recharged sensors, this article further takes into consideration the contributions of coverage and network connectivity of each sensor when making the decision of recharging schedule, aiming to maximize the surveillance quality and improve the number of data collected from sensors to the sink node. This article proposes an energy recharging mechanism, called an energy recharging mechanism for maximizing the surveillance quality of a given WRSNs (ERSQ), which partitions the monitoring region into several equal-sized grids and considers the important factors, including coverage contribution, network connectivity contribution, the remaining energy as well as the path length cost of each grid, aiming to maximize surveillance quality for a given wireless sensor network. Performance studies reveal that the proposed ERSQ outperforms existing recharging mechanisms in terms of the coverage, the number of working sensors as well as the effectiveness index of working sensors.
Minimizing the Maximum Charging Delay of Multiple Mobile Chargers Under the Multi-Node Energy Charging Scheme Wireless energy charging has emerged as a very promising technology for prolonging sensor lifetime in wireless rechargeable sensor networks (WRSNs). Existing studies focused mainly on the one-to-one charging scheme, in which a single sensor can be charged by a mobile charger at each time; however, this charging scheme suffers from poor charging scalability and inefficiency. Recently, another charging scheme, the multi-node charging scheme that allows multiple sensors to be charged simultaneously by a mobile charger, has become dominant, as it can mitigate the charging scalability problem and improve charging efficiency. However, most previous studies on this multi-node energy charging scheme focused on the use of a single mobile charger to charge multiple sensors simultaneously. For large-scale WRSNs, it is insufficient to deploy only a single mobile charger to charge many lifetime-critical sensors, and consequently sensor expiration durations will increase dramatically. To charge many lifetime-critical sensors in large-scale WRSNs as early as possible, it is inevitable to adopt multiple mobile chargers for sensor charging, which can not only speed up sensor charging but also reduce expiration times of sensors. This, however, poses great challenges in fairly scheduling the multiple mobile chargers such that the longest charging delay among sensors is minimized. One important constraint is that no sensor can be charged by more than one mobile charger at any time, because the sensor cannot receive any energy from either of the chargers, or the overcharging will damage the recharging battery of the sensor. Thus, finding a closed charge tour for each of the multiple chargers such that the longest charging delay is minimized is crucial. In this paper we address the challenge by formulating a novel longest charging delay minimization problem.
We first show that the problem is NP-hard. We then devise the very first approximation algorithm with a provable approximation ratio for the problem. We finally evaluate the performance of the proposed algorithms through experimental simulations. Experimental results demonstrate that the proposed algorithm is promising, and outperforms existing algorithms in various settings.
NETWRAP: An NDN Based Real-Time Wireless Recharging Framework for Wireless Sensor Networks Using vehicles equipped with wireless energy transmission technology to recharge sensor nodes over the air is a game-changer for traditional wireless sensor networks. The recharging policy regarding when to recharge which sensor nodes critically impacts the network performance. So far only a few works have studied such recharging policy for the case of using a single vehicle. In this paper, we propose NETWRAP, an NDN-based Real-Time Wireless Recharging Protocol for dynamic wireless recharging in sensor networks. The real-time recharging framework supports single or multiple mobile vehicles. Employing multiple mobile vehicles provides more scalability and robustness. To efficiently deliver sensor energy status information to vehicles in real-time, we leverage concepts and mechanisms from named data networking (NDN) and design energy monitoring and reporting protocols. We derive theoretical results on the energy neutral condition and the minimum number of mobile vehicles required for perpetual network operations. Then we study how to minimize the total traveling cost of vehicles while guaranteeing all the sensor nodes can be recharged before their batteries deplete. We formulate the recharge optimization problem into a Multiple Traveling Salesman Problem with Deadlines (m-TSP with Deadlines), which is NP-hard. To accommodate the dynamic nature of node energy conditions with low overhead, we present an algorithm that selects the node with the minimum weighted sum of traveling time and residual lifetime. Our scheme not only improves network scalability but also ensures the perpetual operation of networks. Extensive simulation results demonstrate the effectiveness and efficiency of the proposed design.
The results also validate the correctness of the theoretical analysis and show significant improvements that cut the number of nonfunctional nodes by half compared to the static scheme while maintaining the network overhead at the same level.
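The node-selection rule above — pick the node with the minimum weighted sum of traveling time and residual lifetime — can be sketched as follows. The field names and the weight `alpha` are illustrative assumptions, not identifiers from the paper:

```python
import math

def pick_next_node(vehicle_pos, nodes, speed=1.0, alpha=0.5):
    """Select the sensor minimizing alpha*travel_time + (1-alpha)*lifetime."""
    def weighted_sum(node):
        travel_time = math.dist(vehicle_pos, node["pos"]) / speed
        return alpha * travel_time + (1 - alpha) * node["lifetime"]
    return min(nodes, key=weighted_sum)
```

Favoring low residual lifetime keeps nearly-depleted nodes alive, while favoring low travel time keeps the vehicle's tour cost down; `alpha` trades the two off.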
Hierarchical mesh segmentation based on fitting primitives In this paper, we describe a hierarchical face clustering algorithm for triangle meshes based on fitting primitives belonging to an arbitrary set. The method proposed is completely automatic, and generates a binary tree of clusters, each of which is fitted by one of the primitives employed. Initially, each triangle represents a single cluster; at every iteration, all the pairs of adjacent clusters are considered, and the one that can be better approximated by one of the primitives forms a new single cluster. The approximation error is evaluated using the same metric for all the primitives, so that it makes sense to choose which is the most suitable primitive to approximate the set of triangles in a cluster.Based on this approach, we have implemented a prototype that uses planes, spheres and cylinders, and have experimented that for meshes made of 100 K faces, the whole binary tree of clusters can be built in about 8 s on a standard PC.The framework described here has natural application in reverse engineering processes, but it has also been tested for surface denoising, feature recovery and character skinning.
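For the plane primitive, the fitting error used to rank candidate cluster merges has a closed form: the smallest eigenvalue of the scatter matrix of the cluster's vertices. A minimal sketch (the paper evaluates spheres and cylinders with the same unified metric, which is not shown here):

```python
import numpy as np

def plane_fit_error(points):
    """Total squared distance from points to their least-squares plane:
    the smallest eigenvalue of the scatter matrix of the centered points."""
    centered = points - points.mean(axis=0)
    return np.linalg.eigvalsh(centered.T @ centered)[0]  # ascending order
```

At each iteration, the pair of adjacent clusters whose union has the lowest such error (over all primitive types) is merged into a new node of the binary tree.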
Movie2Comics: Towards a Lively Video Content Presentation As a type of artwork, comics is prevalent and popular around the world. However, despite the availability of assistive software and tools, the creation of comics is still a labor-intensive and time-consuming process. This paper proposes a scheme that is able to automatically turn a movie clip into comics. Two principles are followed in the scheme: 1) optimizing the information preservation of the movie; and 2) generating outputs following the rules and the styles of comics. The scheme mainly contains three components: script-face mapping, descriptive picture extraction, and cartoonization. The script-face mapping utilizes face tracking and recognition techniques to accomplish the mapping between characters' faces and their scripts. The descriptive picture extraction then generates a sequence of frames for presentation. Finally, the cartoonization is accomplished via three steps: panel scaling, stylization, and comics layout design. Experiments are conducted on a set of movie clips and the results have demonstrated the usefulness and the effectiveness of the scheme.
Parallel Multi-Block ADMM with o(1/k) Convergence This paper introduces a parallel and distributed algorithm for solving the following minimization problem with linear constraints: minimize $f_1(\mathbf{x}_1) + \cdots + f_N(\mathbf{x}_N)$ subject to $A_1\mathbf{x}_1 + \cdots + A_N\mathbf{x}_N = c$, $\mathbf{x}_1 \in \mathcal{X}_1, \ldots, \mathbf{x}_N \in \mathcal{X}_N$, where $N \ge 2$, the $f_i$ are convex functions, the $A_i$ are matrices, and the $\mathcal{X}_i$ are feasible sets for variable $\mathbf{x}_i$. Our algorithm extends the alternating direction method of multipliers (ADMM): it decomposes the original problem into N smaller subproblems and solves them in parallel at each iteration. This paper shows that the classic ADMM can be extended to the N-block Jacobi fashion and preserve convergence in the following two cases: (i) the matrices $A_i$ are mutually near-orthogonal and have full column-rank, or (ii) proximal terms are added to the N subproblems (but without any assumption on the matrices $A_i$). In the latter case, certain proximal terms can let the subproblems be solved in more flexible and efficient ways. We show that $\Vert \mathbf{x}^{k+1} - \mathbf{x}^k \Vert_M^2$ converges at a rate of o(1/k), where M is a symmetric positive semi-definite matrix. Since the parameters used in the convergence analysis are conservative, we introduce a strategy for automatically tuning the parameters to substantially accelerate our algorithm in practice. We implemented our algorithm (for case ii above) on Amazon EC2 and tested it on basis pursuit problems with 300 GB of distributed data. This is the first time that successfully solving a compressive sensing problem of such a large scale is reported.
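Case (ii), Jacobi-style parallel updates with proximal terms, can be illustrated on a toy scalar sharing problem: minimize $\sum_i (x_i - b_i)^2/2$ subject to $\sum_i x_i = c$ with every $A_i = 1$. The values of $\rho$ and $\tau$ below are illustrative choices (with $\tau$ large enough relative to $\rho$, as the proximal convergence analysis suggests), and the closed-form block update is specific to this quadratic toy:

```python
def jacobi_admm(b, c, rho=1.0, tau=2.0, iters=300):
    """Proximal Jacobi ADMM for: min sum (x_i - b_i)^2 / 2  s.t. sum x_i = c.
    All N blocks are updated in parallel from the previous iterate."""
    n = len(b)
    x, u = [0.0] * n, 0.0           # primal blocks and scaled dual variable
    for _ in range(iters):
        total = sum(x)
        # each block solves its proximal subproblem in closed form
        x = [(b[i] + rho * (c - (total - x[i]) - u) + tau * x[i])
             / (1.0 + rho + tau) for i in range(n)]
        u += sum(x) - c             # dual ascent on the constraint residual
    return x
```

For b = [1, 3] and c = 2 the KKT conditions give the optimum x = [0, 2], which the parallel iterations approach while driving the residual sum(x) − c to zero.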
Deep Continuous Fusion For Multi-Sensor 3d Object Detection In this paper, we propose a novel 3D object detector that can exploit both LIDAR and cameras to perform very accurate localization. Towards this goal, we design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution. Our proposed continuous fusion layer encodes both discrete-state image features and continuous geometric information. This enables us to design a novel, reliable and efficient end-to-end learnable 3D object detector based on multiple sensors. Our experimental evaluation on both KITTI and a large-scale 3D object detection benchmark shows significant improvements over the state of the art.
Stochastic QoE-aware optimization of multisource multimedia content delivery for mobile cloud The increasing popularity of mobile video streaming in wireless networks has stimulated growing demands for efficient video streaming services. However, due to the time-varying throughput and user mobility, it is still difficult to provide high quality video services for mobile users. Our proposed optimization method considers key factors such as video quality, bitrate level, and quality variations to enhance quality of experience over wireless networks. The mobile network and device parameters are estimated in order to deliver the best quality video for the mobile user. We develop a rate adaptation algorithm using Lyapunov optimization for multi-source multimedia content delivery to minimize the video rate switches and provide higher video quality. The multi-source manager algorithm is developed to select the best stream based on the path quality for each path. The node joining and cluster head election mechanism update the node information. As the proposed approach selects the optimal path, it also achieves fairness and stability among clients. The quality of experience feature metrics like bitrate level, rebuffering events, and bitrate switch frequency are employed to assess video quality. We also employ objective video quality assessment methods like VQM, MS-SSIM, and SSIMplus for video quality measurement closer to human visual assessment. Numerical results show the effectiveness of the proposed method as compared to the existing state-of-the-art methods in providing quality of experience and bandwidth utilization.
Switching Stabilization for a Class of Slowly Switched Systems In this technical note, the problem of switching stabilization for slowly switched linear systems is investigated. In particular, the considered systems can be composed of all unstable subsystems. Based on the invariant subspace theory, the switching signal with mode-dependent average dwell time (MDADT) property is designed to exponentially stabilize the underlying system. Furthermore, sufficient condition of stabilization for switched systems with all stable subsystems under MDADT switching is also given. The correctness and effectiveness of the proposed approaches are illustrated by a numerical example.
Dissipativity-Based Filtering for Fuzzy Switched Systems With Stochastic Perturbation. In this technical note, the dissipativity-based filtering problem is considered for a class of T-S fuzzy switched systems with stochastic perturbation. Firstly, a sufficient condition of strict dissipativity performance is given to guarantee the mean-square exponential stability for the concerned T-S fuzzy switched system. Then, our attention is focused on the design of a filter for the T-S fuzzy switched system with Brownian motion. By combining the average dwell time technique with the piecewise Lyapunov function technique, the desired fuzzy filters are designed to guarantee that the filter error dynamic system is mean-square exponentially stable with a strictly dissipative performance, and the corresponding solvability condition for the fuzzy filter is also presented based on the linearization procedure approach. Finally, an example is provided to illustrate the effectiveness of the proposed dissipativity-based filter technique.
Input-to-state stability of switched systems and switching adaptive control In this paper we prove that a switched nonlinear system has several useful input-to-state stable (ISS)-type properties under average dwell-time switching signals if each constituent dynamical system is ISS. This extends available results for switched linear systems. We apply our result to stabilization of uncertain nonlinear systems via switching supervisory control, and show that the plant states can be kept bounded in the presence of bounded disturbances when the candidate controllers provide ISS properties with respect to the estimation errors. Detailed illustrative examples are included.
Dynamic output feedback for switched linear systems based on a LQG design The aim of this paper is to extend the LQG design for linear system to the case of switched linear systems in continuous time. The main result provides a control Lyapunov function and a dynamic output feedback law leading to sub-optimal solutions. Practically, the dynamic output feedback is easy to apply and the design procedure is effective if there exists at least one controllable and observable convex combination of the subsystems. Practical applications concern the large class of power converters.
Robust State-Dependent Switching of Linear Systems With Dwell Time. A state-dependent switching law that obeys a dwell time constraint and guarantees the stability of a switched linear system is designed. Sufficient conditions are obtained for the stability of the switched systems when the switching law is applied in presence of polytopic type parameter uncertainty. A Lyapunov function, in quadratic form, is assigned to each subsystem such that it is non-increasing at the switching instants. During the dwell time, this function varies piecewise linearly in time. After the dwell, the system switches if the switching results in a decrease in the value of the LF. The method proposed is also applicable to robust stabilization via state-feedback. It is further extended to guarantee a bound on the $L_2$-gain of the switching system; it is also used in deriving a state-feedback control law that robustly achieves a prescribed $L_2$-gain bound.
A Nonconservative LMI Condition for Stability of Switched Systems With Guaranteed Dwell Time. Ensuring stability of switched linear systems with a guaranteed dwell time is an important problem in control systems. Several methods have been proposed in the literature to address this problem, but unfortunately they provide sufficient conditions only. This technical note proposes the use of homogeneous polynomial Lyapunov functions in the non-restrictive case where all the subsystems are Hurwitz, showing that a sufficient condition can be provided in terms of an LMI feasibility test by exploiting a key representation of polynomials. Several properties are proved for this condition, in particular that it is also necessary for a sufficiently large degree of these functions. As a result, the proposed condition provides a sequence of upper bounds of the minimum dwell time that approximate it arbitrarily well. Some examples illustrate the proposed approach.
A Probabilistic Approach to Collaborative Multi-Robot Localization This paper presents a statistical algorithm for collaborative mobile robot localization. Our approach uses a sample-based version of Markov localization, capable of localizing mobile robots in an any-time fashion. When teams of robots localize themselves in the same environment, probabilistic methods are employed to synchronize each robot's belief whenever one robot detects another. As a result, the robots localize themselves faster, maintain higher accuracy, and high-cost sensors are amortized across multiple robot platforms. The technique has been implemented and tested using two mobile robots equipped with cameras and laser range-finders for detecting other robots. The results, obtained with the real robots and in a series of simulation runs, illustrate drastic improvements in localization speed and accuracy when compared to conventional single-robot localization. A further experiment demonstrates that under certain conditions, successful localization is only possible if teams of heterogeneous robots collaborate during localization.
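The sample-based Markov localization at the core of this approach can be sketched, for a single robot on a 1-D corridor with a direct position measurement, as one predict–weight–resample cycle. The Gaussian sensor model and the noise levels are illustrative assumptions, not the paper's models:

```python
import math
import random

def mcl_step(particles, motion, measurement, sensor_sigma=0.5, motion_sigma=0.1):
    """One cycle of sample-based Markov (Monte Carlo) localization in 1-D."""
    # predict: propagate every particle through the noisy motion model
    moved = [p + motion + random.gauss(0.0, motion_sigma) for p in particles]
    # weight: Gaussian likelihood of the position measurement
    weights = [math.exp(-(p - measurement) ** 2 / (2 * sensor_sigma ** 2))
               for p in moved]
    # resample: draw a new particle set proportionally to the weights
    return random.choices(moved, weights=weights, k=len(particles))
```

In the collaborative setting, a detection of robot B by robot A injects A's belief into B's particle weights in the same update step, which is what speeds up joint convergence.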
Preasymptotic Stability and Homogeneous Approximations of Hybrid Dynamical Systems Hybrid dynamical systems are systems that combine features of continuous-time dynamical systems and discrete-time dynamical systems, and can be modeled by a combination of differential equations or inclusions, difference equations or inclusions, and constraints. Preasymptotic stability is a concept that results from separating the conditions that asymptotic stability places on the behavior of solutions from issues related to existence of solutions. In this paper, techniques for approximating hybrid dynamical systems that generalize classical linearization techniques are proposed. The approximation techniques involve linearization, tangent cones, homogeneous approximations of functions and set-valued mappings, and tangent homogeneous cones, where homogeneity is considered with respect to general dilations. The main results deduce preasymptotic stability of an equilibrium point for a hybrid dynamical system from preasymptotic stability of the equilibrium point for an approximate system. Further results relate the degree of homogeneity of a hybrid system to the Zeno phenomenon that can appear in the solutions of the system.
Containment Control in Mobile Networks In this paper, the problem of driving a collection of mobile robots to a given target destination is studied. In particular, we are interested in achieving this transfer in an orderly manner so as to ensure that the agents remain in the convex polytope spanned by the leader-agents, while the remaining agents only employ local interaction rules. To this aim we exploit the theory of partial difference equations and propose hybrid control schemes based on stop-go rules for the leader-agents. Non-Zenoness, liveness and convergence of the resulting system are also analyzed.
Parameter tuning for configuring and analyzing evolutionary algorithms In this paper we present a conceptual framework for parameter tuning, provide a survey of tuning methods, and discuss related methodological issues. The framework is based on a three-tier hierarchy of a problem, an evolutionary algorithm (EA), and a tuner. Furthermore, we distinguish problem instances, parameters, and EA performance measures as major factors, and discuss how tuning can be directed to algorithm performance and/or robustness. For the survey part we establish different taxonomies to categorize tuning methods and review existing work. Finally, we elaborate on how tuning can improve methodology by facilitating well-founded experimental comparisons and algorithm analysis.
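The three-tier hierarchy (tuner above EA above problem) can be sketched as a tuner that scores each candidate parameter vector by the mean EA performance over problem instances and repeats. This brute-force grid tuner is only an illustration of the interface between the tiers, not one of the surveyed tuning methods:

```python
def grid_tune(run_ea, param_grid, instances, repeats=3):
    """Score each parameter vector by mean EA performance (higher is better)
    over all problem instances and repeated runs; return the best."""
    best, best_score = None, float("-inf")
    for params in param_grid:
        score = sum(run_ea(inst, params) for inst in instances
                    for _ in range(repeats)) / (len(instances) * repeats)
        if score > best_score:
            best, best_score = params, score
    return best, best_score
```

Repeats matter because EA runs are stochastic; tuning for robustness rather than peak performance would replace the mean with, e.g., a worst-case or variance-penalized aggregate.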
Tour Planning for Mobile Data-Gathering Mechanisms in Wireless Sensor Networks In this paper, we propose a new data-gathering mechanism for large-scale wireless sensor networks by introducing mobility into the network. A mobile data collector, for convenience called an M-collector in this paper, could be a mobile robot or a vehicle equipped with a powerful transceiver and battery, working like a mobile base station and gathering data while moving through the field. An M-collector starts the data-gathering tour periodically from the static data sink, polls each sensor while traversing its transmission range, then directly collects data from the sensor in single-hop communications, and finally transports the data to the static sink. Since data packets are directly gathered without relays and collisions, the lifetime of sensors is expected to be prolonged. In this paper, we mainly focus on the problem of minimizing the length of each data-gathering tour and refer to this as the single-hop data-gathering problem (SHDGP). We first formalize the SHDGP into a mixed-integer program and then present a heuristic tour-planning algorithm for the case where a single M-collector is employed. For the applications with strict distance/time constraints, we consider utilizing multiple M-collectors and propose a data-gathering algorithm where multiple M-collectors traverse through several shorter subtours concurrently to satisfy the distance/time constraints. Our single-hop mobile data-gathering scheme can improve the scalability and balance the energy consumption among sensors. It can be used in both connected and disconnected networks. Simulation results demonstrate that the proposed data-gathering algorithm can greatly shorten the moving distance of the collectors compared with the covering line approximation algorithm and is close to the optimal algorithm for small networks. 
In addition, the proposed data-gathering scheme can significantly prolong the network lifetime compared with a network with a static data sink or a network in which the mobile collector can only move along straight lines.
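A simple baseline for a single M-collector's tour is the nearest-neighbor heuristic below, starting and ending at the static sink. The paper's mixed-integer SHDGP formulation and tour-planning algorithm are more sophisticated; this sketch only illustrates the tour structure:

```python
import math

def greedy_tour(sink, sensors):
    """Nearest-neighbor tour: from the sink, repeatedly visit the closest
    unvisited sensor, then return to the sink."""
    tour, pos, remaining = [sink], sink, list(sensors)
    while remaining:
        nxt = min(remaining, key=lambda s: math.dist(pos, s))
        remaining.remove(nxt)
        tour.append(nxt)
        pos = nxt
    tour.append(sink)
    return tour
```

Under distance/time constraints, the multi-collector variant would split such a tour into several shorter subtours, one per M-collector, as the paper proposes.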
Learning A Discriminative Null Space For Person Re-Identification Most existing person re-identification (re-id) methods focus on learning the optimal distance metrics across camera views. Typically a person's appearance is represented using features of thousands of dimensions, whilst only hundreds of training samples are available due to the difficulties in collecting matched training images. With the number of training samples much smaller than the feature dimension, the existing methods thus face the classic small sample size (SSS) problem and have to resort to dimensionality reduction techniques and/or matrix regularisation, which lead to loss of discriminative power. In this work, we propose to overcome the SSS problem in re-id distance metric learning by matching people in a discriminative null space of the training data. In this null space, images of the same person are collapsed into a single point thus minimising the within-class scatter to the extreme and maximising the relative between-class separation simultaneously. Importantly, it has a fixed dimension, a closed-form solution and is very efficient to compute. Extensive experiments carried out on five person re-identification benchmarks including VIPeR, PRID2011, CUHK01, CUHK03 and Market1501 show that such a simple approach beats the state-of-the-art alternatives, often by a big margin.
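The core construction above — matching in the null space of the within-class scatter, where images of the same person collapse to a single point — can be sketched in a few lines. This shows only the null-space idea in the small-sample-size setting, not the paper's full re-id pipeline:

```python
import numpy as np

def null_space_projection(X, y):
    """Basis of the null space of the within-class scatter S_w.
    Projecting samples onto it collapses each class to one point."""
    d = X.shape[1]
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c] - X[y == c].mean(axis=0)
        Sw += Xc.T @ Xc
    # right singular vectors with (near-)zero singular value span null(S_w)
    _, s, Vt = np.linalg.svd(Sw)
    return Vt[s < 1e-8]          # rows: null-space basis vectors
```

When the feature dimension exceeds the number of training samples (the SSS regime), this null space is guaranteed to be non-trivial, which is exactly what the method exploits.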
Robust Sparse Linear Discriminant Analysis Linear discriminant analysis (LDA) is a very popular supervised feature extraction method and has been extended to different variants. However, classical LDA has the following problems: 1) The obtained discriminant projection does not have good interpretability for features. 2) LDA is sensitive to noise. 3) LDA is sensitive to the selection of the number of projection directions. In this paper, a novel feature extraction method called robust sparse linear discriminant analysis (RSLDA) is proposed to solve the above problems. Specifically, RSLDA adaptively selects the most discriminative features for discriminant analysis by introducing the $l_{2,1}$ norm. An orthogonal matrix and a sparse matrix are also simultaneously introduced to guarantee that the extracted features can hold the main energy of the original data and enhance the robustness to noise, and thus RSLDA has the potential to perform better than other discriminant methods. Extensive experiments on six databases demonstrate that the proposed method achieves competitive performance compared with other state-of-the-art feature extraction methods. Moreover, the proposed method is robust to noisy data.
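The $l_{2,1}$ norm that drives the feature selection is simply the sum of the Euclidean norms of the rows of a matrix; penalizing it pushes entire rows (i.e., features) of the projection matrix to zero:

```python
import math

def l21_norm(M):
    """l_{2,1} norm: sum over rows of the Euclidean norm of each row."""
    return sum(math.sqrt(sum(v * v for v in row)) for row in M)
```

A row of zeros contributes nothing to the penalty, so minimizing it is a convex surrogate for counting how many features the projection actually uses.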
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Control for gravity compensation in tendon-driven upper limb exosuits Soft wearable robots, or exosuits, are a promising technology to assist the upper limb during daily life activities. So far, several exosuit concepts have been proposed, some of which were successfully tested in open-loop control. However, though simple and robust, open-loop control is cumbersome and unintuitive for use in daily life. Here, we closed the control loop on the human-robot interface of the Myoshirt. The Myoshirt is an upper limb exosuit that supports the shoulder joint during functional arm elevation. A direct force controller (DF) as well as an indirect force controller (IF) were implemented on the Myoshirt to assess their suitability for autonomously tracking human movement. In a preceding testbench analysis, a direct force controller with linear friction compensation (DFF) could be excluded, as linearly compensating friction aggravated the force tracking error in the ramp response (RMSE mean|sd: 32.75|10.95 N) in comparison to the DF controller ramp response (27.61|9.38 N). In the same analysis, the IF controller showed substantially better tracking performance (17.12|0.99 N). In the subsequent movement tracking analysis including five participants (one female), the position tracking error and smoothness (median(RMSE), median(SPARC)) were similar with the DF (3.9°, −4.3) and IF (3.4°, −4.1) controllers and in an unpowered condition (3.7°, −4.2). However, the force tracking error and smoothness were substantially better when the IF controller (3.4 N, −4.5) was active than with the DF controller (10.4 N, −6.6).
The magnitude response in the Bode analysis indicated that both controllers were obstructing the human movement at higher frequencies; however, with 0.78 Hz, the IF controller satisfied the bandwidth requirement for daily life assistance, while the DF controller (0.63 Hz) did not. It can be concluded that the IF controller is most suitable for assisting human movement in daily life with the Myoshirt.
Exoskeletons for human power augmentation The first load-bearing and energetically autonomous exoskeleton, called the Berkeley Lower Extremity Exoskeleton (BLEEX) walks at the average speed of two miles per hour while carrying 75 pounds of load. The project, funded in 2000 by the Defense Advanced Research Project Agency (DARPA) tackled four fundamental technologies: the exoskeleton architectural design, a control algorithm, a body LAN to host the control algorithm, and an on-board power unit to power the actuators, sensors and the computers. This article gives an overview of the BLEEX project.
Sensing pressure distribution on a lower-limb exoskeleton physical human-machine interface. A sensory apparatus to monitor pressure distribution on the physical human-robot interface of lower-limb exoskeletons is presented. We propose a distributed measure of the interaction pressure over the whole contact area between the user and the machine as an alternative measurement method of human-robot interaction. To obtain this measure, an array of newly-developed soft silicone pressure sensors is inserted between the limb and the mechanical interface that connects the robot to the user, in direct contact with the wearer's skin. Compared to state-of-the-art measures, the advantage of this approach is that it allows for a distributed measure of the interaction pressure, which could be useful for the assessment of safety and comfort of human-robot interaction. This paper presents the new sensor and its characterization, and the development of an interaction measurement apparatus, which is applied to a lower-limb rehabilitation robot. The system is calibrated, and an example of its use during a prototypical gait training task is presented.
A soft wearable robotic device for active knee motions using flat pneumatic artificial muscles We present the design of a soft wearable robotic device composed of elastomeric artificial muscle actuators and soft fabric sleeves, for active assistance of knee motions. A key feature of the device is the two-dimensional design of the elastomer muscles that not only allows the compactness of the device, but also significantly simplifies the manufacturing process. In addition, the fabric sleeves make the device lightweight and easily wearable. The elastomer muscles were characterized and demonstrated an initial contraction force of 38 N and a maximum contraction of 18 mm at approximately 104 kPa input pressure. Four elastomer muscles were employed for assisted knee extension and flexion. The robotic device was tested on a 3D printed leg model with an articulated knee joint. Experiments were conducted to examine the relation between systematic change in air pressure and knee extension-flexion. The results showed maximum extension and flexion angles of 95° and 37°, respectively. However, these angles are highly dependent on underlying leg mechanics and positions. The device was also able to generate maximum extension and flexion forces of 3.5 N and 7 N, respectively.
Robotic Artificial Muscles: Current Progress and Future Perspectives Robotic artificial muscles are a subset of artificial muscles that are capable of producing biologically inspired motions useful for robot systems, offering large power-to-weight ratios, inherent compliance, and large ranges of motion. These actuators, ranging from shape memory alloys to dielectric elastomers, are increasingly popular for biomimetic robots as they may operate without using complex linkage designs or other cumbersome mechanisms. Recent achievements in fabrication, modeling, and control methods have significantly contributed to their potential utilization in a wide range of applications. However, no survey paper has gone into depth regarding considerations pertaining to their selection, design, and usage in generating biomimetic motions. In this paper, we discuss important characteristics and considerations in the selection, design, and implementation of various prominent and unique robotic artificial muscles for biomimetic robots, and provide perspectives on next-generation muscle-powered robots.
Development of muscle suit for upper limb We have been developing a "muscle suit" that provides muscular support to the paralyzed or those otherwise unable to move unaided, as well as to manual workers. The muscle suit is a garment without a metal frame and uses a McKibben actuator driven by compressed air. Because the actuators are sewn into the garment, no metal frame is needed, making the muscle suit very light and cheap. With the muscle suit, the patient can willfully control his or her movement. The muscle suit is very helpful for both muscular and emotional support. We propose an armor-type muscle suit in order to overcome issues of a prototype system and then show how abduction motion, which, we believe, is the most difficult motion for the upper body, is realized.
Power Assist System HAL-3 for Gait Disorder Person We have developed the power assistive suit HAL (Hybrid Assistive Leg), which provides self-walking aid for persons with gait disorders or aged persons. In this paper, we introduce the HAL-3 system, which improves on the previously developed HAL-1 and HAL-2 systems. The EMG signal is used as the input to the power assist controller. We propose a calibration method that uses HAL-3 to identify the parameters relating the EMG to joint torque. We could obtain suitable torque estimates from the EMG and realize an apparatus that enables power to be used for walking and standing up according to the intention of the operator.
Image quality assessment: from error visibility to structural similarity. Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
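The structural-similarity idea above can be illustrated with a minimal single-window SSIM computed from global means, variances, and covariance. The published index averages this over local sliding windows; the constants k1, k2 and the data_range default below follow common conventions but are assumptions here, and the test images are synthetic:

```python
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Simplified single-window SSIM: luminance, contrast and structure
    statistics over the whole image rather than over local windows."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (k1 * data_range) ** 2
    c2 = (k2 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))

img = np.tile(np.arange(64.0), (64, 1))                # smooth gradient image
noisy = img + np.random.default_rng(0).normal(0, 8, img.shape)
print(round(ssim_global(img, img), 4))                 # identical images -> 1.0
print(ssim_global(img, noisy) < 1.0)                   # distortion lowers SSIM
```

An identical image pair scores exactly 1 because numerator and denominator coincide term by term, which is the sanity check that distinguishes SSIM from error-visibility metrics like MSE.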
Theory and Experiment on Formation-Containment Control of Multiple Multirotor Unmanned Aerial Vehicle Systems. Formation-containment control problems for multiple multirotor unmanned aerial vehicle (UAV) systems with directed topologies are studied, where the states of leaders form desired formation and the states of followers converge to the convex hull spanned by those of the leaders. First, formation-containment protocols are constructed based on the neighboring information of UAVs. Then, sufficient con...
Response time in man-computer conversational transactions The literature concerning man-computer transactions abounds in controversy about the limits of "system response time" to a user's command or inquiry at a terminal. Two major semantic issues prohibit resolving this controversy. One issue centers around the question of "Response time to what?" The implication is that different human purposes and actions will have different acceptable or useful response times.
Human Shoulder Modeling Including Scapulo-Thoracic Constraint And Joint Sinus Cones In virtual human modeling, the shoulder is usually composed of clavicular, scapular and arm segments related by rotational joints. Although the model is improved, the realistic animation of the shoulder is hardly achieved. This is due to the fact that it is difficult to coordinate the simultaneous motion of the shoulder components in a consistent way. Also, the common use of independent one-degree of freedom (DOF) joint hierarchies does not properly render the 3-D accessibility space of real joints. On the basis of former biomechanical investigations, we propose here an extended shoulder model including scapulo-thoracic constraint and joint sinus cones. As a demonstration, the model is applied, using inverse kinematics, to the animation of a 3-D anatomic muscled skeleton model.
Stable fuzzy logic control of a general class of chaotic systems This paper proposes a new approach to the stable design of fuzzy logic control systems that deal with a general class of chaotic processes. The stable design is carried out on the basis of a stability analysis theorem, which employs Lyapunov's direct method and the separate stability analysis of each rule in the fuzzy logic controller (FLC). The stability analysis theorem offers sufficient conditions for the stability of a general class of chaotic processes controlled by Takagi---Sugeno---Kang FLCs. The approach suggested in this paper is advantageous because inserting a new rule requires the fulfillment of only one of the conditions of the stability analysis theorem. Two case studies concerning the fuzzy logic control of representative chaotic systems that belong to the general class of chaotic systems are included in order to illustrate our stable design approach. A set of simulation results is given to validate the theoretical results.
A blind medical image watermarking: DWT-SVD based robust and secure approach for telemedicine applications. In this paper, a blind image watermarking scheme based on discrete wavelet transform (DWT) and singular value decomposition (SVD) is proposed. In this scheme, DWT is applied on the ROI (region of interest) of the medical image to get different frequency subbands of its wavelet decomposition. On the low-frequency subband LL of the ROI, block-SVD is applied to get different singular matrices. A pair of elements with similar values is identified from the left singular value matrix of these selected blocks. The values of these pairs are modified using a certain threshold to embed a bit of watermark content. An appropriate threshold is chosen to achieve the imperceptibility and robustness of the medical image and watermark contents, respectively. For authentication and identification of the original medical image, one watermark image (logo) and another text watermark have been used. The watermark image provides authentication, whereas the text data represents the electronic patient record (EPR) for identification. At the receiving end, blind recovery of both watermark contents is performed by a comparison scheme similar to that used during the embedding process. The proposed algorithm is applied to various groups of medical images like X-ray, CT scan and mammography. This scheme offers better visibility of the watermarked image and recovery of the watermark content due to the DWT-SVD combination. Moreover, the use of a Hamming error correcting code (ECC) on the EPR text bits reduces the BER and thus provides better recovery of the EPR. The performance of the proposed algorithm with EPR data coded by the Hamming code is compared with that of the BCH error correcting code, and it is found that the latter performs better. A result analysis shows that the imperceptibility of the watermarked image is good, as the PSNR is above 43 dB and the WPSNR is above 52 dB for all sets of images.
In addition, the robustness of the scheme is better than that of an existing scheme for a similar set of medical images in terms of normalized correlation coefficient (NCC) and bit-error-rate (BER). An analysis is also carried out to verify the performance of the proposed scheme for different sizes of watermark contents (image and EPR data). It is observed from the analysis that the proposed scheme is also appropriate for watermarking of color images. Using the proposed scheme, watermark contents are extracted successfully under various noise attacks like JPEG compression, filtering, Gaussian noise, salt-and-pepper noise, cropping, and rotation. Performance comparison of the proposed scheme with existing schemes shows that the proposed scheme has better robustness against different types of attacks. Moreover, the proposed scheme is also robust under a set of benchmark attacks known as checkmark attacks.
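The paper's exact pair-comparison embedding rule is not reproduced here, but the flavor of blind SVD-domain watermarking can be sketched with a simpler, assumed quantization of a block's largest singular value; q is an illustrative quantization step, and a plain random 8x8 block stands in for a DWT LL-subband block:

```python
import numpy as np

def embed_bit(block, bit, q=12.0):
    """Embed one bit by quantizing the block's largest singular value
    (a simplification of the paper's pair-comparison rule)."""
    u, s, vt = np.linalg.svd(block, full_matrices=False)
    step = np.floor(s[0] / q)
    s[0] = step * q + (0.75 * q if bit else 0.25 * q)
    return u @ np.diag(s) @ vt

def extract_bit(block, q=12.0):
    """Blind extraction: no original image needed, only the quantizer."""
    s = np.linalg.svd(block, compute_uv=False)
    return 1 if (s[0] % q) > q / 2 else 0

rng = np.random.default_rng(1)
roi = rng.uniform(0, 255, (8, 8))          # stand-in for an LL-subband block
for b in (0, 1):
    marked = embed_bit(roi.copy(), b)
    print(extract_bit(marked) == b)        # blind recovery of the embedded bit
```

Because extraction only inspects the quantization cell of the singular value, the detector needs neither the host image nor the watermark, which is what makes the scheme blind.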
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.213778
0.213778
0.213778
0.213778
0.213778
0.106889
0.023774
0
0
0
0
0
0
0
Precoding and Power Optimization in Cell-Free Massive MIMO Systems. Cell-free Massive multiple-input multiple-output (MIMO) comprises a large number of distributed low-cost low-power single antenna access points (APs) connected to a network controller. The number of AP antennas is significantly larger than the number of users. The system is not partitioned into cells and each user is served by all APs simultaneously. The simplest linear precoding schemes are conju...
Sub-modularity and Antenna Selection in MIMO systems In this paper, we show that the optimal receive antenna subset selection problem for maximizing the mutual information in a point-to-point MIMO system is sub-modular. Consequently, a greedy step-wise optimization approach, where at each step, an antenna that maximizes the incremental gain is added to the existing antenna subset, is guaranteed to be within a (1-1/e)-fraction of the global optimal value independent of all parameters. For a single-antenna-equipped source and destination with multiple relays, we show that the relay antenna selection problem to maximize the mutual information is modular and a greedy step-wise optimization approach leads to an optimal solution.
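The greedy step-wise selection described above can be sketched under the usual log-det mutual-information objective; the channel dimensions and SNR below are illustrative, and the (1-1/e) guarantee applies because this set function is monotone submodular:

```python
import numpy as np

def mutual_info(h, subset, snr=1.0):
    """log2 det(I + snr * H_S H_S^H) for the receive-antenna rows in subset."""
    hs = h[list(subset), :]
    m = np.eye(len(subset)) + snr * hs @ hs.conj().T
    return np.log2(np.linalg.det(m).real)

def greedy_select(h, k, snr=1.0):
    """At each step, add the antenna with the largest incremental gain."""
    chosen = []
    remaining = set(range(h.shape[0]))
    for _ in range(k):
        best = max(remaining, key=lambda a: mutual_info(h, chosen + [a], snr))
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(0)
# 6 candidate receive antennas, 3 transmit antennas, i.i.d. Rayleigh channel
h = (rng.normal(size=(6, 3)) + 1j * rng.normal(size=(6, 3))) / np.sqrt(2)
sel = greedy_select(h, k=3)
print(sel, round(mutual_info(h, sel), 3))
```

Each greedy step costs one small determinant per remaining antenna, which is what makes the step-wise approach attractive compared with enumerating all subsets.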
Achievable Rates of Full-Duplex MIMO Radios in Fast Fading Channels With Imperfect Channel Estimation We study the theoretical performance of two full-duplex multiple-input multiple-output (MIMO) radio systems: a full-duplex bi-directional communication system and a full-duplex relay system. We focus on the effect of a (digitally manageable) residual self-interference due to imperfect channel estimation (with independent and identically distributed (i.i.d.) Gaussian channel estimation error) and transmitter noise. We assume that the instantaneous channel state information (CSI) is not available at the transmitters. To maximize the system ergodic mutual information, which is a nonconvex function of power allocation vectors at the nodes, a gradient projection algorithm is developed to optimize the power allocation vectors. This algorithm exploits both spatial and temporal freedoms of the source covariance matrices of the MIMO links between transmitters and receivers to achieve higher sum ergodic mutual information. It is observed through simulations that the full-duplex mode is optimal when the nominal self-interference is low, and the half-duplex mode is optimal when the nominal self-interference is high. In addition to an exact closed-form ergodic mutual information expression, we introduce a much simpler asymptotic closed-form ergodic mutual information expression, which in turn simplifies the computation of the power allocation vectors.
Low-Complexity Beam Allocation for Switched-Beam Based Multiuser Massive MIMO Systems. This paper addresses the beam allocation problem in a switched-beam based massive multiple-input-multiple-output (MIMO) system working at the millimeter wave frequency band, with the target of maximizing the sum data rate. This beam allocation problem can be formulated as a combinatorial optimization problem under two constraints that each user uses at most one beam for its data transmission and each beam serves at most one user. The brute-force search is a straightforward method to solve this optimization problem. However, for a massive MIMO system with a large number of beams N, the brute-force search results in intractable complexity O(N^K), where K is the number of users. In this paper, in order to solve the beam allocation problem with affordable complexity, a suboptimal low-complexity beam allocation (LBA) algorithm is developed based on submodular optimization theory, which has been shown to be a powerful tool for solving combinatorial optimization problems. Simulation results show that our proposed LBA algorithm achieves nearly optimal sum data rate with complexity O(K log N). Furthermore, the average service ratio, i.e., the ratio of the number of users being served to the total number of users, is theoretically analyzed and derived as an explicit function of the ratio N/K.
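The one-beam-per-user and one-user-per-beam constraints make this an assignment problem; a generic greedy matching heuristic over a rate table (not the exact LBA algorithm, whose construction is in the paper) can be sketched as follows, with the rate matrix drawn at random for illustration:

```python
import numpy as np

def greedy_beam_allocation(rate):
    """Assign each user at most one beam and each beam at most one user,
    always taking the best still-unassigned (user, beam) pair."""
    n_users, n_beams = rate.shape
    pairs = sorted(((rate[u, b], u, b) for u in range(n_users)
                    for b in range(n_beams)), reverse=True)
    used_u, used_b, alloc, total = set(), set(), {}, 0.0
    for r, u, b in pairs:
        if u not in used_u and b not in used_b:
            alloc[u] = b
            used_u.add(u); used_b.add(b)
            total += r
    return alloc, total

rng = np.random.default_rng(2)
rates = rng.uniform(0.1, 5.0, size=(4, 16))   # 4 users, 16 switched beams
alloc, sum_rate = greedy_beam_allocation(rates)
print(alloc, round(sum_rate, 3))
```

With far more beams than users (N >> K), every user ends up served, which mirrors the service-ratio behavior the paper analyzes as a function of N/K.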
Dynamic TDD Systems for 5G and Beyond: A Survey of Cross-Link Interference Mitigation Dynamic time division duplex (D-TDD) dynamically allocates the transmission directions for traffic adaptation in each cell. D-TDD systems are receiving a lot of attention because they can reduce latency and increase spectrum utilization via flexible and dynamic duplex operation in 5G New Radio (NR). However, the advantages of the D-TDD system are difficult to fully utilize due to the cross-link interference (CLI) arising from the use of different transmission directions between adjacent cells. This paper is a survey of the research from academia and the standardization efforts being undertaken to solve this CLI problem and make the D-TDD system a reality. Specifically, we categorize and present the approaches to mitigating CLI according to operational principles. Furthermore, we present the signaling necessary to apply the CLI mitigation schemes. We also present information-theoretic performance analysis of D-TDD systems in various environments. As topics for future works, we discuss the research challenges and opportunities associated with the CLI mitigation schemes and signaling design in a variety of environments. This survey is recommended for those who are in the initial stage of studying D-TDD systems and those who wish to develop a more feasible D-TDD system as a baseline for reviewing the research flow and standardization trends surrounding D-TDD systems and to identify areas of focus for future works.
Dynamic Time-Frequency Division Duplex In this paper, we introduce dynamic time-frequency-division duplex (D-TFDD), which is a novel duplexing scheme that combines time-division duplex (TDD) and frequency-division duplex (FDD). In D-TFDD, a user receives from the base station (BS) on the downlink in one frequency band and transmits to the BS on the uplink in another frequency band, as in FDD. Next, the user shares its uplink transmissi...
Massive MIMO Networks: Spectral, Energy, and Hardware Efficiency Abstract: Massive multiple-input multiple-output (MIMO) is one of the most promising technologies for the next generation of wireless communication networks because it has the potential to provide game-changing improvements in spectral efficiency (SE) and energy efficiency (EE). This monograph summarizes many years of research insights in a clear and self-contained way and provides the reader with the necessary knowledge and mathematical tools to carry out independent research in this area. Starting from a rigorous definition of Massive MIMO, the monograph covers the important aspects of channel estimation, SE, EE, hardware efficiency (HE), and various practical deployment considerations. From the beginning, a very general, yet tractable, canonical system model with spatial channel correlation is introduced. This model is used to realistically assess the SE and EE, and is later extended to also include the impact of hardware impairments. Owing to this rigorous modeling approach, a lot of classic "wisdom" about Massive MIMO, based on too simplistic system models, is shown to be questionable.
Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
On the ratio of optimal integral and fractional covers It is shown that the ratio of optimal integral and fractional covers of a hypergraph does not exceed 1 + log d, where d is the maximum degree. This theorem may replace probabilistic methods in certain circumstances. Several applications are shown.
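The bound can be exercised with the standard greedy cover heuristic, whose cover size stays within a 1 + log d factor of the fractional optimum; the small instance below is purely illustrative, and the factor-of-2 comparison uses the fact that this instance admits an integral (hence fractional) cover of size at most 2:

```python
import math

def greedy_cover(universe, sets):
    """Repeatedly pick the set covering the most still-uncovered elements."""
    uncovered, cover = set(universe), []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        cover.append(best)
        uncovered -= sets[best]
    return cover

universe = range(12)
sets = [set(range(0, 6)), set(range(6, 12)), set(range(0, 12, 2)),
        set(range(1, 12, 2)), {0, 1, 2}, {9, 10, 11}]
cover = greedy_cover(universe, sets)
d = max(len(s) for s in sets)                     # maximum degree here
print(cover, len(cover) <= (1 + math.log(d)) * 2)
```

Sets 0 and 1 already cover the universe, so the greedy cover of size 2 sits comfortably under the (1 + log d) multiple of the fractional optimum.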
An optimal parallel algorithm for the minimum circle-cover problem Given a set of n circular arcs, the problem of finding a minimum number of circular arcs whose union covers the whole circle has been considered both in sequential and parallel computational models. Here we present a parallel algorithm in the EREW PRAM model that runs in O(log n) time using O(n) processors if the arcs are not given already sorted, and using O(n/log n) processors otherwise. Our algorithm is optimal since the problem has an Ω(n log n) lower bound for the unsorted-arcs case, and an Ω(n) lower bound for the sorted-arcs case. The previous best known parallel algorithm runs in O(log n) time using O(n2) processors, in the worst case, in the CREW PRAM model.
Linear quadratic bumpless transfer A method for bumpless transfer using ideas from LQ theory is presented and shown to reduce to the Hanus conditioning scheme under certain conditions.
GASPAD: A General and Efficient mm-Wave Integrated Circuit Synthesis Method Based on Surrogate Model Assisted Evolutionary Algorithm The design and optimization (both sizing and layout) of mm-wave integrated circuits (ICs) have attracted much attention due to the growing demand in industry. However, available manual design and synthesis methods suffer from a high dependence on design experience, being inefficient or not general enough. To address this problem, a new method, called general mm-wave IC synthesis based on Gaussian process model assisted differential evolution (GASPAD), is proposed in this paper. A medium-scale computationally expensive constrained optimization problem must be solved for the targeted mm-wave IC design problem. Besides the basic techniques of using a global optimization algorithm to obtain highly optimized design solutions and using surrogate models to obtain a high efficiency, a surrogate model-aware search mechanism (SMAS) for tackling the several tens of design variables (medium scale) and a method to appropriately integrate constraint handling techniques into SMAS for tackling the multiple (high-) performance specifications are proposed. Experiments on two 60 GHz power amplifiers in a 65 nm CMOS technology and two mathematical benchmark problems are carried out. Comparisons with the state-of-art provide evidence of the important advantages of GASPAD in terms of solution quality and efficiency.
Safe mutations for deep and recurrent neural networks through output gradients While neuroevolution (evolving neural networks) has been successful across a variety of domains from reinforcement learning, to artificial life, to evolutionary robotics, it is rarely applied to large, deep neural networks. A central reason is that while random mutation generally works in low dimensions, a random perturbation of thousands or millions of weights will likely break existing functionality. This paper proposes a solution: a family of safe mutation (SM) operators that facilitate exploration without dramatically altering network behavior or requiring additional interaction with the environment. The most effective SM variant scales the degree of mutation of each individual weight according to the sensitivity of the network's outputs to that weight, which requires computing the gradient of outputs with respect to the weights (instead of the gradient of error, as in conventional deep learning). This safe mutation through gradients (SM-G) operator dramatically increases the ability of a simple genetic algorithm-based neuroevolution method to find solutions in high-dimensional domains that require deep and/or recurrent neural networks, including domains that require processing raw pixels. By improving our ability to evolve deep neural networks, this new safer approach to mutation expands the scope of domains amenable to neuroevolution.
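The per-weight sensitivity scaling behind SM-G can be sketched with finite differences on a toy one-layer network (the paper obtains the output gradient by backpropagation; the tanh layer, dimensions, and step sizes here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def output(w, x):
    return np.tanh(w @ x)                          # toy one-layer network

def sensitivity(w, x, eps=1e-5):
    """Finite-difference stand-in for the norm of d(output)/d(w_ij)."""
    g = np.zeros_like(w)
    base = output(w, x)
    for idx in np.ndindex(*w.shape):
        wp = w.copy(); wp[idx] += eps
        g[idx] = np.linalg.norm(output(wp, x) - base) / eps
    return g

w = rng.normal(size=(3, 5))
x = rng.normal(size=5) * np.array([5.0, 1.0, 1.0, 1.0, 0.1])  # uneven input scales
delta = rng.normal(size=w.shape) * 0.5             # raw random mutation
sens = sensitivity(w, x) + 1e-8
safe = delta / sens * sens.mean()                  # shrink steps on sensitive weights
scale = np.abs(safe / delta)                       # per-weight step relative to raw
print(np.argmax(sens) == np.argmin(scale))         # most sensitive weight moved least
```

The rescaling keeps the overall mutation magnitude comparable while shifting exploration toward weights whose perturbation barely changes the network's outputs, which is what protects existing functionality in high dimensions.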
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal to develop interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which rise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in specific, in this paper, we enfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidating it based on our comprehensive empirical experience with research in kindergarten settings, both laboratory and real-world contexts, and (iii) providing a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
1.030339
0.034286
0.034286
0.034286
0.034286
0.028571
0.006456
0.000036
0
0
0
0
0
0
DeepRoad: GAN-based metamorphic testing and input validation framework for autonomous driving systems. While Deep Neural Networks (DNNs) have established the fundamentals of image-based autonomous driving systems, they may exhibit erroneous behaviors and cause fatal accidents. To address the safety issues in autonomous driving systems, a recent set of testing techniques have been designed to automatically generate artificial driving scenes to enrich the test suite, e.g., generating new input images transformed from the original ones. However, these techniques are insufficient due to two limitations: first, many such synthetic images often lack diversity of driving scenes, and hence compromise the resulting efficacy and reliability. Second, for machine-learning-based systems, a mismatch between training and application domain can dramatically degrade system accuracy, such that it is necessary to validate inputs for improving system robustness. In this paper, we propose DeepRoad, an unsupervised DNN-based framework for automatically testing the consistency of DNN-based autonomous driving systems and online validation. First, DeepRoad automatically synthesizes large amounts of diverse driving scenes without using image transformation rules (e.g. scale, shear and rotation). In particular, DeepRoad is able to produce driving scenes with various weather conditions (including those with rather extreme conditions) by applying Generative Adversarial Networks (GANs) along with the corresponding real-world weather scenes. Second, DeepRoad utilizes metamorphic testing techniques to check the consistency of such systems using synthetic images. Third, DeepRoad validates input images for DNN-based systems by measuring the distance of the input and training images using their VGGNet features. We implement DeepRoad to test three well-recognized DNN-based autonomous driving systems in the Udacity self-driving car challenge.
The experimental results demonstrate that DeepRoad can detect thousands of inconsistent behaviors for these systems, and effectively validate input images to potentially enhance the system robustness as well.
Using Ontology-Based Traffic Models for More Efficient Decision Making of Autonomous Vehicles The paper describes how a high-level abstract world model can be used to support the decision-making process of an autonomous driving system. The approach uses a hierarchical world model and distinguishes between a low-level model for the trajectory planning and a high-level model for solving the traffic coordination problem. The abstract world model used in the CyberCars-2 project is presented. It is based on a topological lane segmentation and introduces relations to represent the semantic context of the traffic scenario. This makes it much easier to realize a consistent and complete driving control system, and to analyze, evaluate and simulate such a system.
Ontology-based methods for enhancing autonomous vehicle path planning We report the results of a first implementation demonstrating the use of an ontology to support reasoning about obstacles to improve the capabilities and performance of on-board route planning for autonomous vehicles. This is part of an overall effort to evaluate the performance of ontologies in different components of an autonomous vehicle within the 4D/RCS system architecture developed at NIST. Our initial focus has been on simple roadway driving scenarios where the controlled vehicle encounters potential obstacles in its path. As reported elsewhere [C. Schlenoff, S. Balakirsky, M. Uschold, R. Provine, S. Smith, Using ontologies to aid navigation planning in autonomous vehicles, Knowledge Engineering Review 18 (3) (2004) 243–255], our approach is to develop an ontology of objects in the environment, in conjunction with rules for estimating the damage that would be incurred by collisions with different objects in different situations. Automated reasoning is used to estimate collision damage; this information is fed to the route planner to help it decide whether to plan to avoid the object. We describe the results of the first implementation that integrates the ontology, the reasoner and the planner. We describe our insights and lessons learned and discuss resulting changes to our approach.
Online Verification of Automated Road Vehicles Using Reachability Analysis An approach for formally verifying the safety of automated vehicles is proposed. Due to the uniqueness of each traffic situation, we verify safety online, i.e., during the operation of the vehicle. The verification is performed by predicting the set of all possible occupancies of the automated vehicle and other traffic participants on the road. In order to capture all possible future scenarios, we apply reachability analysis to consider all possible behaviors of mathematical models considering uncertain inputs (e.g., sensor noise, disturbances) and partially unknown initial states. Safety is guaranteed with respect to the modeled uncertainties and behaviors if the occupancy of the automated vehicle does not intersect that of other traffic participants for all times. The applicability of the approach is demonstrated by test drives with an automated vehicle at the Robotics Institute at Carnegie Mellon University.
AVFI: Fault Injection for Autonomous Vehicles Autonomous vehicle (AV) technology is rapidly becoming a reality on U.S. roads, offering the promise of improvements in traffic management, safety, and the comfort and efficiency of vehicular travel. With this increasing popularity and ubiquitous deployment, resilience has become a critical requirement for public acceptance and adoption. Recent studies into the resilience of AVs have shown that though the AV systems are improving over time, they have not reached human levels of automation. Prior work in this area has studied the safety and resilience of individual components of the AV system (e.g., testing of neural networks powering the perception function). However, methods for holistic end-to-end resilience assessment of AV systems are still non-existent.
Automatically testing self-driving cars with search-based procedural content generation Self-driving cars rely on software which needs to be thoroughly tested. Testing self-driving car software in real traffic is not only expensive but also dangerous, and has already caused fatalities. Virtual tests, in which self-driving car software is tested in computer simulations, offer a more efficient and safer alternative compared to naturalistic field operational tests. However, creating suitable test scenarios is laborious and difficult. In this paper we combine procedural content generation, a technique commonly employed in modern video games, and search-based testing, a testing technique proven to be effective in many domains, in order to automatically create challenging virtual scenarios for testing self-driving car software. Our AsFault prototype implements this approach to generate virtual roads for testing lane keeping, one of the defining features of autonomous driving. Evaluation on two different self-driving car software systems demonstrates that AsFault can generate effective virtual road networks that succeed in revealing software failures, which manifest as cars departing their lane. Compared to random testing AsFault was not only more efficient, but also caused up to twice as many lane departures.
Acclimatizing the Operational Design Domain for Autonomous Driving Systems The operational design domain (ODD) of an automated driving system (ADS) can be used to confine the environmental scope of where the ADS is safe to execute. ODD acclimatization is one of the necessary steps for validating vehicle safety in complex traffic environments. This article proposes an approach and architectural design to extract and enhance the ODD of the ADS based on the task scenario an...
Accelerated Evaluation of Automated Vehicles Safety in Lane-Change Scenarios Based on Importance Sampling Techniques Automated vehicles (AVs) must be thoroughly evaluated before their release and deployment. A widely used evaluation approach is the Naturalistic-Field Operational Test (N-FOT), which tests prototype vehicles directly on the public roads. Due to the low exposure to safety-critical scenarios, N-FOTs are time consuming and expensive to conduct. In this paper, we propose an accelerated evaluation approach for AVs. The results can be used to generate motions of the other primary vehicles to accelerate the verification of AVs in simulations and controlled experiments. Frontal collision due to unsafe cut-ins is the target crash type of this paper. Human-controlled vehicles making unsafe lane changes are modeled as the primary disturbance to AVs based on data collected by the University of Michigan Safety Pilot Model Deployment Program. The cut-in scenarios are generated based on skewed statistics of collected human driver behaviors, which generate risky testing scenarios while preserving the statistical information so that the safety benefits of AVs in nonaccelerated cases can be accurately estimated. The cross-entropy method is used to recursively search for the optimal skewing parameters. The frequencies of the occurrences of conflicts, crashes, and injuries are estimated for a modeled AV, and the achieved accelerated rate is around 2000 to 20 000. In other words, in the accelerated simulations, driving for 1000 miles will expose the AV with challenging scenarios that will take about 2 to 20 million miles of real-world driving to encounter. This technique thus has the potential to greatly reduce the development and validation time for AVs.
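The skewed-sampling idea can be illustrated on a toy rare-event problem: estimate a small tail probability under a nominal Gaussian by drawing samples from a distribution tilted toward the critical region and correcting each sample with its likelihood ratio. The setup below is my own minimal importance-sampling example with cross-entropy-style tilting, not the paper's lane-change traffic model.

```python
import math
import random

def is_estimate_rare_event(threshold=4.0, n=50_000, seed=0):
    """Importance-sampling estimate of P(X > threshold) for X ~ N(0, 1).
    Samples are drawn from the skewed proposal N(threshold, 1), so the rare
    region is hit roughly half the time, and each hit is reweighted by the
    likelihood ratio phi(x; 0, 1) / phi(x; threshold, 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)            # sample from the skewed density
        if x > threshold:
            # likelihood ratio simplifies to exp(mu^2/2 - mu*x) for mean shift mu
            total += math.exp(0.5 * threshold ** 2 - threshold * x)
    return total / n
```

Naive Monte Carlo would need on the order of millions of samples to see this event (true probability about 3.17e-5) with comparable accuracy, which mirrors the paper's accelerated-rate argument.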
A survey of socially interactive robots This paper reviews “socially interactive robots”: robots for which social human–robot interaction is important. We begin by discussing the context for socially interactive robots, emphasizing the relationship to other research fields and the different forms of “social robots”. We then present a taxonomy of design methods and system components used to build socially interactive robots. Finally, we describe the impact of these robots on humans and discuss open issues. An expanded version of this paper, which contains a survey and taxonomy of current applications, is available as a technical report [T. Fong, I. Nourbakhsh, K. Dautenhahn, A survey of socially interactive robots: concepts, design and applications, Technical Report No. CMU-RI-TR-02-29, Robotics Institute, Carnegie Mellon University, 2002].
A General Equilibrium Model for Industries with Price and Service Competition This paper develops a stochastic general equilibrium inventory model for an oligopoly, in which all inventory constraint parameters are endogenously determined. We propose several systems of demand processes whose distributions are functions of all retailers' prices and all retailers' service levels. We proceed with the investigation of the equilibrium behavior of infinite-horizon models for industries facing this type of generalized competition, under demand uncertainty. We systematically consider the following three competition scenarios. (1) Price competition only: Here, we assume that the firms' service levels are exogenously chosen, but characterize how the price and inventory strategy equilibrium vary with the chosen service levels. (2) Simultaneous price and service-level competition: Here, each of the firms simultaneously chooses a service level and a combined price and inventory strategy. (3) Two-stage competition: The firms make their competitive choices sequentially. In a first stage, all firms simultaneously choose a service level; in a second stage, the firms simultaneously choose a combined pricing and inventory strategy with full knowledge of the service levels selected by all competitors. We show that in all of the above settings a Nash equilibrium of infinite-horizon stationary strategies exists and that it is of a simple structure, provided a Nash equilibrium exists in a so-called reduced game. We pay particular attention to the question of whether a firm can choose its service level on the basis of its own (input) characteristics (i.e., its cost parameters and demand function) only. We also investigate under which of the demand models a firm, under simultaneous competition, responds to a change in the exogenously specified characteristics of the various competitors by either: (i) adjusting its service level and price in the same direction, thereby compensating for price increases (decreases) by offering improved (inferior) service, or (ii) adjusting them in opposite directions, thereby simultaneously offering better or worse prices and service.
Load Scheduling and Dispatch for Aggregators of Plug-In Electric Vehicles This paper proposes an operating framework for aggregators of plug-in electric vehicles (PEVs). First, a minimum-cost load scheduling algorithm is designed, which determines the purchase of energy in the day-ahead market based on the forecast electricity price and PEV power demands. The same algorithm is applicable for negotiating bilateral contracts. Second, a dynamic dispatch algorithm is developed, used for distributing the purchased energy to PEVs on the operating day. Simulation results are used to evaluate the proposed algorithms, and to demonstrate the potential impact of an aggregated PEV fleet on the power system.
An Efficient Non-Negative Matrix-Factorization-Based Approach to Collaborative Filtering for Recommender Systems Matrix-factorization (MF)-based approaches prove to be highly accurate and scalable in addressing collaborative filtering (CF) problems. During the MF process, the non-negativity, which ensures good representativeness of the learnt model, is critically important. However, current non-negative MF (NMF) models are mostly designed for problems in computer vision, while CF problems differ from them due to their extreme sparsity of the target rating-matrix. Currently available NMF-based CF models are based on matrix manipulation and lack practicability for industrial use. In this work, we focus on developing an NMF-based CF model with a single-element-based approach. The idea is to investigate the non-negative update process depending on each involved feature rather than on the whole feature matrices. With the non-negative single-element-based update rules, we subsequently integrate the Tikhonov regularizing terms, and propose the regularized single-element-based NMF (RSNMF) model. RSNMF is especially suitable for solving CF problems subject to the constraint of non-negativity. The experiments on large industrial datasets show high accuracy and low-computational complexity achieved by RSNMF.
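The non-negative, regularized update idea can be sketched on a tiny sparse rating list: each factor element is refreshed by a multiplicative rule whose numerator and denominator are accumulated only over the observed ratings, with a Tikhonov term folded into the denominator. This is an illustrative approximation of the RSNMF idea; the variable names, regularization grouping, and hyperparameters are my own assumptions, not the paper's exact update rules.

```python
import random

def rsnmf(ratings, n_users, n_items, k=2, lam=0.05, iters=200, seed=1):
    """Sparse non-negative MF with multiplicative, element-wise updates.
    ratings: list of (user, item, value) for observed entries only."""
    rng = random.Random(seed)
    P = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n_items)]
    pred = lambda u, i: sum(P[u][f] * Q[i][f] for f in range(k))
    for _ in range(iters):
        for f in range(k):
            for u in range(n_users):
                num = den = 0.0
                for (uu, i, r) in ratings:       # accumulate over observed entries
                    if uu == u:
                        num += Q[i][f] * r
                        den += Q[i][f] * pred(u, i) + lam * P[u][f]
                if den > 0:
                    P[u][f] *= num / den         # multiplicative => stays non-negative
            for i in range(n_items):
                num = den = 0.0
                for (u, ii, r) in ratings:
                    if ii == i:
                        num += P[u][f] * r
                        den += P[u][f] * pred(u, i) + lam * Q[i][f]
                if den > 0:
                    Q[i][f] *= num / den
    return P, Q, pred
```

Because every update multiplies a non-negative element by a non-negative ratio, non-negativity is preserved without any projection step, which is the property the abstract emphasizes.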
Driver Gaze Zone Estimation Using Convolutional Neural Networks: A General Framework and Ablative Analysis Driver gaze has been shown to be an excellent surrogate for driver attention in intelligent vehicles. With the recent surge of highly autonomous vehicles, driver gaze can be useful for determining the handoff time to a human driver. While there has been significant improvement in personalized driver gaze zone estimation systems, a generalized system which is invariant to different subjects, perspe...
Dual-objective mixed integer linear program and memetic algorithm for an industrial group scheduling problem Group scheduling problems have attracted much attention owing to their many practical applications. This work proposes a new bi-objective serial-batch group scheduling problem considering the constraints of sequence-dependent setup time, release time, and due time. It is originated from an important industrial process, i.e., wire rod and bar rolling process in steel production systems. Two objecti...
Scores (score_0–score_13): 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.066667, 0, 0, 0, 0, 0, 0
Tracking Control of Robot Manipulators with Unknown Models: A Jacobian-Matrix-Adaption Method. Tracking control of robot manipulators is a fundamental and significant problem in the robotics industry. As a conventional solution, the Jacobian-matrix-pseudo-inverse (JMPI) method suffers from two major limitations: one is the requirement for known robot-model information such as parameters and structure; the other is the position error accumulation phenomenon caused by the open-loop nature. To...
Model-free motion control of continuum robots based on a zeroing neurodynamic approach As a result of inherent flexibility and structural compliance, continuum robots have great potential in practical applications and are attracting more and more attentions. However, these characteristics make it difficult to acquire the accurate kinematics of continuum robots due to uncertainties, deformation and external loads. This paper introduces a method based on a zeroing neurodynamic approach to solve the trajectory tracking problem of continuum robots. The proposed method can achieve the control of a bellows-driven continuum robot just relying on the actuator input and sensory output information, without knowing any information of the kinematic model. This approach reduces the computational load and can guarantee the real time control. The convergence, stability, and robustness of the proposed approach are proved by theoretical analyses. The effectiveness of the proposed method is verified by simulation studies including tracking performance, comparisons with other three methods, and robustness tests.
Adaptive Discrete ZND Models for Tracking Control of Redundant Manipulator In recent years, many models with high precision for redundant manipulator tracking control have been proposed based on precise kinematics equations. Nevertheless, without precise kinematic equations, developing a model with high precision for tracking control is meaningful. With the help of zeroing neural dynamics (ZND), a continuous ZND model with adaptive Jacobian matrix is obtained. For better computer operation and easier understanding, developing corresponding discrete ZND (DZND) model is also significant. Therefore, two DZND models (termed DZND-I model and DZND-II model) are proposed in this article on the basis of two discretization formulas, respectively. Meanwhile, theoretical analyses are conducted to ensure the efficacy of DZND-I model and DZND-II model. Finally, the efficacy of the two DZND models with adaptive Jacobian matrix is substantiated by experimental results on the basis of the four-link manipulator, UR5 manipulator, and Jaco2 manipulator, respectively.
Kinematic model to control the end-effector of a continuum robot for multi-axis processing This paper presents a novel kinematic approach for controlling the end-effector of a continuum robot for in-situ repair/inspection in restricted and hazardous environments. Forward and inverse kinematic (IK) models have been developed to control the last segment of the continuum robot for performing multi-axis processing tasks using the last six Degrees of Freedom (DoF). The forward kinematics (FK) is proposed using a combination of Euler angle representation and homogeneous matrices. Due to the redundancy of the system, different constraints are proposed to solve the IK for different cases; therefore, the IK model is solved for bending and direction angles between (-pi/2 to + pi/2) radians. In addition, a novel method to calculate the Jacobian matrix is proposed for this type of hyper-redundant kinematics. The error between the results calculated using the proposed Jacobian algorithm and using the partial derivative equations of the FK map (with respect to linear and angular velocity) is evaluated. The error between the two models is found to be insignificant, thus, the Jacobian is validated as a method of calculating the IK for six DoF.
Optimization-Based Inverse Model of Soft Robots With Contact Handling. This letter presents a physically based algorithm to interactively simulate and control the motion of soft robots interacting with their environment. We use the finite-element method to simulate the nonlinear deformation of the soft structure, its actuators, and surroundings and propose a control method relying on a quadratic optimization to find the inverse of the model. The novelty of this work ...
Finite-Time Convergence Adaptive Fuzzy Control for Dual-Arm Robot With Unknown Kinematics and Dynamics Due to strongly coupled nonlinearities of the grasped dual-arm robot and the internal forces generated by grasped objects, the dual-arm robot control with uncertain kinematics and dynamics raises a challenging problem. In this paper, an adaptive fuzzy control scheme is developed for a dual-arm robot, where an approximate Jacobian matrix is applied to address the uncertain kinematic control, while a decentralized fuzzy logic controller is constructed to compensate for uncertain dynamics of the robotic arms and the manipulated object. Also, a novel finite-time convergence parameter adaptation technique is developed for the estimation of kinematic parameters and fuzzy logic weights, such that the estimation can be guaranteed to converge to small neighborhoods around their ideal values in a finite time. Moreover, a partial persistent excitation property of the Gaussian-membership-based fuzzy basis function was established to relax the conventional persistent excitation condition. This enables a designer to reuse these learned weight values in the future without relearning. Extensive simulation studies have been carried out using a dual-arm robot to illustrate the effectiveness of the proposed approach.
Neural-Dynamics-Enabled Jacobian Inversion For Model-Based Kinematic Control Of Multi-Section Continuum Manipulators Continuum manipulators are a new generation of robotic systems that possess infinite number of degrees of freedom associated with inherent compliance, unlike traditional robotic manipulators which consist of a finite number of rigid links. Because of this characteristic, controlling continuum manipulators is more complicated and difficult based on only traditional control theory. Soft computing techniques are solid alternative for improving the control performance of such kinds of robots. In this paper, we employ two types of neural dynamic approaches, i.e., gradient neural dynamics and zeroing neural dynamics, to solve the real-time Jacobian matrix pseudo-inversion problem, thereby achieving model-based kinematic control of multi-section continuum manipulators. Different kinds of neural dynamic models are investigated and their performances in terms of tracking accuracy are shown with and without noise disturbances. Simulation validations with a two-section and a three-section continuum manipulator demonstrate the feasibility and robustness of the proposed models. (C) 2021 Elsevier B.V. All rights reserved.
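The zeroing-neural-dynamics design principle used here can be demonstrated on a deliberately simple case: for a time-varying system A(t)x(t) = b(t), define the error E(t) = A(t)x(t) − b(t) and impose dE/dt = −λE so the error decays exponentially, which yields the state update dx/dt = A⁻¹(−λE + db/dt − (dA/dt)x). The sketch below uses a diagonal A(t) so the inverse is trivial; the full Jacobian-pseudoinversion setting of the paper is structurally analogous but not reproduced here.

```python
import math

def znd_track(lam=10.0, dt=1e-3, T=2.0):
    """Euler-integrated zeroing neural dynamics for a diagonal time-varying
    system A(t) x(t) = b(t). Returns the final residual max|A x - b|."""
    x = [0.0, 0.0]
    t = 0.0
    for _ in range(int(T / dt)):
        a = [2 + math.sin(t), 2 + math.cos(t)]   # diag(A(t)), always invertible
        da = [math.cos(t), -math.sin(t)]         # diag(dA/dt)
        b = [math.sin(t), math.cos(t)]
        db = [math.cos(t), -math.sin(t)]
        for j in range(2):
            e = a[j] * x[j] - b[j]               # zeroing error for component j
            # design formula: dx/dt = A^{-1}(-lam*e + db/dt - dA/dt * x)
            x[j] += dt * (-lam * e + db[j] - da[j] * x[j]) / a[j]
        t += dt
    a = [2 + math.sin(t), 2 + math.cos(t)]
    b = [math.sin(t), math.cos(t)]
    return max(abs(a[j] * x[j] - b[j]) for j in range(2))
```

Substituting the update into dE/dt confirms dE/dt = −λE exactly, so the residual shrinks like e^(−λt) up to Euler discretization error; this is the convergence property the ZND-based controllers above rely on.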
Hamming Embedding and Weak Geometric Consistency for Large Scale Image Search This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy.
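The Hamming-embedding refinement can be shown in a few lines: two descriptors match only if they are quantized to the same visual word and their binary signatures lie within a Hamming-distance threshold. The data layout below (integer signatures, flat tuples) is my own simplification for illustration, not the paper's inverted-file implementation.

```python
def hamming(a, b):
    """Hamming distance between two integer-encoded binary signatures."""
    return bin(a ^ b).count("1")

def he_filter(query_sigs, db, ht=3):
    """Keep (image_id, word) pairs whose descriptors share a visual word AND
    whose binary signatures are within Hamming distance ht.
    query_sigs: list of (word, signature); db: list of (image_id, word, signature)."""
    matches = []
    for (w_q, s_q) in query_sigs:
        for (img_id, w_d, s_d) in db:
            if w_q == w_d and hamming(s_q, s_d) <= ht:
                matches.append((img_id, w_q))
    return matches
```

The point of the signature test is that two descriptors falling in the same coarse quantization cell can still be rejected, which is exactly the sub-optimality of plain bag-of-features that the paper identifies.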
Multi-Hop Cooperative Computation Offloading for Industrial IoT–Edge–Cloud Computing Environments The concept of the industrial Internet of things (IIoT) is being widely applied to service provisioning in many domains, including smart healthcare, intelligent transportation, autopilot, and the smart grid. However, because of the IIoT devices’ limited onboard resources, supporting resource-intensive applications, such as 3D sensing, navigation, AI processing, and big-data analytics, remains a challenging task. In this paper, we study the multi-hop computation-offloading problem for the IIoT–edge–cloud computing model and adopt a game-theoretic approach to achieving Quality of service (QoS)-aware computation offloading in a distributed manner. First, we study the computation-offloading and communication-routing problems with the goal of minimizing each task's computation time and energy consumption, formulating the joint problem as a potential game in which the IIoT devices determine their computation-offloading strategies. Second, we apply a free–bound mechanism that can ensure a finite improvement path to a Nash equilibrium. Third, we propose a multi-hop cooperative-messaging mechanism and develop two QoS-aware distributed algorithms that can achieve the Nash equilibrium. Our simulation results show that our algorithms offer a stable performance gain for IIoT in various scenarios and scale well as the device size increases.
Reinforcement learning of motor skills with policy gradients. Autonomous learning is one of the hallmarks of human and animal behavior, and understanding the principles of learning will be crucial in order to achieve true autonomy in advanced machines like humanoid robots. In this paper, we examine learning of complex motor skills with human-like limbs. While supervised learning can offer useful tools for bootstrapping behavior, e.g., by learning from demonstration, it is only reinforcement learning that offers a general approach to the final trial-and-error improvement that is needed by each individual acquiring a skill. Neither neurobiological nor machine learning studies have, so far, offered compelling results on how reinforcement learning can be scaled to the high-dimensional continuous state and action spaces of humans or humanoids. Here, we combine two recent research developments on learning motor control in order to achieve this scaling. First, we interpret the idea of modular motor control by means of motor primitives as a suitable way to generate parameterized control policies for reinforcement learning. Second, we combine motor primitives with the theory of stochastic policy gradient learning, which currently seems to be the only feasible framework for reinforcement learning for humanoids. We evaluate different policy gradient methods with a focus on their applicability to parameterized motor primitives. We compare these algorithms in the context of motor primitive learning, and show that our most modern algorithm, the Episodic Natural Actor-Critic outperforms previous algorithms by at least an order of magnitude. We demonstrate the efficiency of this reinforcement learning method in the application of learning to hit a baseball with an anthropomorphic robot arm.
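The policy-gradient machinery discussed above reduces, in its simplest episodic form, to REINFORCE with a baseline: sample an action from a parameterized stochastic policy, then nudge the parameters along the log-probability gradient scaled by the advantage. The toy two-armed bandit below is my own minimal example of that update, far from the motor-primitive setting of the paper, but it uses the same gradient estimator.

```python
import math
import random

def reinforce_bandit(steps=2000, lr=0.1, seed=0):
    """REINFORCE with a running-mean baseline on a 2-armed bandit
    (arm 0 pays 1, arm 1 pays 0). Returns the final probability of arm 0."""
    rng = random.Random(seed)
    theta = [0.0, 0.0]
    baseline = 0.0
    for _ in range(steps):
        m = max(theta)                                   # stable softmax
        probs = [math.exp(t - m) for t in theta]
        z = sum(probs)
        probs = [p / z for p in probs]
        a = 0 if rng.random() < probs[0] else 1
        r = 1.0 if a == 0 else 0.0
        adv = r - baseline                               # advantage estimate
        for i in range(2):
            # gradient of log softmax(theta)[a] w.r.t. theta[i]
            grad = (1.0 - probs[i]) if i == a else -probs[i]
            theta[i] += lr * adv * grad
        baseline += 0.05 * (r - baseline)                # running-mean baseline
    m = max(theta)
    z = sum(math.exp(t - m) for t in theta)
    return math.exp(theta[0] - m) / z
```

The baseline subtraction changes none of the gradient's expectation but cuts its variance, which is the same motivation behind the more sophisticated natural actor-critic methods the paper compares.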
A Nonconservative LMI Condition for Stability of Switched Systems With Guaranteed Dwell Time. Ensuring stability of switched linear systems with a guaranteed dwell time is an important problem in control systems. Several methods have been proposed in the literature to address this problem, but unfortunately they provide sufficient conditions only. This technical note proposes the use of homogeneous polynomial Lyapunov functions in the non-restrictive case where all the subsystems are Hurwitz, showing that a sufficient condition can be provided in terms of an LMI feasibility test by exploiting a key representation of polynomials. Several properties are proved for this condition, in particular that it is also necessary for a sufficiently large degree of these functions. As a result, the proposed condition provides a sequence of upper bounds of the minimum dwell time that approximate it arbitrarily well. Some examples illustrate the proposed approach.
Survey of Important Issues in UAV Communication Networks Unmanned aerial vehicles (UAVs) have enormous potential in the public and civil domains. These are particularly useful in applications, where human lives would otherwise be endangered. Multi-UAV systems can collaboratively complete missions more efficiently and economically as compared to single UAV systems. However, there are many issues to be resolved before effective use of UAVs can be made to provide stable and reliable context-specific networks. Much of the work carried out in the areas of mobile ad hoc networks (MANETs), and vehicular ad hoc networks (VANETs) does not address the unique characteristics of the UAV networks. UAV networks may vary from slow dynamic to dynamic and have intermittent links and fluid topology. While it is believed that ad hoc mesh network would be most suitable for UAV networks yet the architecture of multi-UAV networks has been an understudied area. Software defined networking (SDN) could facilitate flexible deployment and management of new services and help reduce cost, increase security and availability in networks. Routing demands of UAV networks go beyond the needs of MANETS and VANETS. Protocols are required that would adapt to high mobility, dynamic topology, intermittent links, power constraints, and changing link quality. UAVs may fail and the network may get partitioned making delay and disruption tolerance an important design consideration. Limited life of the node and dynamicity of the network lead to the requirement of seamless handovers, where researchers are looking at the work done in the areas of MANETs and VANETs, but the jury is still out. As energy supply on UAVs is limited, protocols in various layers should contribute toward greening of the network. This paper surveys the work done toward all of these outstanding issues, relating to this new class of networks, so as to spur further research in these areas.
Safe mutations for deep and recurrent neural networks through output gradients While neuroevolution (evolving neural networks) has been successful across a variety of domains from reinforcement learning, to artificial life, to evolutionary robotics, it is rarely applied to large, deep neural networks. A central reason is that while random mutation generally works in low dimensions, a random perturbation of thousands or millions of weights will likely break existing functionality. This paper proposes a solution: a family of safe mutation (SM) operators that facilitate exploration without dramatically altering network behavior or requiring additional interaction with the environment. The most effective SM variant scales the degree of mutation of each individual weight according to the sensitivity of the network's outputs to that weight, which requires computing the gradient of outputs with respect to the weights (instead of the gradient of error, as in conventional deep learning). This safe mutation through gradients (SM-G) operator dramatically increases the ability of a simple genetic algorithm-based neuroevolution method to find solutions in high-dimensional domains that require deep and/or recurrent neural networks, including domains that require processing raw pixels. By improving our ability to evolve deep neural networks, this new safer approach to mutation expands the scope of domains amenable to neuroevolution.
Myoelectric or Force Control? A Comparative Study on a Soft Arm Exosuit The intention-detection strategy used to drive an exosuit is fundamental to evaluate the effectiveness and acceptability of the device. Yet, current literature on wearable soft robotics lacks evidence on the comparative performance of different control approaches for online intention-detection. In the present work, we compare two different and complementary controllers on a wearable robotic suit, previously formulated and tested by our group: a model-based myoelectric control (myoprocessor), which estimates the joint torque from the activation of target muscles, and a force control that estimates human torques using an inverse dynamics model (dynamic arm). We test them on a cohort of healthy participants performing tasks replicating functional activities of daily living involving a wide range of dynamic movements. Our results suggest that both controllers are robust and effective in detecting human–motor interaction, and show comparable performance for augmenting muscular activity. In particular, the biceps brachii activity was reduced by up to 74% under the assistance of the dynamic arm and up to 47% under the myoprocessor, compared to a no-suit condition. However, the myoprocessor outperformed the dynamic arm in promptness and assistance during movements that involve high dynamics. The exosuit work normalized with respect to the overall work was 68.84 ± 3.81% when it was run by the myoprocessor, compared to 45.29 ± 7.71% during the dynamic arm condition. The reliability and accuracy of motor intention detection strategies in wearable devices is paramount for both the efficacy and acceptability of this technology. In this article, we offer a detailed analysis of the two most widely used control approaches, trying to highlight their intrinsic structural differences and to discuss their different and complementary performance.
Scores (score_0–score_13): 1.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0, 0, 0, 0, 0, 0, 0
Intelligent Ad-Hoc-On Demand Multipath Distance Vector for Wormhole Attack in Clustered WSN In Wireless Sensor Networks, security is the most significant issue when essential messages are sent over wireless connections, since attackers can access the network and execute several attacks to intercept or modify real data/information. Because sensor nodes do not have routers, all nodes participating in the network must share the same routing protocol to assist each other with packet transmission. In complex topologies, the unguided nature of the medium leaves the network open to many forms of security attack, presenting a range of security difficulties. The wormhole is a well-known example of such attacks: because it is difficult to detect and stop, it poses the greatest danger. This paper presents a new routing technique that works towards ensuring a secure path for data transmission. The research considers the wormhole type of attack, and the technique targets the detection and prevention of this attack. The proposed methodology is validated on WSN performance parameters such as energy efficiency, end-to-end delay, throughput, and packet delivery ratio. The generated outcomes are compared with recent techniques in the same domain, and the presented work proves to be the best among the described techniques for the considered parameters. The methodology is simulated using NS2 for various performance parameters such as energy efficiency, packet loss, and throughput.
M-LionWhale: multi-objective optimisation model for secure routing in mobile ad-hoc network. Mobile ad-hoc network (MANET) is an emerging technology that comes under the category of wireless network. Even though the network assumes that all its mobile nodes are trusted, it is impossible in the real world as few nodes may be malicious. Therefore, it is essential to put forward a mechanism that can provide security by selecting an optimal route for data forwarding. In this study, a goal pro...
MOSOA: A new multi-objective seagull optimization algorithm Highlights: a novel Multi-objective Seagull Optimization Algorithm is proposed; the algorithm is tested on 24 challenging real benchmark test functions; the results show the superior convergence behaviour of the proposed algorithm; the results on engineering design problems prove its efficiency and applicability.
Proactive fault-tolerant wireless mesh networks for mission-critical control systems Although wireless networks are becoming a fundamental infrastructure for various control applications, they are inherently exposed to network faults such as lossy links and node failures in environments such as mining, outdoor monitoring, and chemical process control. In this paper, we propose a proactive fault-tolerant mechanism to protect the wireless network against temporal faults without any explicit network state information for mission-critical control systems. Specifically, the proposed mechanism optimizes the multiple routing paths, link scheduling, and traffic generation rate such that it meets the control stability demands even if it experiences multiple link faults and node faults. The proactive network relies on a constrained optimization problem, where the objective function is the network robustness, and the main constraints are the set of the traffic demand, link, and routing layer requirements. To analyze the robustness, we propose a novel performance metric called stability margin ratio, based on the network performance and the stability boundary. Our numerical and experimental performance evaluation shows that the traffic generation rate and the delay of wireless networks are found as critical as the network reliability to guarantee the stability of control systems. Furthermore, the proposed proactive network provides more robust performance than practical state-of-the-art solutions while maintaining high energy efficiency.
Energy-efficient and balanced routing in low-power wireless sensor networks for data collection Cost-based routing protocols are the main approach used in practical wireless sensor network (WSN) and Internet of Things (IoT) deployments for data collection applications with energy constraints; however, those routing protocols lead to the concentration of most of the data traffic on some specific nodes which provide the best available routes, thus significantly increasing their energy consumption. Consequently, nodes providing the best routes are potentially the first ones to deplete their batteries and stop working. In this paper, we introduce a novel routing strategy for energy efficient and balanced data collection in WSNs/IoT, which can be applied to any cost-based routing solution to exploit suboptimal network routing alternatives based on the parent set concept. While still taking advantage of the stable routing topologies built in cost-based routing protocols, our approach adds a random component into the process of packet forwarding to achieve a better network lifetime in WSNs. We evaluate the implementation of our approach against other state-of-the-art WSN routing protocols through thorough real-world testbed experiments and simulations, and demonstrate that our approach achieves a significant reduction in the energy consumption of the routing layer in the busiest nodes ranging from 11% to 59%, while maintaining over 99% reliability. Furthermore, we conduct the field deployment of our approach in a heterogeneous WSN for environmental monitoring in a forest area, report the experimental results and illustrate the effectiveness of our approach in detail. Our EER based routing protocol CTP+EER is made available as open source to the community for evaluation and adoption.
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
JPEG Error Analysis and Its Applications to Digital Image Forensics JPEG is one of the most extensively used image formats. Understanding the inherent characteristics of JPEG may play a useful role in digital image forensics. In this paper, we introduce JPEG error analysis to the study of image forensics. The main errors of JPEG include quantization, rounding, and truncation errors. Through theoretically analyzing the effects of these errors on single and double JPEG compression, we have developed three novel schemes for image forensics including identifying whether a bitmap image has previously been JPEG compressed, estimating the quantization steps of a JPEG image, and detecting the quantization table of a JPEG image. Extensive experimental results show that our new methods significantly outperform existing techniques especially for the images of small sizes. We also show that the new method can reliably detect JPEG image blocks which are as small as 8 × 8 pixels and compressed with quality factors as high as 98. This performance is important for analyzing and locating small tampered regions within a composite image.
Highly dynamic Destination-Sequenced Distance-Vector routing (DSDV) for mobile computers An ad-hoc network is the cooperative engagement of a collection of Mobile Hosts without the required intervention of any centralized Access Point. In this paper we present an innovative design for the operation of such ad-hoc networks. The basic idea of the design is to operate each Mobile Host as a specialized router, which periodically advertises its view of the interconnection topology with other Mobile Hosts within the network. This amounts to a new sort of routing protocol. We have investigated modifications to the basic Bellman-Ford routing mechanisms, as specified by RIP [5], to make it suitable for a dynamic and self-starting network mechanism as is required by users wishing to utilize ad hoc networks. Our modifications address some of the previous objections to the use of Bellman-Ford, related to the poor looping properties of such algorithms in the face of broken links and the resulting time dependent nature of the interconnection topology describing the links between the Mobile Hosts. Finally, we describe the ways in which the basic network-layer routing can be modified to provide MAC-layer support for ad-hoc networks.
The FERET Evaluation Methodology for Face-Recognition Algorithms Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems. The Face Recognition Technology (FERET) program has addressed both issues through the FERET database of facial images and the establishment of the FERET tests. To date, 14,126 images from 1,199 individuals are included in the FERET database, which is divided into development and sequestered portions of the database. In September 1996, the FERET program administered the third in a series of FERET face-recognition tests. The primary objectives of the third test were to 1) assess the state of the art, 2) identify future areas of research, and 3) measure algorithm performance.
Neural fitted q iteration – first experiences with a data efficient neural reinforcement learning method This paper introduces NFQ, an algorithm for efficient and effective training of a Q-value function represented by a multi-layer perceptron. Based on the principle of storing and reusing transition experiences, a model-free, neural network based Reinforcement Learning algorithm is proposed. The method is evaluated on three benchmark problems. It is shown empirically, that reasonably few interactions with the plant are needed to generate control policies of high quality.
Labels and event processes in the Asbestos operating system Asbestos, a new operating system, provides novel labeling and isolation mechanisms that help contain the effects of exploitable software flaws. Applications can express a wide range of policies with Asbestos's kernel-enforced labels, including controls on interprocess communication and system-wide information flow. A new event process abstraction defines lightweight, isolated contexts within a single process, allowing one process to act on behalf of multiple users while preventing it from leaking any single user's data to others. A Web server demonstration application uses these primitives to isolate private user data. Since the untrusted workers that respond to client requests are constrained by labels, exploited workers cannot directly expose user data except as allowed by application policy. The server application requires 1.4 memory pages per user for up to 145,000 users and achieves connection rates similar to Apache, demonstrating that additional security can come at an acceptable cost.
Switching Stabilization for a Class of Slowly Switched Systems In this technical note, the problem of switching stabilization for slowly switched linear systems is investigated. In particular, the considered systems can be composed of all unstable subsystems. Based on the invariant subspace theory, the switching signal with mode-dependent average dwell time (MDADT) property is designed to exponentially stabilize the underlying system. Furthermore, sufficient condition of stabilization for switched systems with all stable subsystems under MDADT switching is also given. The correctness and effectiveness of the proposed approaches are illustrated by a numerical example.
Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network. In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (including data from 63 patients with 34,281 events) and testing (including data from 19 patients with 8571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F-score of 0.99 were attained with the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
Convert Harm Into Benefit: A Coordination-Learning Based Dynamic Spectrum Anti-Jamming Approach This paper mainly investigates the multi-user anti-jamming spectrum access problem. Using the idea of “converting harm into benefit,” the malicious jamming signals projected by the enemy are utilized by the users as the coordination signals to guide spectrum coordination. An “internal coordination-external confrontation” multi-user anti-jamming access game model is constructed, and the existence of Nash equilibrium (NE) as well as correlated equilibrium (CE) is demonstrated. A coordination-learning based anti-jamming spectrum access algorithm (CLASA) is designed to achieve the CE of the game. Simulation results show the convergence and effectiveness of the proposed CLASA algorithm and indicate that our approach can help users confront the malicious jammer and coordinate internal spectrum access simultaneously without information exchange. Last but not least, the fairness of the proposed approach under different jamming attack patterns is analyzed, which illustrates that this approach provides fair anti-jamming spectrum access opportunities under complicated jamming patterns.
1.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
0
Sliding mode control for uncertain discrete-time systems with Markovian jumping parameters and mixed delays This paper is concerned with the robust sliding mode control (SMC) problem for a class of uncertain discrete-time Markovian jump systems with mixed delays. The mixed delays consist of both the discrete time-varying delays and the infinite distributed delays. The purpose of the addressed problem is to design a sliding mode controller such that, in the simultaneous presence of parameter uncertainties, Markovian jumping parameters and mixed time-delays, the state trajectories are driven onto the pre-defined sliding surface and the resulting sliding mode dynamics is stochastically stable in the mean-square sense. A discrete-time sliding surface is firstly constructed and an SMC law is synthesized to ensure the reaching condition. Moreover, by constructing a new Lyapunov–Krasovskii functional and employing the delay-fractioning approach, a sufficient condition is established to guarantee the stochastic stability of the sliding mode dynamics. Such a condition is characterized in terms of a set of matrix inequalities that can be easily solved by using the semi-definite programming method. A simulation example is given to illustrate the effectiveness and feasibility of the proposed design scheme.
Robust Fault Detection With Missing Measurements This paper investigates the problem of robust fault detection for uncertain systems with missing measurements. The parameter uncertainty is assumed to be of polytopic type, and the measurement missing phenomenon, which appears typically in a network environment, is modelled by a stochastic variable satisfying the Bernoulli random binary distribution. The focus is on the design of a robust fault detection filter, or a residual generation system, which is stochastically stable and satisfies a prescribed disturbance attenuation level. This problem is solved in the parameter-dependent framework, which is much less conservative than the quadratic approach. Both full-order and reduced-order designs are considered, and formulated via linear matrix inequality (LMI) based convex optimization problems, which can be efficiently solved via standard numerical software. A continuous-stirred tank reactor (CSTR) system is utilized to illustrate the design procedures.
Real-time fault diagnosis and fault-tolerant control This "Special Section on Real-Time Fault Diagnosis and Fault-Tolerant Control" of the IEEE Transactions on Industrial Electronics is motivated to provide a forum for academic and industrial communities to report recent theoretic/application results in real-time monitoring, diagnosis, and fault-tolerant design, and exchange the ideas about the emerging research direction in this field. Twenty-three papers were eventually selected through a strict peer-reviewed procedure, which represent the most recent progress on real-time fault diagnosis, fault-tolerant control design, and their applications. Twelve selected papers pay attention on fault diagnosis methods and applications, and the other eleven papers are concentrated on realtime fault-tolerant control and applications. We are going to overview the selected papers following fault diagnosis techniques and fault-tolerant control techniques, sequentially.
Fixed-Structure LPV Discrete-Time Controller Design With Induced l2-Norm and H2 Performance A new method for the design of fixed-structure dynamic output-feedback linear parameter-varying (LPV) controllers for discrete-time LPV systems with bounded scheduling parameter variations is presented. Sufficient conditions for the stability, H2 and induced l2-norm performance of a given LPV system are represented through a set of linear matrix inequalities (LMIs). These LMIs are used in an iterative algorithm with monotonic convergence for LPV controller design. Extension to the case of uncertain scheduling parameter value is considered as well. Controller parameters appear directly as decision variables in the optimisation program, which enables preserving a desired controller structure in addition to the low order. Efficiency of the proposed method is illustrated on a simulation example, with an iterative convex optimisation scheme used for the improvement of the control system performance.
A parameter set division and switching gain-scheduling controllers design method for time-varying plants. This paper presents a new technique to design switching gain-scheduling controllers for plants with measurable time-varying parameters. By dividing the parameter set into a sufficient number of subsets, and by designing a robust controller to each subset, the designed switching gain-scheduling controllers achieve a desired L2-gain performance for each subset, while ensuring stability whenever a controller switching occurs due to the crossing of the time-varying parameters between any two adjacent subsets. Based on integral quadratic constraints theory and Lyapunov stability theory, a switching gain-scheduling controllers design problem amounts to solving optimization problems. Each optimization problem is to be solved by a combination of the bisection search and the numerical nonsmooth optimization method. The main advantage of the proposed technique is that the division of the parameter region is determined automatically, without any prespecified parameter set division which is required in most of previously developed switching gain-scheduling controllers design methods. A numerical example illustrates the validity of the proposed technique.
Completely derandomized self-adaptation in evolution strategies. This paper puts forward two useful methods for self-adaptation of the mutation distribution - the concepts of derandomization and cumulation. Principal shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary, normal mutation distributions is equivalent to applying a general, linear problem encoding. The underlying objective of mutative strategy parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigorously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation - utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions usually a speed up factor of several orders of magnitude is observed. On moderately mis-scaled functions a speed up factor of three to ten can be expected.
Hiding Traces of Resampling in Digital Images Resampling detection has become a standard tool for forensic analyses of digital images. This paper presents new variants of image transformation operations which are undetectable by resampling detectors based on periodic variations in the residual signal of local linear predictors in the spatial domain. The effectiveness of the proposed method is supported with evidence from experiments on a large image database for various parameter settings. We benchmark detectability as well as the resulting image quality against conventional linear and bicubic interpolation and interpolation with a sinc kernel. These early findings on ldquocounter-forensicrdquo techniques put into question the reliability of known forensic tools against smart counterfeiters in general, and might serve as benchmarks and motivation for the development of much improved forensic techniques.
Fog computing and its role in the internet of things Fog Computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) Low latency and location awareness; b) Wide-spread geographical distribution; c) Mobility; d) Very large number of nodes, e) Predominant role of wireless access, f) Strong presence of streaming and real time applications, g) Heterogeneity. In this paper we argue that the above characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely, Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs).
Efficient Signature Generation by Smart Cards We present a new public-key signature scheme and a corresponding authentication scheme that are based on discrete logarithms in a subgroup of units in Zp where p is a sufficiently large prime, e.g., p = 2^512. A key idea is to use for the base of the discrete logarithm an integer a in Zp such that the order of a is a sufficiently large prime q, e.g., q = 2^140. In this way we improve the ElGamal signature scheme in the speed of the procedures for the generation and the verification of signatures and also in the bit length of signatures. We present an efficient algorithm that preprocesses the exponentiation of a random residue modulo p.
Stabilizing a linear system by switching control with dwell time The use of networks in control systems to connect controllers and sensors/actuators has become common practice in many applications. This new technology has also posed a theoretical control problem of how to use the limited data rate of the network effectively. We consider a system where its sensor and actuator are connected by a finite data rate channel. A design method to stabilize a continuous-time, linear plant using a switching controller is proposed. In particular, to prevent the actuator from fast switching, or chattering, which can not only increase the necessary data rate but also damage the system, we employ a dwell-time switching scheme. It is shown that a systematic partition of the state-space enables us to reduce the complexity of the design problem.
Effects of robotic knee exoskeleton on human energy expenditure. A number of studies discuss the design and control of various exoskeleton mechanisms, yet relatively few address the effect on the energy expenditure of the user. In this paper, we discuss the effect of a performance augmenting exoskeleton on the metabolic cost of an able-bodied user/pilot during periodic squatting. We investigated whether an exoskeleton device will significantly reduce the metabolic cost and what is the influence of the chosen device control strategy. By measuring oxygen consumption, minute ventilation, heart rate, blood oxygenation, and muscle EMG during 5-min squatting series, at one squat every 2 s, we show the effects of using a prototype robotic knee exoskeleton under three different noninvasive control approaches: gravity compensation approach, position-based approach, and a novel oscillator-based approach. The latter proposes a novel control that ensures synchronization of the device and the user. Statistically significant decrease in physiological responses can be observed when using the robotic knee exoskeleton under gravity compensation and oscillator-based control. On the other hand, the effects of position-based control were not significant in all parameters although all approaches significantly reduced the energy expenditure during squatting.
Internet of Things for Smart Cities The Internet of Things (IoT) shall be able to incorporate transparently and seamlessly a large number of different and heterogeneous end systems, while providing open access to selected subsets of data for the development of a plethora of digital services. Building a general architecture for the IoT is hence a very complex task, mainly because of the extremely large variety of devices, link layer technologies, and services that may be involved in such a system. In this paper, we focus specifically on urban IoT systems that, while still being quite a broad category, are characterized by their specific application domain. Urban IoTs, in fact, are designed to support the Smart City vision, which aims at exploiting the most advanced communication technologies to support added-value services for the administration of the city and for the citizens. This paper hence provides a comprehensive survey of the enabling technologies, protocols, and architecture for an urban IoT. Furthermore, the paper will present and discuss the technical solutions and best-practice guidelines adopted in the Padova Smart City project, a proof-of-concept deployment of an IoT island in the city of Padova, Italy, performed in collaboration with the city municipality.
Scalable and Privacy-Preserving Data Sharing Based on Blockchain. With the development of network technology and cloud computing, data sharing is becoming increasingly popular, and many scholars have conducted in-depth research to promote its development. As the scale of data sharing expands, its privacy protection has become a hot issue in research. Moreover, in data sharing, the data is usually maintained in multiple parties, which brings new challenges to protect the privacy of these multi-party data. In this paper, we propose a trusted data sharing scheme using blockchain. We use blockchain to prevent the shared data from being tampered, and use the Paillier cryptosystem to realize the confidentiality of the shared data. In the proposed scheme, the shared data can be traded, and the transaction information is protected by using the (p, t)-threshold Paillier cryptosystem. We conduct experiments in cloud storage scenarios and the experimental results demonstrate the efficiency and effectiveness of the proposed scheme.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
0
Highly Anonymous Mobility-Tolerant Location-Based Onion Routing for VANETs. Vehicular ad hoc networks (VANETs) have received considerable attention in recent years. Like any other network, privacy and anonymity are a requirement in VANETs. Classic anonymous routing protocols such as onion routing algorithm are fragile in mobile networks due to frequent link breaks. In this article, we propose a novel onion-based anonymous routing protocol for highly mobile vehicular netwo...
A Survey of Ant Colony Optimization Based Routing Protocols for Mobile Ad Hoc Networks. Developing highly efficient routing protocols for Mobile Ad hoc NETworks (MANETs) is a challenging task. In order to fulfill multiple routing requirements, such as low packet delay, high packet delivery rate, and effective adaptation to network topology changes with low control overhead, and so on, new ways to approximate solutions to the known NP-hard optimization problem of routing in MANETs have to be investigated. Swarm intelligence (SI)-inspired algorithms have attracted a lot of attention, because they can offer possible optimized solutions ensuring high robustness, flexibility, and low cost. Moreover, they can solve large-scale sophisticated problems without a centralized control entity. A successful example in the SI field is the ant colony optimization (ACO) meta-heuristic. It presents a common framework for approximating solutions to NP-hard optimization problems. ACO has been successfully applied to balance the various routing related requirements in dynamic MANETs. This paper presents a comprehensive survey and comparison of various ACO-based routing protocols in MANETs. The main contributions of this survey include: 1) introducing the ACO principles as applied in routing protocols for MANETs; 2) classifying ACO-based routing approaches reviewed in this paper into five main categories; 3) surveying and comparing the selected routing protocols from the perspective of design and simulation parameters; and 4) discussing open issues and future possible design directions of ACO-based routing protocols.
A Microbial Inspired Routing Protocol for VANETs. We present a bio-inspired unicast routing protocol for vehicular ad hoc networks which uses the cellular attractor selection mechanism to select next hops. The proposed unicast routing protocol based on attractor selecting (URAS) is an opportunistic routing protocol, which is able to change itself adaptively to the complex and dynamic environment by routing feedback packets. We further employ a mu...
Improvement of GPSR Protocol in Vehicular Ad Hoc Network. In a vehicular ad hoc network (VANET), vehicles always move at high speed, which may cause the network topology to change frequently. This is challenging for routing protocols of VANET. Greedy Perimeter Stateless Routing (GPSR) is a representative routing protocol of VANET. However, when constructing a routing path, GPSR tends to select a next-hop node in greedy forwarding that can easily move out of the communication range, and builds redundant paths in perimeter forwarding. To solve the above-mentioned problems, we proposed the Maxduration-Minangle GPSR (MM-GPSR) routing protocol in this paper. In greedy forwarding of MM-GPSR, by defining cumulative communication duration to represent the stability of neighbor nodes, the neighbor node with the maximum cumulative communication duration will be selected as the next hop node. In perimeter forwarding of MM-GPSR, used when greedy forwarding fails, the concept of minimum angle is introduced as the criterion of the optimal next hop node. By taking the position of neighbor nodes into account and calculating angles formed between neighbors and the destination node, the neighbor node with minimum angle will be selected as the next hop node. By using NS-2 and VanetMobiSim, simulations demonstrate that compared with GPSR, MM-GPSR has obvious improvements in reducing the packet loss rate, decreasing the end-to-end delay and increasing the throughput, and is more suitable for VANET.
Delay-Aware Grid-Based Geographic Routing in Urban VANETs: A Backbone Approach Due to the random delay, local maximum and data congestion in vehicular networks, the design of a routing is really a challenging task especially in the urban environment. In this paper, a distributed routing protocol DGGR is proposed, which comprehensively takes into account sparse and dense environments to make routing decisions. As the guidance of routing selection, a road weight evaluation (RWE) algorithm is presented to assess road segments, the novelty of which lies that each road segment is assigned a weight based on two built delay models via exploiting the real-time link property when connected or historic traffic information when disconnected. With the RWE algorithm, the determined routing path can greatly alleviate the risk of local maximum and data congestion. Specially, in view of the large size of a modern city, the road map is divided into a series of Grid Zones (GZs). Based on the position of the destination, the packets can be forwarded among different GZs instead of the whole city map to reduce the computation complexity, where the best path with the lowest delay within each GZ is determined. The backbone link consisting of a series of selected backbone nodes at intersections and within road segments, is built for data forwarding along the determined path, which can further avoid the MAC contentions. Extensive simulations reveal that compared with some classic routing protocols, DGGR performs best in terms of average transmission delay and packet delivery ratio by varying the packet generating speed and density.
A Survey of QoS-Aware Routing Protocols for the MANET-WSN Convergence Scenarios in IoT Networks Wireless Sensor Network (WSN) and Mobile Ad hoc Network (MANET) have attracted a special attention because they can serve as communication means in many areas such as healthcare, military, smart traffic and smart cities. Nowadays, as all devices can be connected to a network forming the Internet of Things (IoT), the integration of WSN, MANET and other networks into IoT is indispensable. We investigate the convergence of WSN and MANET in IoT and consider a fundamental problem, that is, how a converged (WSN-MANET) network provides quality of service (QoS) guarantees to rich multimedia applications. This is very important because the network performances of WSN and MANET are quite low, while multimedia applications always require quality of services at certain levels. In this work, we survey the QoS-guaranteed routing protocols for WSN-MANETs, that are proposed in IEEE Xplore Digital Library over the last decade. Then, basing on our findings, we suggest future open research directions.
Efficient and Secure Routing Protocol Based on Artificial Intelligence Algorithms With UAV-Assisted for Vehicular Ad Hoc Networks in Intelligent Transportation Systems Vehicular Ad hoc Networks (VANETs), which are considered a subset of Mobile Ad hoc Networks (MANETs), can be applied in the field of transportation, especially in Intelligent Transportation Systems (ITS). The routing process in these networks is a challenging task due to rapid topology changes, high vehicle mobility and frequent disconnection of links. Therefore, developing an efficient routing pro...
Wireless sensor network survey A wireless sensor network (WSN) has important applications such as remote environmental monitoring and target tracking. This has been enabled by the availability, particularly in recent years, of sensors that are smaller, cheaper, and intelligent. These sensors are equipped with wireless interfaces with which they can communicate with one another to form a network. The design of a WSN depends significantly on the application, and it must consider factors such as the environment, the application's design objectives, cost, hardware, and system constraints. The goal of our survey is to present a comprehensive review of the recent literature since the publication of [I.F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, A survey on sensor networks, IEEE Communications Magazine, 2002]. Following a top-down approach, we give an overview of several new applications and then review the literature on various aspects of WSNs. We classify the problems into three different categories: (1) internal platform and underlying operating system, (2) communication protocol stack, and (3) network services, provisioning, and deployment. We review the major development in these three categories and outline new challenges.
Energy-Aware Task Offloading and Resource Allocation for Time-Sensitive Services in Mobile Edge Computing Systems Mobile Edge Computing (MEC) is a promising architecture to reduce the energy consumption of mobile devices and provide satisfactory quality-of-service to time-sensitive services. How to jointly optimize task offloading and resource allocation to minimize the energy consumption subject to the latency requirement remains an open problem, which motivates this paper. When the latency constraint is tak...
Symbolic model checking for real-time systems We describe finite-state programs over real-numbered time in a guarded-command language with real-valued clocks or, equivalently, as finite automata with real-valued clocks. Model checking answers the question which states of a real-time program satisfy a branching-time specification (given in an extension of CTL with clock variables). We develop an algorithm that computes this set of states symbolically as a fixpoint of a functional on state predicates, without constructing the state space. For this purpose, we introduce a μ-calculus on computation trees over real-numbered time. Unfortunately, many standard program properties, such as response for all nonzeno execution sequences (during which time diverges), cannot be characterized by fixpoints: we show that the expressiveness of the timed μ-calculus is incomparable to the expressiveness of timed CTL. Fortunately, this result does not impair the symbolic verification of "implementable" real-time programs-those whose safety constraints are machine-closed with respect to diverging time and whose fairness constraints are restricted to finite upper bounds on clock values. All timed CTL properties of such programs are shown to be computable as finitely approximable fixpoints in a simple decidable theory.
The industrial indoor channel: large-scale and temporal fading at 900, 2400, and 5200 MHz In this paper, large-scale fading and temporal fading characteristics of the industrial radio channel at 900, 2400, and 5200 MHz are determined. In contrast to measurements performed in houses and in office buildings, few attempts have been made until now to model propagation in industrial environments. In this paper, the industrial environment is categorized into different topographies. Industrial topographies are defined separately for large-scale and temporal fading, and their definition is based upon the specific physical characteristics of the local surroundings affecting both types of fading. Large-scale fading is well expressed by a one-slope path-loss model and excellent agreement with a lognormal distribution is obtained. Temporal fading is found to be Ricean and Ricean K-factors have been determined. Ricean K-factors are found to follow a lognormal distribution.
Stable fuzzy logic control of a general class of chaotic systems This paper proposes a new approach to the stable design of fuzzy logic control systems that deal with a general class of chaotic processes. The stable design is carried out on the basis of a stability analysis theorem, which employs Lyapunov's direct method and the separate stability analysis of each rule in the fuzzy logic controller (FLC). The stability analysis theorem offers sufficient conditions for the stability of a general class of chaotic processes controlled by Takagi-Sugeno-Kang FLCs. The approach suggested in this paper is advantageous because inserting a new rule requires the fulfillment of only one of the conditions of the stability analysis theorem. Two case studies concerning the fuzzy logic control of representative chaotic systems that belong to the general class of chaotic systems are included in order to illustrate our stable design approach. A set of simulation results is given to validate the theoretical results.
Survey of Fog Computing: Fundamental, Network Applications, and Research Challenges. Fog computing is an emerging paradigm that extends computation, communication, and storage facilities toward the edge of a network. Compared to traditional cloud computing, fog computing can support delay-sensitive service requests from end-users (EUs) with reduced energy consumption and low traffic congestion. Basically, fog networks are viewed as offloading to core computation and storage. Fog n...
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.24
0.24
0.24
0.24
0.24
0.24
0.06
0
0
0
0
0
0
0
The identical operands commutative encryption and watermarking based on homomorphism. Aiming at the requirement of comprehensive security protection for multimedia information, this paper proposes a new algorithm to realize the combination of encryption and watermarking based on homomorphism. Under the proposed algorithm scheme, the plaintext watermark embedding operations are mapped to the ciphertext domain by homomorphism to achieve plaintext watermark embedding in the ciphertext domain; at the same time, the embedded plaintext watermarks are also mapped to the ciphertext domain by homomorphism to achieve ciphertext watermark embedding. According to the experimental results, under the proposed algorithm, the order of watermark embedding and data encrypting does not affect the production of the same encrypted-watermarked data; meanwhile, whether the encrypted-watermarked data has been decrypted or not does not affect the extraction of the embedded watermark. Since the operands of encryption and watermarking are the same data, the proposed algorithm has higher security compared with the existing mainstream independent-operands-based commutative encryption and watermarking.
Probabilistic encryption A new probabilistic model of data encryption is introduced. For this model, under suitable complexity assumptions, it is proved that extracting any information about the cleartext from the ciphertext is hard on the average for an adversary with polynomially bounded computational resources. The proof holds for any message space with any probability distribution. The first implementation of this model is presented. The security of this implementation is proved under the intractability assumption of deciding Quadratic Residuosity modulo composite numbers whose factorization is unknown.
Secure and privacy preserving keyword searching for cloud storage services Cloud storage services enable users to remotely access data in a cloud anytime and anywhere, using any device, in a pay-as-you-go manner. Moving data into a cloud offers great convenience to users since they do not have to care about the large capital investment in both the deployment and management of the hardware infrastructures. However, allowing a cloud service provider (CSP), whose purpose is mainly for making a profit, to take the custody of sensitive data, raises underlying security and privacy issues. To keep user data confidential against an untrusted CSP, a natural way is to apply cryptographic approaches, by disclosing the data decryption key only to authorized users. However, when a user wants to retrieve files containing certain keywords using a thin client, the adopted encryption system should not only support keyword searching over encrypted data, but also provide high performance. In this paper, we investigate the characteristics of cloud storage services and propose a secure and privacy preserving keyword searching (SPKS) scheme, which allows the CSP to participate in the decipherment, and to return only files containing certain keywords specified by the users, so as to reduce both the computational and communication overhead in decryption for users, on the condition of preserving user data privacy and user querying privacy. Performance analysis shows that the SPKS scheme is applicable to a cloud environment.
Integrating Encryption and Marking for Remote Sensing Image Based on Orthogonal Decomposition Due to its special characteristics, a remote sensing image has higher requirements not only in security but also in management; it requires not only active encryption during storage and transmission to prevent information leakage, but also marking technology to prevent illegal usage as well as to provide copyright protection or even source tracing. Therefore, this paper proposes to integrate encryption and marking technology through the independence and fusion of orthogonal decomposition for the comprehensive security protection of remote sensing images. Under the proposed scheme, encryption and marking technology can achieve operation independence and content mergence; moreover, there is no special requirement in selecting encryption and marking algorithms. It makes up for the shortage of recent integrations of encryption and watermarking based on spatial scrambling in applicability and security. According to the experimental results, the integration of encryption and marking technology based on orthogonal decomposition satisfies the common constraints of encryption and marking technology and, furthermore, has little impact on remote sensing image data characteristics and later applications.
Separable reversible data hiding in encrypted images via adaptive embedding strategy with block selection. •An adaptive, separable reversible data hiding scheme in encrypted image is proposed.•Analogues stream-cipher and block permutation are used to encrypt original image.•Classification and selection for encrypted blocks are conducted during embedding.•An accurate prediction strategy was employed to achieve perfect image recovery.•Our scheme has better rate-distortion performance than some state-of-the-art schemes.
Separable reversible data hiding in homomorphic encrypted domain using POB number system In this paper, a novel separable reversible data hiding scheme for homomorphic encrypted images (RDHEI) using the POB number system is proposed. The framework of the proposed RDHEI includes three parties: content owner, data hider, and receiver. The content owner divides the original image contents into a series of non-overlapping equal-size 2 x 2 blocks, and encrypts all pixels in each block with the same key. The encryption process is carried out in an additive homomorphism manner. The data hider divides the encrypted images into blocks of the same size as in the encryption phase, and further categorizes all of the obtained blocks into two sets according to the corresponding block entropy. The embedding processes of the two sets are performed by utilizing the permutation ordered binary (POB) number system. For the set with smaller entropies, all pixels other than the first pixel in each block are compressed by the POB number system; for the set with larger entropies, only the u LSBs of all pixels are compressed in order to vacate room for embedding. The receiver can conduct image decryption, data extraction, and image reconstruction in a separable manner. Experimental results verify the superiority of the proposed method.
Performance enhanced image steganography systems using transforms and optimization techniques Image steganography is the art of hiding highly sensitive information onto the cover image. An ideal approach to image steganography must satisfy two factors: high quality of stego image and high embedding capacity. Conventionally, transform based techniques are widely preferred for these applications. The commonly used transforms for steganography applications are Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT) etc. In this work, frequency domain transforms such as Fresnelet Transform (FT) and Contourlet Transform (CT) are used for the data hiding process. The secret data is normally hidden in the coefficients of these transforms. However, data hiding in transform coefficients yield less accurate results since the coefficients used for data hiding are selected randomly. Hence, in this work, optimization techniques such as Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) are used for improving the performance of the steganography system. GA and PSO are used to find the best coefficients in order to hide the Quick Response (QR) coded secret data. This approach yields an average PSNR of 52.56 dB and an embedding capacity of 902,136 bits. These experimental results validate the practical feasibility of the proposed methodology for security applications.
A survey on ear biometrics Recognizing people by their ear has recently received significant attention in the literature. Several reasons account for this trend: first, ear recognition does not suffer from some problems associated with other non-contact biometrics, such as face recognition; second, it is the most promising candidate for combination with the face in the context of multi-pose face recognition; and third, the ear can be used for human recognition in surveillance videos where the face may be occluded completely or in part. Further, the ear appears to degrade little with age. Even though current ear detection and recognition systems have reached a certain level of maturity, their success is limited to controlled indoor conditions. In addition to variation in illumination, other open research problems include hair occlusion, earprint forensics, ear symmetry, ear classification, and ear individuality. This article provides a detailed survey of research conducted in ear detection and recognition. It provides an up-to-date review of the existing literature revealing the current state-of-art for not only those who are working in this area but also for those who might exploit this new approach. Furthermore, it offers insights into some unsolved ear recognition problems as well as ear databases available for researchers.
A Private and Efficient Mechanism for Data Uploading in Smart Cyber-Physical Systems. To provide fine-grained access to different dimensions of the physical world, the data uploading in smart cyber-physical systems suffers novel challenges on both energy conservation and privacy preservation. It is always critical for participants to consume as little energy as possible for data uploading. However, simply pursuing energy efficiency may lead to extreme disclosure of private informat...
Grey Wolf Optimizer. This work proposes a new meta-heuristic called Grey Wolf Optimizer (GWO) inspired by grey wolves (Canis lupus). The GWO algorithm mimics the leadership hierarchy and hunting mechanism of grey wolves in nature. Four types of grey wolves such as alpha, beta, delta, and omega are employed for simulating the leadership hierarchy. In addition, the three main steps of hunting, searching for prey, encircling prey, and attacking prey, are implemented. The algorithm is then benchmarked on 29 well-known test functions, and the results are verified by a comparative study with Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), Differential Evolution (DE), Evolutionary Programming (EP), and Evolution Strategy (ES). The results show that the GWO algorithm is able to provide very competitive results compared to these well-known meta-heuristics. The paper also considers solving three classical engineering design problems (tension/compression spring, welded beam, and pressure vessel designs) and presents a real application of the proposed method in the field of optical engineering. The results of the classical engineering design problems and real application prove that the proposed algorithm is applicable to challenging problems with unknown search spaces.
Toward Social Learning Environments We are teaching a new generation of students, cradled in technologies, communication and abundance of information. The implications are that we need to focus the design of learning technologies to support social learning in context. Instead of designing technologies that “teach” the learner, the new social learning technologies will perform three main roles: 1) support the learner in finding the right content (right for the context, for the particular learner, for the specific purpose of the learner, right pedagogically); 2) support learners to connect with the right people (again right for the context, learner, purpose, educational goal etc.), and 3) motivate / incentivize people to learn. In the pursuit of such environments, new areas of sciences become relevant as a source of methods and techniques: social psychology, economic / game theory, multi-agent systems. The paper illustrates how social learning technologies can be designed using some existing and emerging technologies: ontologies vs. social tagging, exploratory search, collaborative vs. self-managed social recommendations, trust and reputation mechanisms, mechanism design and social visualization.
Solving the data sparsity problem in destination prediction Destination prediction is an essential task for many emerging location-based applications such as recommending sightseeing places and targeted advertising according to destinations. A common approach to destination prediction is to derive the probability of a location being the destination based on historical trajectories. However, almost all the existing techniques use various kinds of extra information such as road network, proprietary travel planner, statistics requested from government, and personal driving habits. Such extra information, in most circumstances, is unavailable or very costly to obtain. Thereby we approach the task of destination prediction by using only historical trajectory dataset. However, this approach encounters the "data sparsity problem", i.e., the available historical trajectories are far from enough to cover all possible query trajectories, which considerably limits the number of query trajectories that can obtain predicted destinations. We propose a novel method named Sub-Trajectory Synthesis (SubSyn) to address the data sparsity problem. SubSyn first decomposes historical trajectories into sub-trajectories comprising two adjacent locations, and then connects the sub-trajectories into "synthesised" trajectories. This process effectively expands the historical trajectory dataset to contain much more trajectories. Experiments based on real datasets show that SubSyn can predict destinations for up to ten times more query trajectories than a baseline prediction algorithm. Furthermore, the running time of the SubSyn-training algorithm is almost negligible for a large set of 1.9 million trajectories, and the SubSyn-prediction algorithm runs over two orders of magnitude faster than the baseline prediction algorithm constantly.
Adaptive fuzzy tracking control for switched uncertain strict-feedback nonlinear systems. •Adaptive tracking control for switched strict-feedback nonlinear systems is proposed.•The generalized fuzzy hyperbolic model is used to approximate nonlinear functions.•The designed controller has fewer design parameters comparing with existing methods.
Energy harvesting algorithm considering max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with poor energy always have a bad effect on the data rate or max flow. These nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we assume an energy harvesting WSN environment to investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy harvesting WSN environments, sensor nodes can obtain energy replenishment by using MCs or by collecting energy from nature by themselves. In our research, we use MCs to improve the energy of the sensor nodes by performing multiple rounds of unified scheduling, and finally achieve the purpose of increasing the max flow at sinks. Firstly, we model this problem as a Linear Program (LP) to search for the max flow in a round of charging scheduling and prove that the problem is NP-hard. In order to solve the problem, we propose a heuristic approach: deploying MCs in units of paths, with the lowest-energy node given priority. To reduce the energy consumption of MCs and increase the charging efficiency, we also take the optimization of MCs’ moving distance into consideration. Finally, we extend the method to multiple rounds of scheduling, called BottleNeck. Simulation results show that BottleNeck performs well at increasing max flow.
1.11
0.1
0.1
0.1
0.1
0.1
0.01
0
0
0
0
0
0
0
A Robust Crowdsourcing-Based Indoor Localization System. WiFi fingerprinting-based indoor localization has been widely used due to its simplicity and the ease with which it can be implemented on smartphones. The major drawback of WiFi fingerprinting is that the radio map construction is very labor-intensive and time-consuming. Another drawback of WiFi fingerprinting is the Received Signal Strength (RSS) variance problem, caused by environmental changes and device diversity. RSS variance severely degrades the localization accuracy. In this paper, we propose a robust crowdsourcing-based indoor localization system (RCILS). RCILS can automatically construct the radio map using crowdsourcing data collected by smartphones. RCILS abstracts the indoor map as a semantics graph in which the edges are the possible user paths and the vertexes are the locations where users may take special activities. RCILS extracts the activity sequence contained in the trajectories by activity detection and pedestrian dead-reckoning. Based on the semantics graph and activity sequence, crowdsourcing trajectories can be located and a radio map is constructed based on the localization results. For the RSS variance problem, RCILS uses the trajectory fingerprint model for indoor localization. During online localization, RCILS obtains an RSS sequence and realizes localization by matching the RSS sequence with the radio map. To evaluate RCILS, we apply RCILS in an office building. Experiment results demonstrate the efficiency and robustness of RCILS.
Device self-calibration in location systems using signal strength histograms Received signal strength (RSS) fingerprinting is an attractive solution for indoor positioning using Wireless Local Area Networks (WLANs) due to the wide availability of WLAN access points and the ease of monitoring RSS measurements on WLAN-enabled mobile devices. Fingerprinting systems rely on a radiomap collected using a reference device inside the localisation area; however, a major limitation is that the quality of the location information can be degraded if the user carries a different device. This is because diverse devices tend to report the RSS values very differently for a variety of reasons. To ensure compatibility with the existing radiomap, we propose a self-calibration method that attains a good mapping between the reference and user devices using RSS histograms. We do so by relating the RSS histogram of the reference device, which is deduced from the radiomap, and the RSS histogram of the user device, which is updated concurrently with positioning. Unlike other approaches, our calibration method does not require any user intervention, e.g. collecting calibration data using the new device prior to positioning. Experimental results with five smartphones in a real indoor environment demonstrate the effectiveness of the proposed method and indicate that it is more robust to device diversity compared with other calibration methods in the literature.
Advanced real-time indoor tracking based on the Viterbi algorithm and semantic data A real-time indoor tracking system based on the Viterbi algorithm is developed. The Viterbi principle is used in combination with semantic data to improve the accuracy, that is, the environment of the object that is being tracked and a motion model. The starting point is a fingerprinting technique for which an advanced network planner is used to automatically construct the radio map, avoiding a time-consuming measurement campaign. The developed algorithm was verified with simulations and with experiments in a building-wide testbed for sensor experiments, where a median accuracy below 2 m was obtained. Compared to a reference algorithm without Viterbi or semantic data, the results indicated a significant improvement: the mean accuracy and standard deviation improved by, respectively, 26.1% and 65.3%. Thereafter a sensitivity analysis was conducted to estimate the influence of node density, grid size, memory usage, and semantic data on the performance.
Magnetic field feature extraction and selection for indoor location estimation. User indoor positioning has been under constant improvement especially with the availability of new sensors integrated into the modern mobile devices, which allows us to exploit not only infrastructures made for everyday use, such as WiFi, but also natural infrastructure, as is the case of natural magnetic field. In this paper we present an extension and improvement of our current indoor localization model based on the feature extraction of 46 magnetic field signal features. The extension adds a feature selection phase to our methodology, which is performed through Genetic Algorithm (GA) with the aim of optimizing the fitness of our current model. In addition, we present an evaluation of the final model in two different scenarios: home and office building. The results indicate that performing a feature selection process allows us to reduce the number of signal features of the model from 46 to 5 regardless the scenario and room location distribution. Further, we verified that reducing the number of features increases the probability of our estimator correctly detecting the user's location (sensitivity) and its capacity to detect false positives (specificity) in both scenarios.
Indoor smartphone localization via fingerprint crowdsourcing: challenges and approaches. Nowadays, smartphones have become indispensable to everyone, with more and more built-in location-based applications to enrich our daily life. In the last decade, fingerprinting based on RSS has become a research focus in indoor localization, due to its minimum hardware requirement and satisfactory positioning accuracy. However, its time-consuming and labor-intensive site survey is a big hurdle for...
Kernel-Based Positioning in Wireless Local Area Networks The recent proliferation of Location-Based Services (LBSs) has necessitated the development of effective indoor positioning solutions. In such a context, Wireless Local Area Network (WLAN) positioning is a particularly viable solution in terms of hardware and installation costs due to the ubiquity of WLAN infrastructures. This paper examines three aspects of the problem of indoor WLAN positioning using received signal strength (RSS). First, we show that, due to the variability of RSS features over space, a spatially localized positioning method leads to improved positioning results. Second, we explore the problem of access point (AP) selection for positioning and demonstrate the need for further research in this area. Third, we present a kernelized distance calculation algorithm for comparing RSS observations to RSS training records. Experimental results indicate that the proposed system leads to a 17 percent (0.56 m) improvement over the widely used K-nearest neighbor and histogram-based methods.
Attribute-based encryption for fine-grained access control of encrypted data As more sensitive data is shared and stored by third-party sites on the Internet, there will be a need to encrypt data stored at these sites. One drawback of encrypting data is that it can be selectively shared only at a coarse-grained level (i.e., giving another party your private key). We develop a new cryptosystem for fine-grained sharing of encrypted data that we call Key-Policy Attribute-Based Encryption (KP-ABE). In our cryptosystem, ciphertexts are labeled with sets of attributes and private keys are associated with access structures that control which ciphertexts a user is able to decrypt. We demonstrate the applicability of our construction to sharing of audit-log information and broadcast encryption. Our construction supports delegation of private keys, which subsumes Hierarchical Identity-Based Encryption (HIBE).
Long short-term memory. Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
Toward Integrating Vehicular Clouds with IoT for Smart City Services Vehicular ad hoc networks, cloud computing, and the Internet of Things are among the emerging technology enablers offering a wide array of new application possibilities in smart urban spaces. These applications consist of smart building automation systems, healthcare monitoring systems, and intelligent and connected transportation, among others. The integration of IoT-based vehicular technologies will enrich services that are eventually going to ignite the proliferation of exciting and even more advanced technological marvels. However, depending on different requirements and design models for networking and architecture, such integration needs the development of newer communication architectures and frameworks. This work proposes a novel framework for architectural and communication design to effectively integrate vehicular networking clouds with IoT, referred to as VCoT, to materialize new applications that provision various IoT services through vehicular clouds. In this article, we particularly put emphasis on smart city applications deployed, operated, and controlled through LoRaWAN-based vehicular networks. LoRaWAN, being a new technology, provides efficient and long-range communication possibilities. The article also discusses possible research issues in such an integration, including data aggregation, security, privacy, data quality, and network coverage. These issues must be addressed in order to realize the VCoT paradigm deployment, and to provide insights for investors and key stakeholders in VCoT service provisioning. The article presents deep insights for different real-world application scenarios (i.e., smart homes, intelligent traffic lights, and smart city) using VCoT for general control and automation along with their associated challenges. It also presents initial insights, through preliminary results, regarding data and resource management in IoT-based resource-constrained environments through vehicular clouds.
Multivariate Short-Term Traffic Flow Forecasting Using Time-Series Analysis Existing time-series models that are used for short-term traffic condition forecasting are mostly univariate in nature. Generally, the extension of existing univariate time-series models to a multivariate regime involves huge computational complexities. A different class of time-series models called structural time-series model (STM) (in its multivariate form) has been introduced in this paper to develop a parsimonious and computationally simple multivariate short-term traffic condition forecasting algorithm. The different components of a time-series data set such as trend, seasonal, cyclical, and calendar variations can separately be modeled in STM methodology. A case study at the Dublin, Ireland, city center with serious traffic congestion is performed to illustrate the forecasting strategy. The results indicate that the proposed forecasting algorithm is an effective approach in predicting real-time traffic flow at multiple junctions within an urban transport network.
Fast identification of the missing tags in a large RFID system. RFID (radio-frequency identification) is an emerging technology with extensive applications such as transportation and logistics, object tracking, and inventory management. How to quickly identify the missing RFID tags and thus their associated objects is a practically important problem in many large-scale RFID systems. This paper presents three novel methods to quickly identify the missing tags in a large-scale RFID system of thousands of tags. Our protocols can reduce the time for identifying all the missing tags by up to 75% in comparison to the state of the art.
Passive Image-Splicing Detection by a 2-D Noncausal Markov Model In this paper, a 2-D noncausal Markov model is proposed for passive digital image-splicing detection. Different from the traditional Markov model, the proposed approach models an image as a 2-D noncausal signal and captures the underlying dependencies between the current node and its neighbors. The model parameters are treated as the discriminative features to differentiate the spliced images from the natural ones. We apply the model in the block discrete cosine transformation domain and the discrete Meyer wavelet transform domain, and the cross-domain features are treated as the final discriminative features for classification. The support vector machine which is the most popular classifier used in the image-splicing detection is exploited in our paper for classification. To evaluate the performance of the proposed method, all the experiments are conducted on public image-splicing detection evaluation data sets, and the experimental results have shown that the proposed approach outperforms some state-of-the-art methods.
Collective feature selection to identify crucial epistatic variants. In this study, we show through simulation studies that selecting variables with a collective feature selection approach identifies true-positive epistatic variables more frequently than applying any single feature selection method. We demonstrate the effectiveness of collective feature selection alongside a comparison of many methods in our simulation analysis. We also applied our method to identify non-linear networks associated with obesity.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Energy-Efficient Directional Charging Strategy for Wireless Rechargeable Sensor Networks Mobile chargers (MCs) equipped with radio-frequency (RF)-based wireless power transfer (WPT) modules have been suggested as a possible solution to battery constraints in wireless rechargeable sensor networks (WRSNs). In RF-based WPT, charging efficiency decreases significantly as the charging distance increases. Therefore, single charging consumes less energy than multicharging because it can generally charge a sensor node at a closer range. However, when the density of nodes is high, multicharging may achieve higher efficiency. We propose an energy-efficient adaptive directional charging (EEADC) algorithm that considers the density of sensor nodes to adaptively choose single charging or multicharging. The EEADC exploits directional antennas to concentrate the energy and improve energy efficiency and identifies the optimum charging points and beam directions to minimize energy consumption. In the EEADC, clustering is performed by considering the density of the sensor nodes. After clustering, the clusters are classified into single-charging/multicharging clusters according to the number of sensor nodes in each cluster. Next, the charging strategy is determined according to the type of cluster. In the case of a multicharging cluster, the problem is nonconvex. Therefore, a discretized charging strategy decision (DCSD) algorithm is proposed. The performance evaluation indicates that EEADC outperforms two existing methods in terms of power consumption and charging delay by 10% and 9%, respectively.
An RFID-Based Closed-Loop Wireless Power Transmission System for Biomedical Applications This brief presents a standalone closed-loop wireless power transmission system that is built around a commercial off-the-shelf (COTS) radio-frequency identification (RFID) reader (TRF7960) operating at 13.56 MHz. It can be used for inductively powering implantable biomedical devices in a closed loop. Any changes in the distance and misalignment between transmitter and receiver coils in near-field wireless power transmission can cause a significant change in the received power, which can cause either a malfunction or excessive heat dissipation. RFID circuits are often used in an open loop. However, their back telemetry capability can be utilized to stabilize the received voltage on the implant. Our measurements showed that the delivered power to the transponder was maintained at 11.2 mW over a range of 0.5 to 2 cm, while the transmitter power consumption changed from 78 mW to 1.1 W. The closed-loop system can also oppose voltage variations as a result of sudden changes in the load current.
ROSE: Robustly Safe Charging for Wireless Power Transfer One critical issue for wireless power transfer is to avoid human health impairments caused by electromagnetic radiation (EMR) exposure. The existing studies mainly focus on scheduling wireless chargers so that (expected) EMR at any point in the area does not exceed a threshold $R_t$ …
Omnidirectional chargability with directional antennas Wireless Power Transfer (WPT) has received more and more attention because of its convenience and reliability. In this paper, we first propose the notion of omnidirectional charging, by which an area is omnidirectionally charged if a device with directional antennas at any position in the area with any orientation can be charged by directional chargers with power no smaller than a given threshold. We present our empirical charging model based on field experimental results using off-the-shelf WPT products. Next, we consider the problem of detecting whether the target area achieves omnidirectional charging given a deterministic deployment of chargers. We develop piecewise constant approximation and area discretization techniques to partition the target area into subareas and approximate powers from chargers as constants. Then we propose the Minimum Coverage Set extraction technique, which reduces the continuous search space to a discrete one and thereby allows a fast detection algorithm. Moreover, we consider the problem of determining the probability that the target area achieves omnidirectional charging given a random deployment of chargers. We first replace the target area by grid points on triangular lattices to reduce the search space from infinite to finite, then approximate chargers' power with reasonable relaxation, and derive an upper bound of the omnidirectional charging probability. Finally, we conduct both simulation and field experiments, and the results show that our algorithm outperforms comparison algorithms by at least 120%, and the consistency degree of our theoretical results and field experimental results is larger than 93.6%.
Multi-Antenna Wireless Powered Communication With Energy Beamforming The newly emerging wireless powered communication networks (WPCNs) have recently drawn significant attention, where radio signals are used to power wireless terminals for information transmission. In this paper, we study a WPCN where one multi-antenna access point (AP) coordinates energy transfer and information transfer to/from a set of single-antenna users. A harvest-then-transmit protocol is assumed where the AP first broadcasts wireless power to all users via energy beamforming in the downlink (DL), and then, the users send their independent information to the AP simultaneously in the uplink (UL) using their harvested energy. To optimize the users' throughput and yet guarantee their rate fairness, we maximize the minimum throughput among all users by a joint design of the DL-UL time allocation, the DL energy beamforming, and the UL transmit power allocation, as well as receive beamforming. We solve this nonconvex problem optimally by two steps. First, we fix the DL-UL time allocation and obtain the optimal DL energy beamforming, UL power allocation, and receive beamforming to maximize the minimum signal-to-interference-plus-noise ratio of all users. This problem is shown to be still nonconvex; however, we convert it equivalently to a spectral radius minimization problem, which can be solved efficiently by applying the alternating optimization based on the nonnegative matrix theory. Then, the optimal time allocation is found by a one-dimensional search to maximize the minimum rate of all users. Furthermore, two suboptimal designs of lower complexity are also proposed, and their throughput performance is compared against that of the optimal solution.
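The time-allocation tradeoff at the heart of the harvest-then-transmit protocol can be reduced to a single-user toy and solved by the same kind of one-dimensional search the paper uses for its outer step. All names and the normalized power/efficiency parameters below are illustrative assumptions; the paper's actual problem is multi-user with beamforming.

```python
import math

def best_time_split(p_dl=1.0, eta=0.5, grid=10_000):
    """Grid search over the downlink (harvest) fraction tau of a unit slot.
    Harvesting longer stores more energy eta*p_dl*tau, but leaves only
    1 - tau of the slot to spend it, giving uplink throughput
    (1 - tau) * log2(1 + eta*p_dl*tau / (1 - tau))."""
    best_tau, best_rate = 0.0, 0.0
    for i in range(1, grid):
        tau = i / grid
        rate = (1 - tau) * math.log2(1 + eta * p_dl * tau / (1 - tau))
        if rate > best_rate:
            best_tau, best_rate = tau, rate
    return best_tau, best_rate

tau_opt, rate_opt = best_time_split()
```

The objective is concave in tau, so a one-dimensional search of this kind reliably finds the optimum; in the paper the inner beamforming/power problem is solved for each candidate split before this search is applied.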
Completely derandomized self-adaptation in evolution strategies. This paper puts forward two useful methods for self-adaptation of the mutation distribution - the concepts of derandomization and cumulation. Principal shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary, normal mutation distributions is equivalent to applying a general, linear problem encoding. The underlying objective of mutative strategy parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigorously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation - utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions, a speed-up factor of several orders of magnitude is usually observed. On moderately mis-scaled functions a speed-up factor of three to ten can be expected.
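The stated objective — favor previously selected mutation steps in the future — can be caricatured in a few lines. This is a heavily simplified rank-mu-style update, not the full CMA-ES: there is no cumulation (evolution path), no step-size control, and the fixed learning rate and population sizes are illustrative choices.

```python
import numpy as np

def toy_cma(f, m, sigma=0.5, lam=12, mu=4, iters=60, seed=0):
    """Sample lam offspring from N(m, sigma^2 C), keep the mu best, move the
    mean toward them, and blend their outer products into C so that
    previously successful step directions become more likely in future."""
    rng = np.random.default_rng(seed)
    n = len(m)
    C = np.eye(n)                                     # mutation covariance
    for _ in range(iters):
        steps = rng.multivariate_normal(np.zeros(n), C, size=lam)
        fitness = [f(m + sigma * s) for s in steps]
        best = steps[np.argsort(fitness)[:mu]]        # selected mutation steps
        m = m + sigma * best.mean(axis=0)             # recombine the winners
        C = 0.8 * C + 0.2 * (best.T @ best) / mu      # favor selected steps
    return m

# Minimize the 3-D sphere function starting away from the optimum.
m_final = toy_cma(lambda x: float(np.sum(x ** 2)), np.full(3, 2.0))
```

Because C is updated from the selected steps themselves, the mutation distribution elongates along productive directions and contracts elsewhere, which is the mechanism behind the paper's large speed-ups on badly scaled, non-separable functions.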
Hiding Traces of Resampling in Digital Images Resampling detection has become a standard tool for forensic analyses of digital images. This paper presents new variants of image transformation operations which are undetectable by resampling detectors based on periodic variations in the residual signal of local linear predictors in the spatial domain. The effectiveness of the proposed method is supported with evidence from experiments on a large image database for various parameter settings. We benchmark detectability as well as the resulting image quality against conventional linear and bicubic interpolation and interpolation with a sinc kernel. These early findings on “counter-forensic” techniques put into question the reliability of known forensic tools against smart counterfeiters in general, and might serve as benchmarks and motivation for the development of much improved forensic techniques.
Fog computing and its role in the internet of things Fog Computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) Low latency and location awareness; b) Wide-spread geographical distribution; c) Mobility; d) Very large number of nodes, e) Predominant role of wireless access, f) Strong presence of streaming and real time applications, g) Heterogeneity. In this paper we argue that the above characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely, Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs).
Efficient Signature Generation by Smart Cards We present a new public-key signature scheme and a corresponding authentication scheme that are based on discrete logarithms in a subgroup of units in Zp, where p is a sufficiently large prime, e.g., p ≈ 2^512. A key idea is to use for the base of the discrete logarithm an integer a in Zp such that the order of a is a sufficiently large prime q, e.g., q ≈ 2^140. In this way we improve the ElGamal signature scheme in the speed of the procedures for the generation and the verification of signatures and also in the bit length of signatures. We present an efficient algorithm that preprocesses the exponentiation of a random residue modulo p.
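The scheme described above (Schnorr signatures) can be sketched with deliberately tiny, insecure toy parameters: p = 2039 and q = 1019 with q | p − 1 are chosen only so the arithmetic is visible, and real deployments need primes of the sizes quoted in the abstract.

```python
import hashlib
import secrets

p, q = 2039, 1019                      # toy primes with q | p - 1 (insecure!)
# A base of prime order q: any h with h^((p-1)/q) != 1 mod p works.
g = next(a for a in (pow(h, (p - 1) // q, p) for h in range(2, p)) if a != 1)

def H(r, msg):
    # Hash into Z_q, standing in for the scheme's one-way hash function.
    data = f"{r}|{msg}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1   # private key
    v = pow(g, -x, p)                  # public key v = g^(-x) mod p
    return x, v

def sign(x, msg):
    k = secrets.randbelow(q - 1) + 1   # fresh nonce; g^k can be precomputed,
    r = pow(g, k, p)                   # which is the card-friendly step
    e = H(r, msg)
    s = (k + x * e) % q                # only cheap mod-q arithmetic online
    return e, s

def verify(v, msg, e, s):
    r = (pow(g, s, p) * pow(v, e, p)) % p   # recovers g^k when valid
    return e == H(r, msg)

x, v = keygen()
e, s = sign(x, "transfer 10")
```

The short bit length claimed in the abstract shows up here directly: a signature is the pair (e, s) of numbers modulo q, not modulo the much larger p.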
Stabilizing a linear system by switching control with dwell time The use of networks in control systems to connect controllers and sensors/actuators has become common practice in many applications. This new technology has also posed a theoretical control problem of how to use the limited data rate of the network effectively. We consider a system where its sensor and actuator are connected by a finite data rate channel. A design method to stabilize a continuous-time, linear plant using a switching controller is proposed. In particular, to prevent the actuator from fast switching, or chattering, which can not only increase the necessary data rate but also damage the system, we employ a dwell-time switching scheme. It is shown that a systematic partition of the state-space enables us to reduce the complexity of the design problem
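A scalar caricature of the dwell-time idea can make the chattering tradeoff concrete. All plant and controller numbers below are made up for illustration: the unstable plant dx/dt = a·x + b·u is driven by a two-level control whose sign may only change after a minimum dwell interval, trading a small oscillation band around the origin for a bounded switching (and hence data) rate.

```python
def simulate_dwell_switching(a=1.0, b=1.0, u_max=5.0, x0=1.0,
                             dwell=0.05, dt=1e-3, t_end=5.0):
    """Euler simulation of dx/dt = a*x + b*u with u in {-u_max, +u_max}.
    The control sign wants to oppose x, but is only allowed to flip after
    `dwell` seconds have passed since the last switch (no fast chattering)."""
    x = x0
    u = -u_max if x0 > 0 else u_max
    since_switch, trace = 0.0, []
    for _ in range(int(t_end / dt)):
        desired = -u_max if x > 0 else u_max
        if desired != u and since_switch >= dwell:
            u, since_switch = desired, 0.0       # switch only after the dwell
        x += (a * x + b * u) * dt
        since_switch += dt
        trace.append(x)
    return trace

trace = simulate_dwell_switching()
```

The state cannot settle exactly at the origin — during each dwell interval the wrong-signed control pushes it past zero — but it remains confined to a small band whose width shrinks with the dwell time.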
Effects of robotic knee exoskeleton on human energy expenditure. A number of studies discuss the design and control of various exoskeleton mechanisms, yet relatively few address the effect on the energy expenditure of the user. In this paper, we discuss the effect of a performance augmenting exoskeleton on the metabolic cost of an able-bodied user/pilot during periodic squatting. We investigated whether an exoskeleton device will significantly reduce the metabolic cost and what the influence of the chosen device control strategy is. By measuring oxygen consumption, minute ventilation, heart rate, blood oxygenation, and muscle EMG during 5-min squatting series, at one squat every 2 s, we show the effects of using a prototype robotic knee exoskeleton under three different noninvasive control approaches: gravity compensation approach, position-based approach, and a novel oscillator-based approach. The latter proposes a novel control that ensures synchronization of the device and the user. Statistically significant decrease in physiological responses can be observed when using the robotic knee exoskeleton under gravity compensation and oscillator-based control. On the other hand, the effects of position-based control were not significant in all parameters although all approaches significantly reduced the energy expenditure during squatting.
Biologically-inspired soft exosuit. In this paper, we present the design and evaluation of a novel soft cable-driven exosuit that can apply forces to the body to assist walking. Unlike traditional exoskeletons which contain rigid framing elements, the soft exosuit is worn like clothing, yet can generate moments at the ankle and hip with magnitudes of 18% and 30% of those naturally generated by the body during walking, respectively. Our design uses geared motors to pull on Bowden cables connected to the suit near the ankle. The suit has the advantages over a traditional exoskeleton in that the wearer's joints are unconstrained by external rigid structures, and the worn part of the suit is extremely light, which minimizes the suit's unintentional interference with the body's natural biomechanics. However, a soft suit presents challenges related to actuation force transfer and control, since the body is compliant and cannot support large pressures comfortably. We discuss the design of the suit and actuation system, including principles by which soft suits can transfer force to the body effectively and the biological inspiration for the design. For a soft exosuit, an important design parameter is the combined effective stiffness of the suit and its interface to the wearer. We characterize the exosuit's effective stiffness, and present preliminary results from it generating assistive torques to a subject during walking. We envision such an exosuit having broad applicability for assisting healthy individuals as well as those with muscle weakness.
Inter-class sparsity based discriminative least square regression Least square regression is a very popular supervised classification method. However, two main issues greatly limit its performance. The first one is that it only focuses on fitting the input features to the corresponding output labels while ignoring the correlations among samples. The second one is that the used label matrix, i.e., zero–one label matrix is inappropriate for classification. To solve these problems and improve the performance, this paper presents a novel method, i.e., inter-class sparsity based discriminative least square regression (ICS_DLSR), for multi-class classification. Different from other methods, the proposed method pursues that the transformed samples have a common sparsity structure in each class. For this goal, an inter-class sparsity constraint is introduced to the least square regression model such that the margins of samples from the same class can be greatly reduced while those of samples from different classes can be enlarged. In addition, an error term with row-sparsity constraint is introduced to relax the strict zero–one label matrix, which allows the method to be more flexible in learning the discriminative transformation matrix. These factors encourage the method to learn a more compact and discriminative transformation for regression, and thus it has the potential to perform better than other methods. Extensive experimental results show that the proposed method achieves the best performance in comparison with other methods for multi-class classification.
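The baseline being improved upon — plain least square regression onto a zero-one label matrix — is compact enough to sketch. The ridge term, bias column, and toy data below are illustrative additions of ours, not part of ICS_DLSR itself.

```python
import numpy as np

def lsr_fit(X, labels, n_classes, lam=1e-2):
    """Fit W minimizing ||X1 W - Y||^2 + lam ||W||^2, where Y is the rigid
    zero-one label matrix whose inflexibility the ICS_DLSR paper targets."""
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])   # append a bias column
    Y = np.eye(n_classes)[labels]                   # zero-one label targets
    d = X1.shape[1]
    return np.linalg.solve(X1.T @ X1 + lam * np.eye(d), X1.T @ Y)

def lsr_predict(W, X):
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.argmax(X1 @ W, axis=1)                # largest regressed score

# Two well-separated Gaussian blobs as a toy two-class problem.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (40, 2)), rng.normal(2.0, 0.3, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
W = lsr_fit(X, y, 2)
acc = float((lsr_predict(W, X) == y).mean())
```

ICS_DLSR keeps this regression backbone but replaces the rigid targets with a relaxed label matrix plus an inter-class sparsity constraint on the transformed samples.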
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
Paraphrase Generation with Deep Reinforcement Learning. Automatic generation of paraphrases from a given sentence is an important yet challenging task in natural language processing (NLP), and plays a key role in a number of applications such as question answering, search, and dialogue. In this paper, we present a deep reinforcement learning approach to paraphrase generation. Specifically, we propose a new framework for the task, which consists of a generator and an evaluator, both of which are learned from data. The generator, built as a sequence-to-sequence learning model, can produce paraphrases given a sentence. The evaluator, constructed as a deep matching model, can judge whether two sentences are paraphrases of each other. The generator is first trained by deep learning and then further fine-tuned by reinforcement learning in which the reward is given by the evaluator. For the learning of the evaluator, we propose two methods based on supervised learning and inverse reinforcement learning respectively, depending on the type of available training data. Empirical study shows that the learned evaluator can guide the generator to produce more accurate paraphrases. Experimental results demonstrate the proposed models (the generators) outperform the state-of-the-art methods in paraphrase generation in both automatic evaluation and human evaluation.
An intelligent analyzer and understander of English The paper describes a working analysis and generation program for natural language, which handles paragraph length input. Its core is a system of preferential choice between deep semantic patterns, based on what we call “semantic density.” The system is contrasted with syntax-oriented linguistic approaches, and with theorem-proving approaches to the understanding problem.
Hitting the right paraphrases in good time We present a random-walk-based approach to learning paraphrases from bilingual parallel corpora. The corpora are represented as a graph in which a node corresponds to a phrase, and an edge exists between two nodes if their corresponding phrases are aligned in a phrase table. We sample random walks to compute the average number of steps it takes to reach a ranking of paraphrases with better ones being "closer" to a phrase of interest. This approach allows "feature" nodes that represent domain knowledge to be built into the graph, and incorporates truncation techniques to prevent the graph from growing too large for efficiency. Current approaches, by contrast, implicitly presuppose the graph to be bipartite, are limited to finding paraphrases that are of length two away from a phrase, and do not generally permit easy incorporation of domain knowledge. Manual evaluation of generated output shows that our approach outperforms the state-of-the-art system of Callison-Burch (2008).
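The ranking idea — better paraphrases are "closer" in average random-walk steps — can be sketched on a hand-built alignment graph. The phrases, the toy graph, and the Monte-Carlo hitting-time estimator below are illustrative; the paper additionally supports feature nodes for domain knowledge and truncation for efficiency.

```python
import random
from collections import defaultdict

def avg_hitting_steps(graph, source, n_walks=2000, max_len=20, seed=0):
    """Monte-Carlo estimate of the average number of random-walk steps it
    takes to first reach each phrase from `source`; a smaller average means
    a 'closer', and hence better-ranked, paraphrase candidate."""
    rng = random.Random(seed)
    totals, hits = defaultdict(float), defaultdict(int)
    for _ in range(n_walks):
        node, first_seen = source, {source: 0}
        for step in range(1, max_len + 1):
            node = rng.choice(graph[node])        # uniform step to a neighbor
            if node not in first_seen:
                first_seen[node] = step           # record first arrival only
        for n, s in first_seen.items():
            totals[n] += s
            hits[n] += 1
    return {n: totals[n] / hits[n] for n in totals if n != source}

# Hypothetical phrase-table alignments: "car" aligns to two pivot phrases,
# both of which also align to "automobile"; "vehicle" has only one pivot.
graph = {
    "car": ["voiture", "wagen"],
    "voiture": ["car", "automobile", "vehicle"],
    "wagen": ["car", "automobile"],
    "automobile": ["voiture", "wagen"],
    "vehicle": ["voiture"],
}
scores = avg_hitting_steps(graph, "car")
```

Note that both candidates sit two alignment hops from "car", so a bipartite length-two method would not separate them; the walk-based score does, because "automobile" is reachable through more pivot paths and is therefore hit sooner on average.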
Re-examining machine translation metrics for paraphrase identification We propose to re-examine the hypothesis that automated metrics developed for MT evaluation can prove useful for paraphrase identification in light of the significant work on the development of new MT metrics over the last 4 years. We show that a meta-classifier trained using nothing but recent MT metrics outperforms all previous paraphrase identification approaches on the Microsoft Research Paraphrase corpus. In addition, we apply our system to a second corpus developed for the task of plagiarism detection and obtain extremely positive results. Finally, we conduct extensive error analysis and uncover the top systematic sources of error for a paraphrase identification approach relying solely on MT metrics. We release both the new dataset and the error analysis annotations for use by the community.
QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension. Current end-to-end machine reading and question answering (Q&A) models are primarily based on recurrent neural networks (RNNs) with attention. Despite their success, these models are often slow for both training and inference due to the sequential nature of RNNs. We propose a new Q&A model that does not require recurrent networks: It consists exclusively of attention and convolutions, yet achieves equivalent or better performance than existing models. On the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference. The speed-up gain allows us to train the model with much more data. We hence combine our model with data generated by backtranslation from a neural machine translation model. This data augmentation technique not only enhances the training examples but also diversifies the phrasing of the sentences, which results in immediate accuracy improvements. Our single model achieves 84.6 F1 score on the test set, which is significantly better than the best published F1 score of 81.8.
Integrating Transformer and Paraphrase Rules for Sentence Simplification. Sentence simplification aims to reduce the complexity of a sentence while retaining its original meaning. Current models for sentence simplification adopted ideas from machine translation studies and implicitly learned simplification mapping rules from normal-simple sentence pairs. In this paper, we explore a novel model based on a multi-layer and multi-head attention architecture and we propose two innovative approaches to integrate the Simple PPDB (A Paraphrase Database for Simplification), an external paraphrase knowledge base for simplification that covers a wide range of real-world simplification rules. The experiments show that the integration provides two major benefits: (1) the integrated model outperforms multiple state-of-the-art baseline models for sentence simplification in the literature; (2) through analysis of the rule utilization, the model seeks to select more accurate simplification rules. The code and models used in the paper are available at this https URL Sanqiang/text_simplification.
Universal Adversarial Triggers for Attacking and Analyzing NLP
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.
Dynamic Computation Offloading for Mobile-Edge Computing with Energy Harvesting Devices. Mobile-edge computing (MEC) is an emerging paradigm to meet the ever-increasing computation demands from mobile applications. By offloading the computationally intensive workloads to the MEC server, the quality of computation experience, e.g., the execution latency, could be greatly improved. Nevertheless, as the on-device battery capacities are limited, computation would be interrupted when the battery energy runs out. To provide satisfactory computation performance as well as achieving green computing, it is of significant importance to seek renewable energy sources to power mobile devices via energy harvesting (EH) technologies. In this paper, we will investigate a green MEC system with EH devices and develop an effective computation offloading strategy. The execution cost, which addresses both the execution latency and task failure, is adopted as the performance metric. A low-complexity online algorithm is proposed, namely, the Lyapunov optimization-based dynamic computation offloading algorithm, which jointly decides the offloading decision, the CPU-cycle frequencies for mobile execution, and the transmit power for computation offloading. A unique advantage of this algorithm is that the decisions depend only on the current system state without requiring distribution information of the computation task request, wireless channel, and EH processes. The implementation of the algorithm only requires to solve a deterministic problem in each time slot, for which the optimal solution can be obtained either in closed form or by bisection search. Moreover, the proposed algorithm is shown to be asymptotically optimal via rigorous analysis. Sample simulation results shall be presented to corroborate the theoretical analysis as well as validate the effectiveness of the proposed algorithm.
The exploration/exploitation tradeoff in dynamic cellular genetic algorithms This paper studies static and dynamic decentralized versions of the search model known as cellular genetic algorithm (cGA), in which individuals are located in a specific topology and interact only with their neighbors. Making changes in the shape of such topology or in the neighborhood may give birth to a high number of algorithmic variants. We perform these changes in a methodological way by tuning the concept of ratio. Since the relationship (ratio) between the topology and the neighborhood shape defines the search selection pressure, we propose to analyze in depth the influence of this ratio on the exploration/exploitation tradeoff. As we will see, it is difficult to decide which ratio is best suited for a given problem. Therefore, we introduce a preprogrammed change of this ratio during the evolution as a possible additional improvement that removes the need of specifying a single ratio. A later refinement will lead us to the first adaptive dynamic kind of cellular models to our knowledge. We conclude that these dynamic cGAs have the most desirable behavior among all the evaluated ones in terms of efficiency and accuracy; we validate our results on a set of seven different problems of considerable complexity in order to better sustain our conclusions.
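The "ratio" above can be illustrated numerically. Following one common dispersion-based definition (radius of the neighborhood divided by radius of the grid), the sketch below is an assumed formalization for illustration, not the paper's code:

```python
import math

def radius(points):
    # dispersion (radius) of a set of grid coordinates around their centroid
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points) / n)

def cga_ratio(neigh_offsets, width, height):
    grid = [(x, y) for x in range(width) for y in range(height)]
    return radius(neigh_offsets) / radius(grid)

linear5 = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # NEWS neighborhood
print(round(cga_ratio(linear5, 32, 32), 4))   # 0.0685
print(round(cga_ratio(linear5, 64, 16), 4))   # 0.047: narrower grid, lower ratio
```

A smaller ratio means weaker selection pressure (more exploration); varying grid shape while keeping the neighborhood fixed is one way to realize the dynamic variants the paper studies.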
Fast and Fully Automatic Ear Detection Using Cascaded AdaBoost Ear detection from a profile face image is an important step in many applications including biometric recognition. But accurate and rapid detection of the ear for real-time applications is a challenging task, particularly in the presence of occlusions. In this work, a cascaded AdaBoost based ear detection approach is proposed. In an experiment with a test set of 203 profile face images, all the ears were accurately detected by the proposed detector with a very low (5 × 10⁻⁶) false positive rate. It is also very fast and relatively robust to the presence of occlusions and degradation of the ear images (e.g. motion blur). The detection process is fully automatic and does not require any manual intervention.
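The cascade principle (cheap stages reject most windows early, expensive stages run only on survivors) can be sketched as follows; the stage features and thresholds are toy values, not a trained AdaBoost model:

```python
# Minimal sketch of cascaded-classifier evaluation with early rejection.
# Stage score functions and thresholds are made-up illustrations.

def cascade_detect(window, stages):
    """stages: list of (score_fn, threshold); reject as soon as one stage fails."""
    for score_fn, threshold in stages:
        if score_fn(window) < threshold:
            return False  # cheap early rejection of a non-ear window
    return True  # survived every stage -> detection

stages = [
    (lambda w: w["edge_density"], 0.2),  # cheap feature evaluated first
    (lambda w: w["symmetry"],     0.5),  # costlier feature only for survivors
]
print(cascade_detect({"edge_density": 0.9, "symmetry": 0.7}, stages))  # True
print(cascade_detect({"edge_density": 0.1, "symmetry": 0.9}, stages))  # False
```

The speed of such detectors comes from the first stages discarding the vast majority of scanned windows.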
RECIFE-MILP: An Effective MILP-Based Heuristic for the Real-Time Railway Traffic Management Problem The real-time railway traffic management problem consists of selecting appropriate train routes and schedules for minimizing the propagation of delay in case of traffic perturbation. In this paper, we tackle this problem by introducing RECIFE-MILP, a heuristic algorithm based on a mixed-integer linear programming model. RECIFE-MILP uses a model that extends one we previously proposed by including additional elements characterizing railway reality. In addition, it implements performance boosting methods selected among several ones through an algorithm configuration tool. We present a thorough experimental analysis that shows that the performances of RECIFE-MILP are better than the ones of the currently implemented traffic management strategy. RECIFE-MILP often finds the optimal solution to instances within the short computation time available in real-time applications. Moreover, RECIFE-MILP is robust to its configuration if an appropriate selection of the combination of boosting methods is performed.
Applications of Deep Reinforcement Learning in Communications and Networking: A Survey. This paper presents a comprehensive literature review on applications of deep reinforcement learning (DRL) in communications and networking. Modern networks, e.g., Internet of Things (IoT) and unmanned aerial vehicle (UAV) networks, become more decentralized and autonomous. In such networks, network entities need to make decisions locally to maximize the network performance under uncertainty of network environment. Reinforcement learning has been efficiently used to enable the network entities to obtain the optimal policy including, e.g., decisions or actions, given their states when the state and action spaces are small. However, in complex and large-scale networks, the state and action spaces are usually large, and the reinforcement learning may not be able to find the optimal policy in reasonable time. Therefore, DRL, a combination of reinforcement learning with deep learning, has been developed to overcome the shortcomings. In this survey, we first give a tutorial of DRL from fundamental concepts to advanced models. Then, we review DRL approaches proposed to address emerging issues in communications and networking. The issues include dynamic network access, data rate control, wireless caching, data offloading, network security, and connectivity preservation which are all important to next generation networks, such as 5G and beyond. Furthermore, we present applications of DRL for traffic routing, resource sharing, and data collection. Finally, we highlight important challenges, open issues, and future research directions of applying DRL.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Relevance scores (score_0 to score_13): 1.1055, 0.1, 0.1, 0.1, 0.1, 0.1, 0.035, 0.000158, 0, 0, 0, 0, 0, 0
All-domain Spectrum Command and Control via Hierarchical Dynamic Spectrum Sharing with Implemented Dynamic Spectrum Access Toolchain The proliferation of spectrum-dependent systems and the reduction in Federally-owned spectrum has challenged Radio Access Networks (RANs) to keep pace with requirements for increased data demands. Particularly, Department of Defense (DoD) bandwidth-intensive applications, such as the Internet of Military Things (IoMT), Command and Control (C2), and decentralized or distributed networks all share t...
Stochastic Power Adaptation with Multiagent Reinforcement Learning for Cognitive Wireless Mesh Networks As the scarce spectrum resource is becoming overcrowded, cognitive radio indicates great flexibility to improve the spectrum efficiency by opportunistically accessing the authorized frequency bands. One of the critical challenges for operating such radios in a network is how to efficiently allocate transmission powers and frequency resource among the secondary users (SUs) while satisfying the quality-of-service constraints of the primary users. In this paper, we focus on the noncooperative power allocation problem in cognitive wireless mesh networks formed by a number of clusters with the consideration of energy efficiency. Due to the SUs' dynamic and spontaneous properties, the problem is modeled as a stochastic learning process. We first extend single-agent Q-learning to a multiuser context, and then propose a conjecture-based multiagent Q-learning algorithm to achieve the optimal transmission strategies with only private and incomplete information. An intelligent SU performs Q-function updates based on the conjecture over the other SUs' stochastic behaviors. This learning algorithm provably converges given certain restrictions that arise during the learning procedure. Simulation experiments are used to verify the performance of our algorithm and demonstrate its effectiveness in improving the energy efficiency.
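The single-agent Q-learning update that such multiagent schemes extend can be sketched as below; the states, actions ("low"/"high" transmit power), reward, and parameter values are illustrative assumptions:

```python
# Standard tabular Q-learning update:
#   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))

def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Toy table: two channel states, two power actions.
Q = {"s0": {"low": 0.0, "high": 0.0}, "s1": {"low": 0.0, "high": 0.0}}
q_update(Q, "s0", "high", 1.0, "s1")
print(Q["s0"]["high"])  # 0.5 = 0 + 0.5 * (1 + 0.9*0 - 0)
```

The paper's conjecture-based variant replaces the exact max over the joint action space with each SU's belief about the other SUs' behavior, keeping the same update skeleton.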
Applications of Economic and Pricing Models for Resource Management in 5G Wireless Networks: A Survey. This paper presents a comprehensive literature review on applications of economic and pricing theory for resource management in the evolving fifth generation (5G) wireless networks. The 5G wireless networks are envisioned to overcome existing limitations of cellular networks in terms of data rate, capacity, latency, energy efficiency, spectrum efficiency, coverage, reliability, and cost per information transfer. To achieve the goals, the 5G systems will adopt emerging technologies such as massive multiple-input multiple-output, mmWave communications, and dense heterogeneous networks. However, 5G involves multiple entities and stakeholders that may have different objectives, e.g., high data rate, low latency, utility maximization, and revenue/profit maximization. This poses a number of challenges to resource management designs of 5G. While the traditional solutions may be neither efficient nor applicable, economic and pricing models have been recently developed and adopted as useful tools to achieve the objectives. In this paper, we review economic and pricing approaches proposed to address resource management issues in the 5G wireless networks including user association, spectrum allocation, and interference and power management. Furthermore, we present applications of economic and pricing models for wireless caching and mobile data offloading. Finally, we highlight important challenges, open issues and future research directions of applying economic and pricing models to the 5G wireless networks.
Effective Capacity in Wireless Networks: A Comprehensive Survey. Low latency applications, such as multimedia communications, autonomous vehicles, and Tactile Internet are the emerging applications for next-generation wireless networks, such as 5th generation (5G) mobile networks. Existing physical-layer channel models, however, do not explicitly consider quality-of-service (QoS) aware related parameters under specific delay constraints. To investigate the performance of low-latency applications in future networks, a new mathematical framework is needed. Effective capacity (EC), which is a link-layer channel model with QoS-awareness, can be used to investigate the performance of wireless networks under certain statistical delay constraints. In this paper, we provide a comprehensive survey on existing works, that use the EC model in various wireless networks. We summarize the work related to EC for different networks, such as cognitive radio networks (CRNs), cellular networks, relay networks, ad hoc networks, and mesh networks. We explore five case studies encompassing EC operation with different design and architectural requirements. We survey various delay-sensitive applications, such as voice and video with their EC analysis under certain delay constraints. We finally present the future research directions with open issues covering EC maximization.
A hierarchical learning approach to anti-jamming channel selection strategies. This paper investigates the channel selection problem for anti-jamming defense in an adversarial environment. In our work, we simultaneously consider malicious jamming and co-channel interference among users, and formulate this anti-jamming defense problem as a Stackelberg game with one leader and multiple followers. Specifically, the users and jammer independently and selfishly select their respective optimal strategies and obtain the optimal channels based on their own utilities. To derive the Stackelberg Equilibrium, a hierarchical learning framework is formulated, and a hierarchical learning algorithm (HLA) is proposed. In addition, the convergence performance of the proposed HLA algorithm is analyzed. Finally, we present simulation results to validate the effectiveness of the proposed algorithm.
Pricing-Based Channel Selection for D2D Content Sharing in Dynamic Environment In order to make device-to-device (D2D) content sharing give full play to its advantage of improving local area services, one of the important issues is to decide the channels that D2D pairs occupy. Most existing works study this issue in static environment, and ignore the guidance for D2D pairs to select the channel adaptively. In this paper, we investigate this issue in dynamic environment where...
Footprints: history-rich tools for information foraging Inspired by Hill and Hollan's original work [7], we have been developing a theory of interaction history and building tools to apply this theory to navigation in a complex information space. We have built a series of tools - map, paths, annotations and signposts - based on a physical-world navigation metaphor. These tools have been in use for over a year. Our user study involved a controlled browse task and showed that users were able to get the same amount of work done with significantly less effort.
Very Deep Convolutional Networks for Large-Scale Image Recognition. In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
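One reason the stacked small filters work: the receptive field of a stack of stride-1 3x3 convolutions grows by 2 per layer, so two layers see 5x5 and three see 7x7, with fewer parameters and more non-linearities than a single large filter. A quick check of that arithmetic:

```python
def receptive_field(num_layers, kernel=3, stride=1):
    """Receptive field of a stack of conv layers (stride 1, no pooling)."""
    rf = 1
    for _ in range(num_layers):
        rf += (kernel - 1) * stride
    return rf

print(receptive_field(2))  # 5: two 3x3 convs cover a 5x5 region
print(receptive_field(3))  # 7: three 3x3 convs cover a 7x7 region
```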
Chimp optimization algorithm. A novel optimizer called the Chimp Optimization Algorithm (ChOA) is proposed. ChOA is inspired by the individual intelligence and sexual motivation of chimps. ChOA alleviates the problems of slow convergence rate and trapping in local optima. The four main steps of chimp hunting are implemented.
Space-time modeling of traffic flow. This paper discusses the application of space-time autoregressive integrated moving average (STARIMA) methodology for representing traffic flow patterns. Traffic flow data are in the form of spatial time series and are collected at specific locations at constant intervals of time. Important spatial characteristics of the space-time process are incorporated in the STARIMA model through the use of weighting matrices estimated on the basis of the distances among the various locations where data are collected. These matrices distinguish the space-time approach from the vector autoregressive moving average (VARMA) methodology and enable the model builders to control the number of the parameters that have to be estimated. The proposed models can be used for short-term forecasting of space-time stationary traffic-flow processes and for assessing the impact of traffic-flow changes on other parts of the network. The three-stage iterative space-time model building procedure is illustrated using 7.5min average traffic flow data for a set of 25 loop-detectors located at roads that direct to the centre of the city of Athens, Greece. Data for two months with different traffic-flow characteristics are modelled in order to determine the stability of the parameter estimation.
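A one-step space-time autoregressive forecast of the simplest order can be sketched as below: each location's forecast combines its own past value with a weighted average of its neighbors' values through the spatial weight matrix. The weight matrix and coefficients are toy values, not estimates from the Athens data:

```python
def star1_forecast(x, W, phi0, phi1):
    """One-step STAR(1) forecast: x_hat[t+1] = phi0 * x[t] + phi1 * (W @ x[t])."""
    n = len(x)
    Wx = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
    return [phi0 * x[i] + phi1 * Wx[i] for i in range(n)]

# Three loop-detector locations; each row of W is normalized over neighbors.
W = [[0.0, 0.5, 0.5],
     [1.0, 0.0, 0.0],
     [0.5, 0.5, 0.0]]
x_t = [100.0, 80.0, 60.0]   # current flows (vehicles per interval)
print(star1_forecast(x_t, W, phi0=0.6, phi1=0.3))  # [81.0, 78.0, 63.0]
```

Restricting spatial interaction to the W matrix is what keeps the parameter count low compared with an unconstrained VARMA model.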
A simplified dual neural network for quadratic programming with its KWTA application. The design, analysis, and application of a new recurrent neural network for quadratic programming, called simplified dual neural network, are discussed. The analysis mainly concentrates on the convergence property and the computational complexity of the neural network. The simplified dual neural network is shown to be globally convergent to the exact optimal solution. The complexity of the neural network architecture is reduced with the number of neurons equal to the number of inequality constraints. Its application to k-winners-take-all (KWTA) operation is discussed to demonstrate how to solve problems with this neural network.
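The KWTA fixed point that such a dual network converges to is a threshold (the dual variable) separating the k-th and (k+1)-th largest inputs. The sketch below reads off that fixed point directly instead of integrating the network dynamics, and assumes distinct inputs:

```python
def kwta(u, k):
    """Return a 0/1 winner vector marking the k largest inputs.

    Equivalent to the network's equilibrium output for distinct inputs:
    every input at or above the k-th largest value wins.
    """
    threshold = sorted(u, reverse=True)[k - 1]
    return [1 if ui >= threshold else 0 for ui in u]

print(kwta([0.3, 0.9, 0.5, 0.1], k=2))  # [0, 1, 1, 0]
```

The neural-network formulation matters when the operation must run continuously on time-varying inputs, where the single-dual-variable architecture keeps the circuit size at the number of inequality constraints.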
G2-type SRMPC scheme for synchronous manipulation of two redundant robot arms. In this paper, to remedy the joint-angle drift phenomenon for manipulation of two redundant robot arms, a novel scheme for simultaneous repetitive motion planning and control (SRMPC) at the joint-acceleration level is proposed, which consists of two subschemes. To do so, the performance index of each SRMPC subscheme is derived and designed by employing the gradient dynamics twice, of which a convergence theorem and its proof are presented. In addition, to improve the accuracy of the motion planning and control, position-error and velocity-error feedbacks are incorporated into the forward-kinematics equation and analyzed via the Zhang neural-dynamics method. Then the two subschemes are simultaneously reformulated as two quadratic programs (QPs), which are finally unified into one QP problem. Furthermore, a piecewise-linear projection equation-based neural network (PLPENN) is used to solve the unified QP problem, which can handle the strictly convex QP problem in an inverse-free manner. More importantly, via such a unified QP formulation and the corresponding PLPENN solver, the synchronism of two redundant robot arms is guaranteed. Finally, two given tasks are fulfilled by 2 three-link and 2 five-link planar robot arms, respectively. Computer-simulation results validate the efficacy and accuracy of the SRMPC scheme and the corresponding PLPENN solver for synchronous manipulation of two redundant robot arms.
Distributed Kalman consensus filter with event-triggered communication: Formulation and stability analysis. The problem of distributed state estimation in sensor networks with event-triggered communication schedules on both the sensor-to-estimator channel and the estimator-to-estimator channel is studied. An event-triggered KCF is designed by deriving the optimal Kalman gain matrix that minimizes the mean squared error. A computationally scalable form of the proposed filter is presented via some approximations. An appropriate choice of the consensus gain matrix is provided to ensure the stochastic stability of the proposed filter.
Intention-detection strategies for upper limb exosuits: model-based myoelectric vs dynamic-based control The cognitive human-robot interaction between an exosuit and its wearer plays a key role in determining both the biomechanical effects of the device on movements and its perceived effectiveness. There is a lack of evidence, however, on the comparative performance of different control methods implemented on the same device. Here, we compare two different control approaches on the same robotic suit: a model-based myoelectric control (myoprocessor), which estimates the joint torque from the activation of target muscles, and a dynamic-based control that provides support against gravity using an inverse dynamic model. Tested on a cohort of four healthy participants, assistance from the exosuit results in a marked reduction in the effort of muscles working against gravity with both control approaches (peak reduction of 68.6±18.8% for the dynamic arm model and 62.4±25.1% for the myoprocessor), when compared to an unpowered condition. Neither of the two controllers had an effect on the performance of their users in a joint-angle tracking task (peak errors of 15.4° and 16.4° for the dynamic arm model and myoprocessor, respectively, compared to 13.1° in the unpowered condition). However, our results highlight the remarkable ability of the myoprocessor to seamlessly adapt to changing external dynamics.
Relevance scores (score_0 to score_13): 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0, 0, 0, 0, 0, 0, 0, 0
Can Dynamic TDD Enabled Half-Duplex Cell-Free Massive MIMO Outperform Full-Duplex Cellular Massive MIMO? We consider a dynamic time division duplex (DTDD) enabled cell-free massive multiple-input multiple-output (CF-mMIMO) system, where each half-duplex (HD) access point (AP) is scheduled to operate in the uplink (UL) or downlink (DL) mode based on the data demands of the user equipments (UEs), with the goal of maximizing the sum UL-DL spectral efficiency (SE). We develop a new, low complexity, greedy algorithm for the combinatorial AP scheduling problem, with an optimality guarantee theoretically established via showing that a lower bound of the sum UL-DL SE is sub-modular. We also consider pilot sequence reuse among the UEs to limit the channel estimation overhead. In CF systems, all the APs estimate the channel from every UE, making pilot allocation problem different from the cellular case. We develop a novel algorithm that iteratively minimizes the maximum pilot contamination across the UEs. We compare the performance of our solutions, both theoretically and via simulations, against a full duplex (FD) multi-cell mMIMO system. Our results show that, due to the joint processing of the signals at the central processing unit, CF-mMIMO with dynamic HD AP-scheduling significantly outperforms cellular FD-mMIMO in terms of the sum SE and 90% likely SE. Thus, DTDD enabled HD CF-mMIMO is a promising alternative to cellular FD-mMIMO, without the cost of hardware for self-interference suppression.
Max-Min Power Control in Downlink Massive MIMO With Distributed Antenna Arrays In this paper, we investigate optimal downlink power allocation in massive multiple-input multiple-output (MIMO) networks with distributed antenna arrays (DAAs) under correlated and uncorrelated channel fading. In DAA massive MIMO, a base station (BS) consists of multiple antenna sub-arrays. Notably, the antenna sub-arrays are deployed in arbitrary locations within a DAA massive MIMO cell. Consequently, the distance-dependent large-scale propagation coefficients are different from a user to these different antenna sub-arrays, which makes power control a challenging problem. We assume that the network operates in time-division duplex mode, where each BS obtains the channel estimates via uplink pilots. Based on the channel estimates, the BSs perform maximum-ratio transmission in the downlink. We then derive a closed-form signal-to-interference-plus-noise ratio (SINR) expression, where the channels are subject to correlated fading. Based on the SINR expression, we propose a network-wide max-min power control algorithm to ensure that each user in the network receives a uniform quality of service. Numerical results demonstrate the performance advantages offered by DAA massive MIMO. For some specific scenarios, DAA massive MIMO can improve the average per-user throughput up to 55%. Furthermore, we demonstrate that channel fading covariance is an important factor in determining the performance of DAA massive MIMO.
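Max-min power control via bisection on a common SINR target can be sketched for a deliberately simplified, interference-free toy model. This is an assumption for illustration only; the paper's SINR expression includes correlated fading and inter-user interference terms:

```python
def maxmin_power(gains, noise, P_total, iters=60):
    """Bisection on the common SINR target gamma.

    Toy model: SINR_k = g_k * p_k / noise, subject to sum(p_k) <= P_total.
    feasible(gamma) iff sum(gamma * noise / g_k) <= P_total.
    """
    lo, hi = 0.0, 1e9
    for _ in range(iters):
        gamma = 0.5 * (lo + hi)
        if sum(gamma * noise / g for g in gains) <= P_total:
            lo = gamma          # target achievable -> push higher
        else:
            hi = gamma          # target infeasible -> back off
    return lo, [lo * noise / g for g in gains]

# Users with weaker large-scale gains receive proportionally more power.
gamma, powers = maxmin_power([1.0, 0.5, 0.25], noise=0.1, P_total=1.0)
print(round(gamma, 4))  # closed form here: P / (noise * sum(1/g_k)) = 1/0.7
```

At the optimum every user attains the same SINR, which is exactly the "uniform quality of service" goal stated above.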
Comparison of Orthogonal vs. Union of Subspace Based Pilots for Multi-Cell Massive MIMO Systems In this paper, we analytically compare orthogonal pilot reuse (OPR) with union of subspace based pilots in terms of channel estimation error and achievable throughput. In OPR, due to the repetition of the same pilot sequences across all cells, inter-cell interference (ICI) leads to pilot contamination, which can severely degrade the performance of cell-edge users. In our proposed union of subspace based method of pilot sequence design, pilots of adjacent cells belong to distinct sets of orthonormal bases. Therefore, each user experiences a lower level of ICI, but from all users of neighboring cells. However, when the pilots are chosen from mutually unbiased orthonormal bases (MUOB), the ICI power scales down exactly as the inverse of the pilot length, leading to low ICI. Further, as the number of users increases, it may no longer be feasible to allot orthogonal pilots to all users within a cell. We find that, with limited number of pilot sequences, MUOB is significantly more resilient to intra-cell interference, yielding better channel estimates compared to OPR. On the other hand, when the pilot length is larger than the number of users, while OPR achieves channel estimates with very high accuracy for some of the users, MUOB is able to provide a more uniform quality of channel estimation across all users in the cell. We evaluate the fairness of OPR vis-à-vis MUOB using the Jain's fairness metric and max-min index. Via numerical simulations, we observe that the average fairness as well as convergence rates of utility metrics measured using MUOB pilots outperform the conventional OPR scheme.
The Road to 6G: Ten Physical Layer Challenges for Communications Engineers While the deployment of 5G cellular systems will continue well into the next decade, much interest is already being generated toward technologies that will underlie its successor, 6G. Undeniably, 5G will have a transformative impact on the way we live and communicate, but it is still far away from supporting the Internet of Everything, where upward of 1 million devices/km³ (both terrestrial and aerial) will require ubiquitous, reliable, low-latency connectivity. This article looks at some of the fundamental problems that pertain to key physical layer enablers for 6G. This includes highlighting challenges related to intelligent reflecting surfaces, cell-free massive MIMO, and THz communications. Our analysis covers theoretical modeling challenges, hardware implementation issues, and scalability, among others. The article concludes by delineating the critical role of signal processing in the new era of wireless communications.
Cell-Free Massive MIMO versus Small Cells. A Cell-Free Massive MIMO (multiple-input multiple-output) system comprises a very large number of distributed access points (APs), which simultaneously serve a much smaller number of users over the same time/frequency resources based on directly measured channel characteristics. The APs and users have only one antenna each. The APs acquire channel state information through time-division duplex operation and the reception of uplink pilot signals transmitted by the users. The APs perform multiplexing/de-multiplexing through conjugate beamforming on the downlink and matched filtering on the uplink. Closed-form expressions for individual user uplink and downlink throughputs lead to max–min power control algorithms. Max–min power control ensures uniformly good service throughout the area of coverage. A pilot assignment algorithm helps to mitigate the effects of pilot contamination, but power control is far more important in that regard. Cell-Free Massive MIMO has considerably improved performance with respect to a conventional small-cell scheme, whereby each user is served by a dedicated AP, in terms of both 95%-likely per-user throughput and immunity to shadow fading spatial correlation. Under uncorrelated shadow fading conditions, the cell-free scheme provides nearly fivefold improvement in 95%-likely per-user throughput over the small-cell scheme, and tenfold improvement when shadow fading is correlated.
Fuzzy logic in control systems: fuzzy logic controller. I.
Robust Indoor Positioning Provided by Real-Time RSSI Values in Unmodified WLAN Networks The positioning methods based on received signal strength (RSS) measurements link the RSS values to the position of the mobile station (MS) to be located. Their accuracy depends on the suitability of the propagation models used for the actual propagation conditions. In indoor wireless networks, these propagation conditions are very difficult to predict due to the unwieldy and dynamic nature of the RSS. In this paper, we present a novel method which dynamically estimates the propagation models that best fit the propagation environments, by using only RSS measurements obtained in real time. This method is based on maximizing compatibility of the MS to access points (AP) distance estimates. Once the propagation models are estimated in real time, it is possible to accurately determine the distance between the MS and each AP. By means of these distance estimates, the location of the MS can be obtained by trilateration. The method proposed coupled with simulations and measurements in a real indoor environment, demonstrates its feasibility and suitability, since it outperforms conventional RSS-based indoor location methods without using any radio map information nor a calibration stage.
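The two steps described, inverting a propagation model to get MS-to-AP distances and then trilaterating, can be sketched as below. The log-distance path-loss parameters (`p0`, `n`, `d0`) are illustrative assumptions, not the dynamically estimated models of the paper:

```python
import math

def rss_to_distance(rss, p0=-40.0, n=2.0, d0=1.0):
    """Invert the log-distance path-loss model: rss = p0 - 10 n log10(d/d0)."""
    return d0 * 10 ** ((p0 - rss) / (10 * n))

def trilaterate(aps, dists):
    """Linearized 2-D trilateration from three (x, y) APs and their distances."""
    (x1, y1), (x2, y2), (x3, y3) = aps
    d1, d2, d3 = dists
    # Subtracting circle equations yields a 2x2 linear system in (x, y).
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, ap) for ap in aps]
print(trilaterate(aps, dists))  # recovers (3.0, 4.0) with exact distances
```

With noisy RSS the distances are imperfect, which is why the paper's real-time estimation of the propagation model (rather than fixed `p0` and `n`) matters.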
New approach using ant colony optimization with ant set partition for fuzzy control design applied to the ball and beam system. In this paper we describe the design of a fuzzy logic controller for the ball and beam system using a modified Ant Colony Optimization (ACO) method for optimizing the type of membership functions, the parameters of the membership functions and the fuzzy rules. This is achieved by applying a systematic and hierarchical optimization approach modifying the conventional ACO algorithm using an ant set partition strategy. The simulation results show that the proposed algorithm achieves better results than the classical ACO algorithm for the design of the fuzzy controller.
Integrating structured biological data by Kernel Maximum Mean Discrepancy Motivation: Many problems in data integration in bioinformatics can be posed as one common question: Are two sets of observations generated by the same distribution? We propose a kernel-based statistical test for this problem, based on the fact that two distributions are different if and only if there exists at least one function having different expectation on the two distributions. Consequently we use the maximum discrepancy between function means as the basis of a test statistic. The Maximum Mean Discrepancy (MMD) can take advantage of the kernel trick, which allows us to apply it not only to vectors, but strings, sequences, graphs, and other common structured data types arising in molecular biology. Results: We study the practical feasibility of an MMD-based test on three central data integration tasks: Testing cross-platform comparability of microarray data, cancer diagnosis, and data-content based schema matching for two different protein function classification schemas. In all of these experiments, including high-dimensional ones, MMD is very accurate in finding samples that were generated from the same distribution, and outperforms its best competitors. Conclusions: We have defined a novel statistical test of whether two samples are from the same distribution, compatible with both multivariate and structured data, that is fast, easy to implement, and works well, as confirmed by our experiments. Availability: Contact: kb@dbs.ifi.lmu.de
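A direct biased estimate of the squared MMD with an RBF kernel can be written in a few lines for scalar samples (the kernel trick extends the same statistic to strings, graphs, and other structured data). A minimal sketch, not the authors' implementation:

```python
import math

def rbf(x, y, sigma=1.0):
    return math.exp(-(x - y) ** 2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy with an RBF kernel."""
    m, n = len(X), len(Y)
    kxx = sum(rbf(a, b, sigma) for a in X for b in X) / (m * m)
    kyy = sum(rbf(a, b, sigma) for a in Y for b in Y) / (n * n)
    kxy = sum(rbf(a, b, sigma) for a in X for b in Y) / (m * n)
    return kxx + kyy - 2 * kxy

same = [0.1, 0.5, 0.9]
print(mmd2(same, same))                    # 0.0 for identical samples
print(mmd2(same, [5.1, 5.5, 5.9]) > 0.5)   # large for well-separated samples
```

Swapping `rbf` for a string or graph kernel is all that is needed to apply the same test to the structured data types mentioned above.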
Noninterference for a Practical DIFC-Based Operating System The Flume system is an implementation of decentralized information flow control (DIFC) at the operating system level. Prior work has shown Flume can be implemented as a practical extension to the Linux operating system, allowing real Web applications to achieve useful security guarantees. However, the question remains if the Flume system is actually secure. This paper compares Flume with other recent DIFC systems like Asbestos, arguing that the latter is inherently susceptible to certain wide-bandwidth covert channels, and proving their absence in Flume by means of a noninterference proof in the communicating sequential processes formalism.
Efficient and reliable low-power backscatter networks There is a long-standing vision of embedding backscatter nodes like RFIDs into everyday objects to build ultra-low power ubiquitous networks. A major problem that has challenged this vision is that backscatter communication is neither reliable nor efficient. Backscatter nodes cannot sense each other, and hence tend to suffer from colliding transmissions. Further, they are ineffective at adapting the bit rate to channel conditions, and thus miss opportunities to increase throughput, or transmit above capacity causing errors. This paper introduces a new approach to backscatter communication. The key idea is to treat all nodes as if they were a single virtual sender. One can then view collisions as a code across the bits transmitted by the nodes. By ensuring only a few nodes collide at any time, we make collisions act as a sparse code and decode them using a new customized compressive sensing algorithm. Further, we can make these collisions act as a rateless code to automatically adapt the bit rate to channel quality --i.e., nodes can keep colliding until the base station has collected enough collisions to decode. Results from a network of backscatter nodes communicating with a USRP backscatter base station demonstrate that the new design produces a 3.5× throughput gain, and due to its rateless code, reduces message loss rate in challenging scenarios from 50% to zero.
Magnetic, Acceleration Fields and Gyroscope Quaternion (MAGYQ)-based attitude estimation with smartphone sensors for indoor pedestrian navigation. The dependence of proposed pedestrian navigation solutions on a dedicated infrastructure is a limiting factor to the deployment of location based services. Consequently self-contained Pedestrian Dead-Reckoning (PDR) approaches are gaining interest for autonomous navigation. Even if the quality of low cost inertial sensors and magnetometers has strongly improved, processing noisy sensor signals combined with high hand dynamics remains a challenge. Estimating accurate attitude angles for achieving long term positioning accuracy is targeted in this work. A new Magnetic, Acceleration fields and GYroscope Quaternion (MAGYQ)-based attitude angles estimation filter is proposed and demonstrated with handheld sensors. It benefits from a gyroscope signal modelling in the quaternion set and two new opportunistic updates: magnetic angular rate update (MARU) and acceleration gradient update (AGU). MAGYQ filter performances are assessed indoors, outdoors, with dynamic and static motion conditions. The heading error, using only the inertial solution, is found to be less than 10 degrees after 1.5 km walking. The performance is also evaluated in the positioning domain with trajectories computed following a PDR strategy.
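The core primitive such a filter builds on, propagating an attitude quaternion from gyroscope samples, can be sketched as follows. This is a minimal illustration, not the MAGYQ filter itself: the first-order integration scheme, step size, and sample rotation are assumptions made for the example.

```python
# Sketch only: quaternion propagation from gyroscope angular rates,
# the building block that MAGYQ models in the quaternion set.
import math

def quat_mult(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def integrate_gyro(q, omega, dt):
    """One Euler step of q_dot = 0.5 * q x (0, wx, wy, wz), then renormalize."""
    qdot = quat_mult(q, (0.0,) + tuple(omega))
    q = tuple(qi + 0.5 * dt * qd for qi, qd in zip(q, qdot))
    n = math.sqrt(sum(qi * qi for qi in q))
    return tuple(qi / n for qi in q)

# Rotate at 90 deg/s about z for 1 s in small steps: heading ends near 90 deg.
q = (1.0, 0.0, 0.0, 0.0)
dt, w = 0.001, (0.0, 0.0, math.pi / 2)
for _ in range(1000):
    q = integrate_gyro(q, w, dt)
yaw = math.degrees(2.0 * math.atan2(q[3], q[0]))
```

In a real PDR pipeline this prediction would then be corrected by the magnetic and acceleration updates (MARU, AGU) the abstract describes.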
Inter-class sparsity based discriminative least square regression Least square regression is a very popular supervised classification method. However, two main issues greatly limit its performance. The first one is that it only focuses on fitting the input features to the corresponding output labels while ignoring the correlations among samples. The second one is that the used label matrix, i.e., zero–one label matrix is inappropriate for classification. To solve these problems and improve the performance, this paper presents a novel method, i.e., inter-class sparsity based discriminative least square regression (ICS_DLSR), for multi-class classification. Different from other methods, the proposed method pursues that the transformed samples have a common sparsity structure in each class. For this goal, an inter-class sparsity constraint is introduced to the least square regression model such that the margins of samples from the same class can be greatly reduced while those of samples from different classes can be enlarged. In addition, an error term with row-sparsity constraint is introduced to relax the strict zero–one label matrix, which allows the method to be more flexible in learning the discriminative transformation matrix. These factors encourage the method to learn a more compact and discriminative transformation for regression and thus has the potential to perform better than other methods. Extensive experimental results show that the proposed method achieves the best performance in comparison with other methods for multi-class classification.
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal of developing interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which arise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in particular, in this paper, we enfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of current studies within the area of cHRI, (ii) revalidating it based on our comprehensive empirical experience with research in kindergarten settings, in both laboratory and real-world contexts, and (iii) providing a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
Scores (score_0 … score_13): 1.2, 0.2, 0.2, 0.1, 0.011111, 0, 0, 0, 0, 0, 0, 0, 0, 0
TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation. We present TRANX, a transition-based neural semantic parser that maps natural language (NL) utterances into formal meaning representations (MRs). TRANX uses a transition system based on the abstract syntax description language for the target MR, which gives it two major advantages: (1) it is highly accurate, using information from the syntax of the target MR to constrain the output space and model the information flow, and (2) it is highly generalizable, and can easily be applied to new types of MR by just writing a new abstract syntax description corresponding to the allowable structures in the MR. Experiments on four different semantic parsing and code generation tasks show that our system is generalizable, extensible, and effective, registering strong results compared to existing neural semantic parsers.
IncSQL: Training Incremental Text-to-SQL Parsers with Non-Deterministic Oracles. We present a sequence-to-action parsing approach for the natural language to SQL task that incrementally fills the slots of a SQL query with feasible actions from a pre-defined inventory. To account for the fact that typically there are multiple correct SQL queries with the same or very similar semantics, we draw inspiration from syntactic parsing techniques and propose to train our sequence-to-action models with non-deterministic oracles. We evaluate our models on the WikiSQL dataset and achieve an execution accuracy of 83.7% on the test set, a 2.1% absolute improvement over the models trained with traditional static oracles assuming a single correct target SQL query. When further combined with the execution-guided decoding strategy, our model sets a new state-of-the-art performance at an execution accuracy of 87.1%.
Semantic Parsing for Task Oriented Dialog using Hierarchical Representations. Task oriented dialog systems typically first parse user utterances to semantic frames comprised of intents and slots. Previous work on task-oriented intent detection and slot filling has been restricted to one intent per query and one slot label per token, and thus cannot model complex compositional requests. Alternative semantic parsing systems have represented queries as logical forms, but these are challenging to annotate and parse. We propose a hierarchical annotation scheme for semantic parsing that allows the representation of compositional queries, and can be efficiently and accurately parsed by standard constituency parsing models. We release a dataset of 44k annotated queries (fb.me/semanticparsingdialog), and show that parsing models outperform sequence-to-sequence approaches on this dataset.
Recall-Oriented Evaluation for Information Retrieval Systems. In a recall context, the user is interested in retrieving all relevant documents rather than retrieving a few that are at the top of the results list. In this article we propose ROM (Recall Oriented Measure) which takes into account the main elements that should be considered in evaluating information retrieval systems while ordering them in a way explicitly adapted to a recall context.
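The abstract does not reproduce ROM's formula, so as context only, here is a minimal sketch contrasting precision@k with recall@k, the quantities a recall-oriented measure re-weights. The ranked list and relevance judgments are invented for the example.

```python
# Illustrative only: precision@k vs recall@k on a toy ranked list.
def precision_at_k(ranked, relevant, k):
    hits = sum(1 for d in ranked[:k] if d in relevant)
    return hits / k

def recall_at_k(ranked, relevant, k):
    hits = sum(1 for d in ranked[:k] if d in relevant)
    return hits / len(relevant)

ranked = ["d1", "d7", "d3", "d9", "d2", "d5"]   # system output, best first
relevant = {"d1", "d2", "d3", "d4"}             # ground-truth relevant set
p5 = precision_at_k(ranked, relevant, 5)        # 3 hits in the top 5
r5 = recall_at_k(ranked, relevant, 5)           # 3 of the 4 relevant found
```

In a recall context the second quantity matters most: a system can have high precision@5 while still missing relevant documents entirely.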
Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task. We present Spider, a large-scale, complex and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 college students. It consists of 10,181 questions and 5,693 unique complex SQL queries on 200 databases with multiple tables, covering 138 different domains. We define a new complex and cross-domain semantic parsing and text-to-SQL task where different complex SQL queries and databases appear in train and test sets. In this way, the task requires the model to generalize well to both new SQL queries and new database schemas. Spider is distinct from most of the previous semantic parsing tasks because they all use a single database and the exact same programs in the train set and the test set. We experiment with various state-of-the-art models and the best model achieves only 12.4% exact matching accuracy on a database split setting. This shows that Spider presents a strong challenge for future research. Our dataset and task are publicly available at this https URL
SQLNet: Generating Structured Queries From Natural Language Without Reinforcement Learning. Synthesizing SQL queries from natural language is a long-standing open problem and has been attracting considerable interest recently. Toward solving the problem, the de facto approach is to employ a sequence-to-sequence-style model. Such an approach will necessarily require the SQL queries to be serialized. Since the same SQL query may have multiple equivalent serializations, training a sequence-to-sequence-style model is sensitive to the choice from one of them. This phenomenon is documented as the order-matters problem. Existing state-of-the-art approaches rely on reinforcement learning to reward the decoder when it generates any of the equivalent serializations. However, we observe that the improvement from reinforcement learning is limited. In this paper, we propose a novel approach, i.e., SQLNet, to fundamentally solve this problem by avoiding the sequence-to-sequence structure when the order does not matter. In particular, we employ a sketch-based approach where the sketch contains a dependency graph, so that one prediction can be done by taking into consideration only the previous predictions that it depends on. In addition, we propose a sequence-to-set model as well as the column attention mechanism to synthesize the query based on the sketch. By combining all these novel techniques, we show that SQLNet can outperform the prior art by 9% to 13% on the WikiSQL task.
One billion word benchmark for measuring progress in statistical language modeling. We propose a new benchmark corpus to be used for measuring progress in statistical language modeling. With almost one billion words of training data, we hope this benchmark will be useful to quickly evaluate novel language modeling techniques, and to compare their contribution when combined with other advanced techniques. We show performance of several well-known types of language models, with the best results achieved with a recurrent neural network based language model. The baseline unpruned Kneser-Ney 5-gram model achieves perplexity 67.6; a combination of techniques leads to 35% reduction in perplexity, or 10% reduction in cross-entropy (bits), over that baseline. The benchmark is available as a code.google.com project; besides the scripts needed to rebuild the training/held-out data, it also makes available log-probability values for each word in each of ten held-out data sets, for each of the baseline n-gram models.
Constrained Kalman filtering for indoor localization of transport vehicles using floor-installed HF RFID transponders Localization of transport vehicles is an important issue for many intralogistics applications. The paper presents an inexpensive solution for indoor localization of vehicles. Global localization is realized by detection of RFID transponders, which are integrated in the floor. The paper introduces a novel algorithm for fusing RFID readings with odometry using Constrained Kalman filtering, and reports experimental results with a Mecanum based omnidirectional vehicle on a NaviFloor® installation, which includes passive HF RFID transponders. The experiments show that the proposed Constrained Kalman filter provides a similar localization accuracy compared to a Particle filter but with much lower computational expense.
Joint Task Offloading and Resource Allocation for Multi-Server Mobile-Edge Computing Networks Mobile-edge computing (MEC) is an emerging paradigm that provides a capillary distribution of cloud computing capabilities to the edge of the wireless access network, enabling rich services and applications in close proximity to the end users. In this paper, an MEC enabled multi-cell wireless network is considered where each base station (BS) is equipped with a MEC server that assists mobile users...
Distributed multirobot localization In this paper, we present a new approach to the problem of simultaneously localizing a group of mobile robots capable of sensing one another. Each of the robots collects sensor data regarding its own motion and shares this information with the rest of the team during the update cycles. A single estimator, in the form of a Kalman filter, processes the available positioning information from all the members of the team and produces a pose estimate for every one of them. The equations for this centralized estimator can be written in a decentralized form, therefore allowing this single Kalman filter to be decomposed into a number of smaller communicating filters. Each of these filters processes the sensor data collected by its host robot. Exchange of information between the individual filters is necessary only when two robots detect each other and measure their relative pose. The resulting decentralized estimation schema, which we call collective localization, constitutes a unique means for fusing measurements collected from a variety of sensors with minimal communication and processing requirements. The distributed localization algorithm is applied to a group of three robots and the improvement in localization accuracy is presented. Finally, a comparison to the equivalent decentralized information filter is provided.
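The per-robot filter's two phases, propagating its own odometry and then fusing an observation, can be illustrated in one dimension. This is only a sketch of the generic Kalman predict/update pattern that collective localization decentralizes; all state, noise, and measurement values are invented.

```python
# Toy 1-D Kalman filter: odometry prediction followed by a position fix,
# e.g. one derived from another robot's relative measurement.
def predict(x, P, u, Q):
    """Propagate state x with odometry increment u and process noise Q."""
    return x + u, P + Q

def update(x, P, z, R):
    """Fuse a direct measurement z of x with measurement variance R."""
    K = P / (P + R)                      # Kalman gain
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 1.0                          # initial position and variance
x, P = predict(x, P, u=1.0, Q=0.5)       # odometry says we moved 1 m
x, P = update(x, P, z=1.2, R=0.5)        # external fix places us at 1.2 m
```

The point of the collective scheme is that each robot runs only its own such filter, and cross-robot information is exchanged only at relative-pose detections.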
A simplified dual neural network for quadratic programming with its KWTA application. The design, analysis, and application of a new recurrent neural network for quadratic programming, called simplified dual neural network, are discussed. The analysis mainly concentrates on the convergence property and the computational complexity of the neural network. The simplified dual neural network is shown to be globally convergent to the exact optimal solution. The complexity of the neural network architecture is reduced with the number of neurons equal to the number of inequality constraints. Its application to k-winners-take-all (KWTA) operation is discussed to demonstrate how to solve problems with this neural network.
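A hedged sketch of the idea of solving KWTA with a single dual variable: the winners are recovered as clipped differences against one shared threshold that is driven until exactly k units are active. This is a simplified Euler simulation under the assumption that input gaps are large enough for near-binary outputs, not the paper's exact network.

```python
# Sketch: one dual neuron y; at equilibrium x_i = clip(u_i - y, 0, 1)
# and the constraint sum(x) = k holds, so the k largest inputs win.
def kwta(u, k, steps=20000, dt=0.001):
    y = 0.0
    for _ in range(steps):
        x = [min(1.0, max(0.0, ui - y)) for ui in u]
        y += dt * (sum(x) - k)           # drive sum(x) toward k
    return [min(1.0, max(0.0, ui - y)) for ui in u]

scores = [0.5, 3.5, 0.9, 2.0, 4.0]       # made-up inputs
x = kwta(scores, k=2)
winners = [i for i, v in enumerate(x) if v > 0.5]
```

Note the architecture mirrors the paper's headline property: the number of extra neurons matches the number of (in)equality constraints, here a single one.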
Adaptive dynamic programming and optimal control of nonlinear nonaffine systems. In this paper, a novel optimal control design scheme is proposed for continuous-time nonaffine nonlinear dynamic systems with unknown dynamics by adaptive dynamic programming (ADP). The proposed methodology iteratively updates the control policy online by using the state and input information without identifying the system dynamics. An ADP algorithm is developed, and can be applied to a general class of nonlinear control design problems. The convergence analysis for the designed control scheme is presented, along with rigorous stability analysis for the closed-loop system. The effectiveness of this new algorithm is illustrated by two simulation examples.
Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network. In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (including data from 63 patients with 34,281 events) and testing (including data from 19 patients with 8571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F-score of 0.99 were attained with the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
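For illustration only, the three layer operations the abstract names (1-D convolution, ReLU, max pooling) can each be written in a few lines of pure Python. The signal and kernel below are toy values, not ECG data or the paper's trained weights.

```python
# What one CNN stage computes, written out explicitly.
def conv1d(x, w):
    """Valid-mode 1-D convolution (no padding, stride 1)."""
    n = len(w)
    return [sum(x[i + j] * w[j] for j in range(n)) for i in range(len(x) - n + 1)]

def relu(x):
    return [max(0.0, v) for v in x]

def maxpool(x, size=2):
    """Non-overlapping max pooling over windows of the given size."""
    return [max(x[i:i + size]) for i in range(0, len(x) - size + 1, size)]

signal = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0]
kernel = [1.0, -1.0]                     # a simple difference filter
features = maxpool(relu(conv1d(signal, kernel)))
```

A real model stacks six such stages (plus dropout) and learns the kernels from data rather than fixing them by hand.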
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores (score_0 … score_13): 1.11, 0.11, 0.11, 0.1, 0.1, 0.039143, 0.000765, 0, 0, 0, 0, 0, 0, 0
Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power Experimental analysis of the performance of a proposed method is a crucial and necessary task in an investigation. In this paper, we focus on the use of nonparametric statistical inference for analyzing the results obtained in an experiment design in the field of computational intelligence. We present a case study which involves a set of techniques in classification tasks and we study a set of nonparametric procedures useful to analyze the behavior of a method with respect to a set of algorithms, such as the framework in which a new proposal is developed. Particularly, we discuss some basic and advanced nonparametric approaches which improve the results offered by the Friedman test in some circumstances. A set of post hoc procedures for multiple comparisons is presented together with the computation of adjusted p-values. We also perform an experimental analysis for comparing their power, with the objective of detecting the advantages and disadvantages of the statistical tests described. We found that some aspects such as the number of algorithms, number of data sets and differences in performance offered by the control method are very influential in the statistical tests studied. Our final goal is to offer a complete guideline for the use of nonparametric statistical procedures for performing multiple comparisons in experimental studies.
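As background for the procedures discussed, here is a minimal sketch of the basic Friedman statistic computed over an algorithms-by-datasets result table. Ties are ignored for brevity, and the post hoc corrections and adjusted p-values the paper analyzes are not reproduced; the result values are invented.

```python
# Friedman chi-square over average ranks (lower error = better rank).
def friedman_statistic(results):
    """results[i][j] = error of algorithm j on data set i."""
    n, k = len(results), len(results[0])
    ranks = [[0.0] * k for _ in range(n)]
    for i, row in enumerate(results):
        order = sorted(range(k), key=lambda j: row[j])   # ties ignored here
        for pos, j in enumerate(order):
            ranks[i][j] = pos + 1.0
    avg = [sum(ranks[i][j] for i in range(n)) / n for j in range(k)]
    chi2 = 12.0 * n / (k * (k + 1)) * (sum(r * r for r in avg)
                                       - k * (k + 1) ** 2 / 4.0)
    return chi2, avg

# Three data sets, three algorithms; algorithm 0 is consistently best.
results = [[0.10, 0.20, 0.30],
           [0.20, 0.30, 0.40],
           [0.15, 0.25, 0.35]]
chi2, avg = friedman_statistic(results)
```

The average ranks computed here are exactly what the post hoc multiple-comparison procedures in the paper then test pairwise.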
Financial time series prediction using a dendritic neuron model. As a complicated dynamic system, financial time series calls for an appropriate forecasting model. In this study, we propose a neuron model based on dendritic mechanisms and a phase space reconstruction (PSR) to analyze the Shanghai Stock Exchange Composite Index, Deutscher Aktienindex, N225, and DJI Average. The PSR allows us to reconstruct the financial time series, so we can prove that attractors exist for the systems constructed. Thus, the attractors obtained can be observed intuitively in a three-dimensional search space, thereby allowing us to analyze the characteristics of the dynamic systems. In addition, using the reconstructed phase space, we confirmed the chaotic properties and used the reciprocal of the maximum Lyapunov exponent to determine the limit of prediction. We also made short-term predictions based on the nonlinear approximating dendritic neuron model, where the experimental results showed that the proposed methodology which hybridizes PSR and the dendritic model performed better than the traditional multi-layered perceptron, the Elman neural network, the single multiplicative neuron model and the neuro-fuzzy inference system in terms of prediction accuracy and training time. Hopefully, this hybrid technique can advance research on financial time series and provide an effective solution for risk management.
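The phase space reconstruction step can be sketched as a delay-coordinate embedding; the embedding dimension and lag below are illustrative choices, not the values selected in the paper.

```python
# Takens-style delay embedding: map a scalar series to points
# (x(t), x(t + tau), ..., x(t + (dim - 1) * tau)).
def delay_embed(series, dim, tau):
    last = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(last)]

series = [0, 1, 2, 3, 4, 5, 6]           # stand-in for price observations
points = delay_embed(series, dim=3, tau=2)
```

With dim=3, the embedded points can be plotted directly in the three-dimensional space in which the abstract says the attractors are observed.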
Recent Advances in Evolutionary Computation Evolutionary computation has experienced a tremendous growth in the last decade in both theoretical analyses and industrial applications. Its scope has evolved beyond its original meaning of “biological evolution” toward a wide variety of nature inspired computational algorithms and techniques, including evolutionary, neural, ecological, social and economical computation, etc., in a unified framework. Many research topics in evolutionary computation nowadays are not necessarily “evolutionary”. This paper provides an overview of some recent advances in evolutionary computation that have been made in CERCIA at the University of Birmingham, UK. It covers a wide range of topics in optimization, learning and design using evolutionary approaches and techniques, and theoretical results in the computational time complexity of evolutionary algorithms. Some issues related to future development of evolutionary computation are also discussed.
A Multi-Layered Immune System For Graph Planarization Problem This paper presents a new multi-layered artificial immune system architecture using the ideas generated from the biological immune system for solving combinatorial optimization problems. The proposed methodology is composed of five layers. After expressing the problem in a suitable representation in the first layer, the search space and the features of the problem are estimated and extracted in the second and third layers, respectively. Through taking advantage of the minimized search space from estimation and the heuristic information from extraction, the antibodies (or solutions) are evolved in the fourth layer and finally the fittest antibody is exported. In order to demonstrate the efficiency of the proposed system, the graph planarization problem is tested. Simulation results based on several benchmark instances show that the proposed algorithm performs better than traditional algorithms.
An approximate logic neuron model with a dendritic structure. An approximate logic neuron model (ALNM) based on the interaction of dendrites and the dendritic plasticity mechanism is proposed. The model consists of four layers: a synaptic layer, a dendritic layer, a membrane layer, and a soma body. ALNM has a neuronal-pruning function to form its unique dendritic topology for a particular task, through screening out useless synapses and unnecessary dendrites during training. In addition, corresponding to the mature dendritic morphology, the trained ALNM can be substituted by a logic circuit, using the logic NOT, AND and OR operations, which possesses powerful operation capacities and can be simply implemented in hardware. Since the ALNM is a feed-forward model, an error back-propagation algorithm is used to train it. To verify the effectiveness of the proposed model, we apply the model to the Iris, Glass and Cancer datasets. The results of the classification accuracy rate and convergence speed are analyzed, discussed, and compared with a standard back-propagation neural network. Simulation results show that ALNM can be used as an effective pattern classification method. It reduces the size of the dataset features by learning, without losing any essential information. The interaction between features can also be observed in the dendritic morphology. Simultaneously, the logic circuit can be used as a single classifier to deal with big data accurately and efficiently.
Metaheuristics: A Comprehensive Overview And Classification Along With Bibliometric Analysis Research in metaheuristics for global optimization problems is currently experiencing an overload of a wide range of available metaheuristic-based solution approaches. Since the commencement of the first set of classical metaheuristic algorithms, namely genetic algorithms, particle swarm optimization, ant colony optimization, simulated annealing and tabu search, from the early 70s to the late 90s, several new advancements have been recorded with an exponential growth in novel proposals of new generation metaheuristic algorithms. Because these algorithms are neither entirely judged based on their performance values nor according to the useful insight they may provide, but rather attention is given to the novelty of the processes they purportedly model, this area of study will continue to periodically see the arrival of several similar new techniques in the future. However, there is an obvious reason to keep track of the progression of these algorithms by collating their general algorithmic profiles in terms of design inspirational source, classification based on swarm or evolutionary search concepts, existing variations from the original design, and application areas. In this paper, we present a relatively new taxonomic classification list of both the classical and new generation sets of metaheuristic algorithms available in the literature, with the aim of providing an easily accessible collection of popular optimization tools for the global optimization research community, who are at the forefront in utilizing these tools for solving complex and difficult real-world problems. Furthermore, we also present a bibliometric analysis of this field of metaheuristics for the last 30 years.
Dendritic Neuron Model With Effective Learning Algorithms for Classification, Approximation, and Prediction. An artificial neural network (ANN) that mimics the information processing mechanisms and procedures of neurons in human brains has achieved a great success in many fields, e.g., classification, prediction, and control. However, traditional ANNs suffer from many problems, such as the hard understanding problem, the slow and difficult training problems, and the difficulty to scale them up. These problems motivate us to develop a new dendritic neuron model (DNM) by considering the nonlinearity of synapses, not only for a better understanding of a biological neuronal system, but also for providing a more useful method for solving practical problems. To achieve its better performance for solving problems, six learning algorithms including biogeography-based optimization, particle swarm optimization, genetic algorithm, ant colony optimization, evolutionary strategy, and population-based incremental learning are for the first time used to train it. The best combination of its user-defined parameters has been systemically investigated by using the Taguchi's experimental design method. The experiments on 14 different problems involving classification, approximation, and prediction are conducted by using a multilayer perceptron and the proposed DNM. The results suggest that the proposed learning algorithms are effective and promising for training DNM and thus make DNM more powerful in solving classification, approximation, and prediction problems.
Empirical Modelling of Genetic Algorithms This paper addresses the problem of reliably setting genetic algorithm parameters for consistent labelling problems. Genetic algorithm parameters are notoriously difficult to determine. This paper proposes a robust empirical framework, based on the analysis of factorial experiments. The use of a Graeco-Latin square permits an initial study of a wide range of parameter settings. This is followed by fully crossed factorial experiments with narrower ranges, which allow detailed analysis by logistic regression. The empirical models derived can be used to determine optimal algorithm parameters and to shed light on interactions between the parameters and their relative importance. Refined models are produced, which are shown to be robust under extrapolation to up to triple the problem size.
Global finite-time stabilization of a class of uncertain nonlinear systems This paper studies the problem of finite-time stabilization for nonlinear systems. We prove that global finite-time stabilizability of uncertain nonlinear systems that are dominated by a lower-triangular system can be achieved by Hölder continuous state feedback. The proof is based on the finite-time Lyapunov stability theorem and the nonsmooth feedback design method developed recently for the control of inherently nonlinear systems that cannot be dealt with by any smooth feedback. A recursive design algorithm is developed for the construction of a Hölder continuous, global finite-time stabilizer as well as a C^1 positive definite and proper Lyapunov function that guarantees finite-time stability.
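The finite-time Lyapunov stability theorem invoked in the proof rests on a standard comparison argument; in illustrative notation (not the paper's), with c > 0 and α ∈ (0, 1), it gives an explicit settling-time bound:

```latex
% If V is positive definite and, along closed-loop trajectories,
\dot V(x(t)) \le -c\, V(x(t))^{\alpha},
% then separating variables gives d(V^{1-\alpha})/dt \le -c(1-\alpha), hence
V(x(t)) = 0 \quad \text{for all } t \ge T_0,
\qquad
T_0 \le \frac{V(x(0))^{1-\alpha}}{c\,(1-\alpha)}.
```

The exponent α strictly below one is what distinguishes finite-time convergence from the merely exponential decay obtained at α = 1.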
Dyme: Dynamic Microservice Scheduling in Edge Computing Enabled IoT In recent years, the rapid development of mobile edge computing (MEC) provides an efficient execution platform at the edge for Internet-of-Things (IoT) applications. The MEC also provides optimal resources to different microservices; however, underlying network conditions and infrastructures inherently affect the execution process in MEC. Therefore, in the presence of varying network conditions, it is necessary to optimally execute the available tasks of end users while maximizing the energy efficiency of the edge platform and providing fair Quality-of-Service (QoS). On the other hand, it is necessary to schedule the microservices dynamically to minimize the total network delay and network price. Thus, in this article, unlike most of the existing works, we propose a dynamic microservice scheduling scheme for MEC. We design the microservice scheduling framework mathematically and also discuss the computational complexity of the scheduling algorithm. Extensive simulation results show that the microservice scheduling framework significantly improves the performance metrics in terms of total network delay, average price, satisfaction level, energy consumption rate (ECR), failure rate, and network throughput over other existing baselines.
Kernel-Based Positioning in Wireless Local Area Networks The recent proliferation of Location-Based Services (LBSs) has necessitated the development of effective indoor positioning solutions. In such a context, Wireless Local Area Network (WLAN) positioning is a particularly viable solution in terms of hardware and installation costs due to the ubiquity of WLAN infrastructures. This paper examines three aspects of the problem of indoor WLAN positioning using received signal strength (RSS). First, we show that, due to the variability of RSS features over space, a spatially localized positioning method leads to improved positioning results. Second, we explore the problem of access point (AP) selection for positioning and demonstrate the need for further research in this area. Third, we present a kernelized distance calculation algorithm for comparing RSS observations to RSS training records. Experimental results indicate that the proposed system leads to a 17 percent (0.56 m) improvement over the widely used K-nearest neighbor and histogram-based methods.
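The flavor of a kernelized comparison between an RSS observation and training records can be sketched with a per-access-point Gaussian kernel, so one badly faded reading does not dominate the comparison. The kernel width, fingerprints, and location names below are invented for the example, not the paper's calibrated values.

```python
# Sketch: Gaussian-kernel similarity between RSS vectors (dBm per AP).
import math

def kernel_similarity(obs, record, sigma=4.0):
    return sum(math.exp(-((o - r) ** 2) / (2 * sigma ** 2))
               for o, r in zip(obs, record)) / len(obs)

obs = [-45.0, -60.0, -71.0]                    # live readings from 3 APs
fingerprints = {"roomA": [-44.0, -62.0, -70.0],
                "roomB": [-70.0, -48.0, -80.0]}
best = max(fingerprints, key=lambda loc: kernel_similarity(obs, fingerprints[loc]))
```

Contrast with a plain Euclidean distance: the kernel saturates for large per-AP differences instead of letting a single outlier AP dominate.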
Sub-modularity and Antenna Selection in MIMO systems In this paper, we show that the optimal receive antenna subset selection problem for maximizing the mutual information in a point-to-point MIMO system is sub-modular. Consequently, a greedy step-wise optimization approach, where at each step, an antenna that maximizes the incremental gain is added to the existing antenna subset, is guaranteed to be within a (1-1/e)-fraction of the global optimal value independent of all parameters. For a single-antenna-equipped source and destination with multiple relays, we show that the relay antenna selection problem to maximize the mutual information is modular and a greedy step-wise optimization approach leads to an optimal solution.
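The greedy step-wise selection described above can be sketched as follows, using log det(I + SNR · H_S H_Sᵀ) as the submodular mutual-information objective for a real-valued channel matrix; this is an illustrative instance, not the paper's exact system model.

```python
import math

def det(m):
    """Determinant by Gaussian elimination with partial pivoting (small matrices)."""
    n = len(m)
    a = [row[:] for row in m]
    d = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        if abs(a[p][i]) < 1e-12:
            return 0.0
        if p != i:
            a[i], a[p] = a[p], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

def mutual_info(H, subset, snr=1.0):
    """log det(I + snr * H_S H_S^T) over the selected receive antennas (rows of H)."""
    rows = [H[i] for i in subset]
    k = len(rows)
    if k == 0:
        return 0.0
    g = [[sum(rows[i][t] * rows[j][t] for t in range(len(H[0])))
          for j in range(k)] for i in range(k)]
    m = [[(1.0 if i == j else 0.0) + snr * g[i][j] for j in range(k)]
         for i in range(k)]
    return math.log(det(m))

def greedy_select(H, budget, snr=1.0):
    """Greedy antenna selection; (1 - 1/e)-optimal for submodular objectives."""
    chosen = []
    remaining = set(range(len(H)))
    for _ in range(budget):
        best = max(remaining, key=lambda i: mutual_info(H, chosen + [i], snr))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

At each step the antenna with the largest incremental mutual-information gain is added, which is exactly the step-wise rule whose (1-1/e) guarantee follows from submodularity.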
TypeSQL: Knowledge-Based Type-Aware Neural Text-to-SQL Generation. Interacting with relational databases through natural language helps users of any background easily query and analyze a vast amount of data. This requires a system that understands usersu0027 questions and converts them to SQL queries automatically. In this paper we present a novel approach, TypeSQL, which views this problem as a slot filling task. Additionally, TypeSQL utilizes type information to better understand rare entities and numbers in natural language questions. We test this idea on the WikiSQL dataset and outperform the prior state-of-the-art by 5.5% in much less time. We also show that accessing the content of databases can significantly improve the performance when usersu0027 queries are not well-formed. TypeSQL gets 82.6% accuracy, a 17.5% absolute improvement compared to the previous content-sensitive model.
Energy harvesting algorithm considering max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with poor energy always have a bad effect on the data rate or max flow. These nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we assume an energy harvesting WSN environment to investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy harvesting WSN environments, sensor nodes can obtain energy replenishment by using MCs or by collecting energy from nature themselves. In our research, we use MCs to improve the energy of the sensor nodes by performing multiple rounds of unified scheduling, and finally achieve the purpose of increasing the max flow at sinks. Firstly, we model this problem as a Linear Program (LP) to search the max flow in a round of charging scheduling and prove that the problem is NP-hard. In order to solve the problem, we propose a heuristic approach: deploying MCs in units of paths with the lowest-energy node given priority. To reduce the energy consumption of MCs and increase the charging efficiency, we also take the optimization of MCs' moving distance into consideration. Finally, we extend the method to multiple rounds of scheduling, called BottleNeck. Simulation results show that BottleNeck performs well at increasing max flow.
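The max-flow quantity that the charging rounds try to increase can be computed with a standard Edmonds-Karp routine; this is a generic subroutine sketch, not the paper's LP formulation or its BottleNeck heuristic.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths.

    capacity: adjacency matrix of edge capacities.
    """
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            return total  # no augmenting path left
        # bottleneck residual capacity along the path
        push = float('inf')
        v = sink
        while v != source:
            u = parent[v]
            push = min(push, capacity[u][v] - flow[u][v])
            v = u
        # apply the augmentation (negative reverse flow creates back-edges)
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += push
            flow[v][u] -= push
            v = u
        total += push
```

In the paper's setting, edge capacities would be derived from node energy levels, so recharging a bottleneck node effectively raises capacities on its incident edges before re-running a computation like this one.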
Scores: 1.044321, 0.045667, 0.042629, 0.04, 0.04, 0.04, 0.015463, 0.001106, 0.000022, 0, 0, 0, 0, 0
ICRAN: Intelligent Control for Self-Driving RAN Based on Deep Reinforcement Learning Mobile networks are increasingly expected to support use cases with diverse performance expectations at a very high level of reliability. These expectations imply the need for approaches that timely detect and correct performance problems. However, current approaches often focus on optimizing a single performance metric. Here, we aim to address this gap by proposing a novel control framework that maximizes radio resource utilization and minimizes performance degradation in the most challenging part of the cellular architecture, the radio access network (RAN). We devise a method called Intelligent Control for Self-driving RAN (ICRAN) which involves two deep reinforcement learning based approaches that control the RAN in a centralized and a distributed way, respectively. ICRAN defines dual-objective optimization goals that are achieved through a set of diverse control actions. Using extensive discrete event simulations, we confirm that ICRAN succeeds in achieving its design goals, showing a clear edge over competing approaches. We believe that ICRAN is implementable and can serve as an important point on the way to realizing self-driving mobile networks.
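ICRAN's controllers are deep-RL based; as a simplified, hypothetical stand-in, a tabular Q-learning loop illustrates the underlying value-update mechanics. The environment interface (`env_step`) and the chain task used in the example are invented for illustration only.

```python
import random

def q_learning(env_step, states, actions, episodes=500,
               alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration.

    env_step(s, a) -> (next_state, reward, done); episodes start at states[0].
    """
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = states[0]
        done = False
        while not done:
            # epsilon-greedy action selection
            a = (rng.choice(actions) if rng.random() < eps
                 else max(actions, key=lambda b: Q[(s, b)]))
            s2, r, done = env_step(s, a)
            # one-step temporal-difference target
            target = r + (0.0 if done else gamma * max(Q[(s2, b)] for b in actions))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q
```

ICRAN replaces the table with a deep network over RAN state, but the reward-driven update toward a bootstrapped target is the same mechanism.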
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
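The method described (BLEU) combines modified (clipped) n-gram precision with a brevity penalty; a compact sentence-level sketch of the standard computation:

```python
import math
from collections import Counter

def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram precisions
    times a brevity penalty. candidate: token list; references: token lists."""
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(candidate[i:i + n])
                              for i in range(len(candidate) - n + 1))
        # per n-gram, the maximum count over all references (for clipping)
        max_ref = Counter()
        for ref in references:
            ref_ngrams = Counter(tuple(ref[i:i + n])
                                 for i in range(len(ref) - n + 1))
            for g, c in ref_ngrams.items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        if clipped == 0:
            return 0.0  # any zero precision zeroes the geometric mean
        log_prec_sum += math.log(clipped / total) / max_n
    # brevity penalty against the closest reference length
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c >= r else math.exp(1 - r / c)
    return bp * math.exp(log_prec_sum)
```

Clipping is what prevents a candidate from gaming precision by repeating a correct word, and the brevity penalty is what prevents gaming it with very short outputs.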
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
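A forward pass of the bidirectional structure can be sketched by running one Elman-style recurrence in each time direction and concatenating the hidden states per step. The weights and dimensions below are illustrative; the paper's training procedure and output layer are omitted.

```python
import math, random

def rnn_pass(inputs, W_ih, W_hh, reverse=False):
    """One directional tanh recurrence; returns a hidden vector per time step."""
    hidden_size = len(W_hh)
    seq = list(reversed(inputs)) if reverse else inputs
    h = [0.0] * hidden_size
    outs = []
    for x in seq:
        h = [math.tanh(sum(W_ih[j][k] * x[k] for k in range(len(x))) +
                       sum(W_hh[j][k] * h[k] for k in range(hidden_size)))
             for j in range(hidden_size)]
        outs.append(h)
    # re-align backward-direction outputs with the original time order
    return list(reversed(outs)) if reverse else outs

def brnn_forward(inputs, fwd_weights, bwd_weights):
    """Concatenate forward-time and backward-time hidden states per step."""
    f = rnn_pass(inputs, *fwd_weights)
    b = rnn_pass(inputs, *bwd_weights, reverse=True)
    return [hf + hb for hf, hb in zip(f, b)]
```

Because the backward pass starts from the end of the sequence, every output position sees both past and future context, which is the point of the bidirectional extension.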
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended for Bob, and non-repudiation of receipt evidences destined for Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with trusted third party (TTP). For the latter ones we discuss the evolution of the TTP's involvement and, among others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probability. Because there are no crossover rate and mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than the conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results over the conventional one and other methods.
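A sketch of a GA whose crossover and mutation fire on conditions rather than with fixed rates, applied to set covering. The specific conditions below (crossover only when parents differ; mutate only children that fail to cover the universe) are illustrative assumptions standing in for the paper's conditional operators, which are not spelled out in the abstract.

```python
import random

def solve_set_cover(universe, subsets, pop_size=30, generations=200, seed=1):
    """GA for set covering with conditional instead of probabilistic operators.

    Individuals are bit lists selecting subsets; fitness penalizes uncovered
    elements heavily, then minimizes the number of subsets used.
    """
    rng = random.Random(seed)
    n = len(subsets)

    def covered(ind):
        got = set()
        for i, bit in enumerate(ind):
            if bit:
                got |= subsets[i]
        return got

    def fitness(ind):
        miss = len(universe - covered(ind))
        return miss * len(universe) * n + sum(ind)

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        nxt = pop[:2]  # elitism: keep the two best unchanged
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)
            if a != b:                      # conditional crossover
                cut = rng.randrange(1, n)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            if universe - covered(child):   # conditional mutation
                child[rng.randrange(n)] ^= 1
            nxt.append(child)
        pop = nxt
    best = min(pop, key=fitness)
    return [i for i, bit in enumerate(best) if bit]
```

With no crossover rate or mutation rate to tune, the only remaining knobs are population size and generation count, which matches the abstract's claim of easier application.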
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal to develop interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which arise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in specific, in this paper, we enfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidating it based on our comprehensive empirical experience with research in kindergarten settings, in both laboratory and real-world contexts, and (iii) providing a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A Novel Framework of Multi-Hop Wireless Charging for Sensor Networks Using Resonant Repeaters. Wireless charging has provided a convenient alternative to renew nodes’ energy in wireless sensor networks. Due to physical limitations, previous works have only considered recharging a single node at a time, which has limited efficiency and scalability. Recent advances in multi-hop wireless charging are gaining momentum and provide fundamental support to address this problem. However, existing single-node charging designs do not consider and cannot take advantage of such opportunities. In this paper, we propose a new framework to enable multi-hop wireless charging using resonant repeaters. First, we present a realistic model that accounts for detailed physical factors to calculate charging efficiencies. Second, to achieve balance between energy efficiency and data latency, we propose a hybrid data gathering strategy that combines static and mobile data gathering to overcome their respective drawbacks and provide theoretical analysis. Then, we formulate the multi-hop recharge schedule into a bi-objective NP-hard optimization problem. We propose a two-step approximation algorithm that first finds the minimum charging cost and then calculates the charging vehicles’ moving costs with bounded approximation ratios. Finally, upon discovering more room to reduce the total system cost, we develop a post-optimization algorithm that iteratively adds more stopping locations for charging vehicles to further improve the results while ensuring none of the nodes will deplete battery energy. Our extensive simulations show that the proposed algorithms can handle dynamic energy demands effectively, and can cover at least three times as many nodes and reduce service interruption time by an order of magnitude compared to the single-node charging scheme.
Mobility in wireless sensor networks - Survey and proposal. Targeting an increasing number of potential application domains, wireless sensor networks (WSN) have been the subject of intense research, in an attempt to optimize their performance while guaranteeing reliability in highly demanding scenarios. However, hardware constraints have limited their application, and real deployments have demonstrated that WSNs have difficulties in coping with complex communication tasks – such as mobility – in addition to application-related tasks. Mobility support in WSNs is crucial for a very high percentage of application scenarios and, most notably, for the Internet of Things. It is, thus, important to know the existing solutions for mobility in WSNs, identifying their main characteristics and limitations. With this in mind, we firstly present a survey of models for mobility support in WSNs. We then present the Network of Proxies (NoP) assisted mobility proposal, which relieves resource-constrained WSN nodes from the heavy procedures inherent to mobility management. The presented proposal was implemented and evaluated in a real platform, demonstrating not only its advantages over conventional solutions, but also its very good performance in the simultaneous handling of several mobile nodes, leading to high handoff success rate and low handoff time.
Tag-based cooperative data gathering and energy recharging in wide area RFID sensor networks The Wireless Identification and Sensing Platform (WISP) conjugates the identification potential of the RFID technology and the sensing and computing capability of the wireless sensors. Practical issues, such as the need of periodically recharging WISPs, challenge the effective deployment of large-scale RFID sensor networks (RSNs) consisting of RFID readers and WISP nodes. In this view, the paper proposes cooperative solutions to energize the WISP devices in a wide-area sensing network while reducing the data collection delay. The main novelty is the fact that both data transmissions and energy transfer are based on the RFID technology only: RFID mobile readers gather data from the WISP devices, wirelessly recharge them, and mutually cooperate to reduce the data delivery delay to the sink. Communication between mobile readers relies on two proposed solutions: a tag-based relay scheme, where RFID tags are exploited to temporarily store sensed data at pre-determined contact points between the readers; and a tag-based data channel scheme, where the WISPs are used as a virtual communication channel for real time data transfer between the readers. Both solutions require: (i) clustering the WISP nodes; (ii) dimensioning the number of required RFID mobile readers; (iii) planning the tour of the readers under the energy and time constraints of the nodes. A simulative analysis demonstrates the effectiveness of the proposed solutions when compared to non-cooperative approaches. Differently from classic schemes in the literature, the solutions proposed in this paper better cope with scalability issues, which is of utmost importance for wide area networks.
Improving charging capacity for wireless sensor networks by deploying one mobile vehicle with multiple removable chargers. Wireless energy transfer is a promising technology to prolong the lifetime of wireless sensor networks (WSNs), by employing charging vehicles to replenish energy to lifetime-critical sensors. Existing studies on sensor charging assumed that one or multiple charging vehicles are deployed. Such an assumption may have its limitations for a real sensor network. On one hand, it usually is insufficient to employ just one vehicle to charge many sensors in a large-scale sensor network due to the limited charging capacity of the vehicle or energy expirations of some sensors prior to the arrival of the charging vehicle. On the other hand, although the employment of multiple vehicles can significantly improve the charging capability, it is too costly in terms of the initial investment and maintenance costs on these vehicles. In this paper, we propose a novel charging model in which a charging vehicle can carry multiple low-cost removable chargers, each powered by a portable high-volume battery. When there are energy-critical sensors to be charged, the vehicle can carry the chargers to charge multiple sensors simultaneously, by placing one portable charger in the vicinity of one sensor. Under this novel charging model, we study the scheduling problem of the charging vehicle so that both the dead duration of sensors and the total travel distance of the mobile vehicle per tour are minimized. Since this problem is NP-hard, we propose a (3+ϵ)-approximation algorithm if the residual lifetime of each sensor can be ignored; otherwise, we devise a novel heuristic algorithm, where ϵ is a given constant with 0 < ϵ ≤ 1. Finally, we evaluate the performance of the proposed algorithms through experimental simulations. Experimental results show that the performance of the proposed algorithms is very promising.
Speed control of mobile chargers serving wireless rechargeable networks. Wireless rechargeable networks have attracted increasing research attention in recent years. For charging service, a mobile charger is often employed to move across the network and charge all network nodes. To reduce the charging completion time, most existing works have used the “move-then-charge” model where the charger first moves to specific spots and then starts charging nodes nearby. As a result, these works often aim to reduce the moving delay or charging delay at the spots. However, the charging opportunity on the move is largely overlooked because the charger can charge network nodes while moving, which as we analyze in this paper, has the potential to greatly reduce the charging completion time. The major challenge to exploit the charging opportunity is the setting of the moving speed of the charger. When the charger moves slow, the charging delay will be reduced (more energy will be charged during the movement) but the moving delay will increase. To deal with this challenge, we formulate the problem of delay minimization as a Traveling Salesman Problem with Speed Variations (TSP-SV) which jointly considers both charging and moving delay. We further solve the problem using linear programming to generate (1) the moving path of the charger, (2) the moving speed variations on the path and (3) the stay time at each charging spot. We also discuss possible ways to reduce the calculation complexity. Extensive simulation experiments are conducted to study the delay performance under various scenarios. The results demonstrate that our proposed method achieves much less completion time compared to the state-of-the-art work.
A Prediction-Based Charging Policy and Interference Mitigation Approach in the Wireless Powered Internet of Things The Internet of Things (IoT) technology has recently drawn more attention due to its ability to achieve the interconnection of massive physical devices. However, how to provide a reliable power supply to energy-constrained devices and improve the energy efficiency in the wireless powered IoT (WP-IoT) is a twofold challenge. In this paper, we develop a novel wireless power transmission (WPT) system, where an unmanned aerial vehicle (UAV) equipped with a radio frequency energy transmitter charges the IoT devices. A machine learning framework of echo state networks together with an improved k-means clustering algorithm is used to predict the energy consumption and cluster all the sensor nodes at the next period, thus automatically determining the charging strategy. The energy obtained from the UAV by WPT enables the IoT devices to communicate with each other. In order to improve the energy efficiency of the WP-IoT system, the interference mitigation problem is modeled as a mean field game, where an optimal power control policy is presented to adapt to and analyze the large number of sensor nodes randomly deployed in the WP-IoT. The numerical results verify that our proposed dynamic charging policy effectively reduces the data packet loss rate, and that the optimal power control policy greatly mitigates the interference and improves the energy efficiency of the whole network.
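The clustering step can be illustrated with plain Lloyd's k-means over sensor-node feature points; the paper's improved variant and the echo-state-network predictor are not reproduced here.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means on tuples; returns (centroids, clusters)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from the data
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # update step: move each centroid to its cluster mean
        new = []
        for j in range(k):
            if clusters[j]:
                dim = len(points[0])
                new.append(tuple(sum(p[d] for p in clusters[j]) / len(clusters[j])
                                 for d in range(dim)))
            else:
                new.append(centroids[j])  # keep an empty cluster's centroid
        if new == centroids:
            break  # converged
        centroids = new
    return centroids, clusters
```

In the paper's pipeline, each resulting cluster would define a group of nodes visited together by the UAV, with the predicted per-cluster energy demand driving the charging schedule.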
Design of Self-sustainable Wireless Sensor Networks with Energy Harvesting and Wireless Charging Energy provisioning plays a key role in the sustainable operations of Wireless Sensor Networks (WSNs). Recent efforts deploy multi-source energy harvesting sensors to utilize ambient energy. Meanwhile, wireless charging is a reliable energy source not affected by spatial-temporal ambient dynamics. This article integrates multiple energy provisioning strategies and adaptive adjustment to accomplish self-sustainability under complex weather conditions. We design and optimize a three-tier framework with the first two tiers focusing on the planning problems of sensors with various types and distributed energy storage powered by environmental energy. Then we schedule the Mobile Chargers (MC) between different charging activities and propose an efficient 4-factor approximation algorithm. Finally, we adaptively adjust the algorithms to capture real-time energy profiles and jointly optimize those correlated modules. Our extensive simulations demonstrate significant improvement of network lifetime, an increase of harvested energy (15%), a reduction of network cost (30%), and an improvement of the charging capability of the MC by 100%.
A Survey on Mobility and Mobility-Aware MAC Protocols in Wireless Sensor Networks. In wireless sensor networks nodes can be static or mobile, depending on the application requirements. Dealing with mobility can pose some formidable challenges in protocol design, particularly, at the link layer. These difficulties require mobility adaptation algorithms to localize mobile nodes and predict the quality of link that can be established with them. This paper surveys the current state-...
The Evolution of Sink Mobility Management in Wireless Sensor Networks: A Survey Sink mobility has long been recognized as an efficient method of improving system performance in wireless sensor networks (WSNs), e.g. relieving traffic burden from a specific set of nodes. Though tremendous research efforts have been devoted to this topic during the last decades, little attention has been paid to summarizing and guiding this research. This paper aims to fill in the blank and presents an up-to-date survey on the sink mobility issue. Its main contribution is to review mobility management schemes from an evolutionary point of view. The related schemes have been divided into four categories: uncontrollable mobility (UMM), path-restricted mobility (PRM), location-restricted mobility (LRM) and unrestricted mobility (URM). Several representative solutions are described following the proposed taxonomy. To help readers comprehend the development flow within each category, the relationship among different solutions is outlined, with detailed descriptions as well as in-depth analysis. In this way, besides some potential extensions based on current research, we are able to identify several open issues that receive little attention or remain unexplored so far.
Theory and Applications of Robust Optimization In this paper we survey the primary research, both theoretical and applied, in the area of robust optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
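As a concrete instance of the modeling power surveyed above, the classical robust counterpart of a linear constraint under box (interval) uncertainty has a simple closed form. This is a standard textbook result, not a contribution of the survey itself:

```latex
% Uncertain linear constraint, to hold for every realization of a:
a^\top x \le b \quad \forall\, a \in \mathcal{U},
\qquad
\mathcal{U} = \{\bar a + \operatorname{diag}(\delta)\,u \;:\; \|u\|_\infty \le 1\}
% The worst case over U is attained at u_i = \operatorname{sign}(x_i),
% yielding the deterministic robust counterpart:
\bar a^\top x + \sum_i \delta_i\,|x_i| \le b
```

The counterpart is convex and linear-programming representable (by splitting $|x_i|$ into positive and negative parts), which illustrates the "computational attractiveness" the survey emphasizes.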
A survey of mobile cloud computing: architecture, applications, and approaches. Together with an explosive growth of the mobile applications and emerging of cloud computing concept, mobile cloud computing (MCC) has been introduced to be a potential technology for mobile services. MCC integrates the cloud computing into the mobile environment and overcomes obstacles related to the performance (e.g., battery life, storage, and bandwidth), environment (e.g., heterogeneity, scalability, and availability), and security (e.g., reliability and privacy) discussed in mobile computing. This paper gives a survey of MCC, which helps general readers have an overview of the MCC including the definition, architecture, and applications. The issues, existing solutions, and approaches are presented. In addition, the future research directions of MCC are discussed. Copyright (c) 2011 John Wiley & Sons, Ltd.
Ontology-based methods for enhancing autonomous vehicle path planning We report the results of a first implementation demonstrating the use of an ontology to support reasoning about obstacles to improve the capabilities and performance of on-board route planning for autonomous vehicles. This is part of an overall effort to evaluate the performance of ontologies in different components of an autonomous vehicle within the 4D/RCS system architecture developed at NIST. Our initial focus has been on simple roadway driving scenarios where the controlled vehicle encounters potential obstacles in its path. As reported elsewhere [C. Schlenoff, S. Balakirsky, M. Uschold, R. Provine, S. Smith, Using ontologies to aid navigation planning in autonomous vehicles, Knowledge Engineering Review 18 (3) (2004) 243–255], our approach is to develop an ontology of objects in the environment, in conjunction with rules for estimating the damage that would be incurred by collisions with different objects in different situations. Automated reasoning is used to estimate collision damage; this information is fed to the route planner to help it decide whether to plan to avoid the object. We describe the results of the first implementation that integrates the ontology, the reasoner and the planner. We describe our insights and lessons learned and discuss resulting changes to our approach.
Ear recognition: More than a survey. Automatic identity recognition from ear images represents an active field of research within the biometric community. The ability to capture ear images from a distance and in a covert manner makes the technology an appealing choice for surveillance and security applications as well as other application domains. Significant contributions have been made in the field over recent years, but open research problems still remain and hinder a wider (commercial) deployment of the technology. This paper presents an overview of the field of automatic ear recognition (from 2D images) and focuses specifically on the most recent, descriptor-based methods proposed in this area. Open challenges are discussed and potential research directions are outlined with the goal of providing the reader with a point of reference for issues worth examining in the future. In addition to a comprehensive review on ear recognition technology, the paper also introduces a new, fully unconstrained dataset of ear images gathered from the web and a toolbox implementing several state-of-the-art techniques for ear recognition. The dataset and toolbox are meant to address some of the open issues in the field and are made publicly available to the research community.
Active Suspension Control of Quarter-Car System With Experimental Validation A reliable, efficient, and simple control is presented and validated for a quarter-car active suspension system equipped with an electro-hydraulic actuator. Unlike the existing techniques, this control does not use any function approximation, e.g., neural networks (NNs) or fuzzy-logic systems (FLSs), while the unmodeled dynamics, including the hydraulic actuator behavior, can be accommodated effectively. Hence, the heavy computational costs and tedious parameter tuning phase can be remedied. Moreover, both the transient and steady-state suspension performance can be retained by incorporating prescribed performance functions (PPFs) into the control implementation. This guaranteed performance is particularly useful for guaranteeing the safe operation of suspension systems. Apart from theoretical studies, some practical considerations of control implementation and several parameter tuning guidelines are suggested. Experimental results based on a practical quarter-car active suspension test-rig demonstrate that this control can obtain a superior performance and has better computational efficiency over several other control methods.
score_0..score_13: 1.073556, 0.066667, 0.066667, 0.066667, 0.066667, 0.066667, 0.066667, 0.034, 0.013333, 0, 0, 0, 0, 0
Routing Algorithms for MANET-IoT Networks: A Comprehensive Survey With the powerful evolution of wireless communication systems in recent years, mobile ad hoc networks (MANET) are increasingly applied in many fields such as environment, energy efficiency, intelligent transport systems, smart agriculture, and IoT ecosystems, and are expected to play an increasingly important role in the future Internet. However, owing to the characteristics of the mobile ad hoc environment, performance depends mainly on the deployed routing protocol and is relatively low. Therefore, routing protocols should be more flexible and intelligent to enhance network performance. This paper surveys and analyses a series of recently proposed routing protocols for MANET-IoT networks. Results show that these protocols fall into four main categories: performance improvement, quality of service (QoS-aware), energy-saving, and security-aware. Most protocols are evolved from existing traditional protocols. We then compare the performance of four traditional routing protocols under different node movement speeds, aiming to determine the most stable routing protocol in smart-city environments. The experimental results show that the proactive protocol works well when node movement is low, whereas the reactive protocols are more stable and perform better in high-mobility scenarios. Thus, we confirm that routing protocols for MANET are more suitably proposed by improving the ad hoc on-demand distance vector (AODV) routing protocol. This study is the premise for our further in-depth research on IoT ecosystems.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that cannot be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is a need for quick or frequent evaluations.
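The modified n-gram precision and brevity penalty at the heart of the method can be sketched as follows. This is a minimal single-reference toy version for illustration only; the actual metric is computed at corpus level over multiple references:

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Single-reference BLEU sketch: geometric mean of modified n-gram
    precisions times a brevity penalty (toy version, not the full
    corpus-level, multi-reference metric)."""
    log_precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
        # "Modified" precision: clip each candidate n-gram count by its
        # count in the reference, so repeated words cannot inflate the score.
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        log_precisions.append(math.log(max(overlap, 1e-9) / total))
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(candidate) > len(reference) else math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(log_precisions) / max_n)
```

An identical candidate scores 1.0, while a short partial match is penalized by both the clipped precisions and the brevity penalty.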
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
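The bidirectional structure described above can be sketched with a plain NumPy recurrence: the same vanilla-RNN update is run once forward and once backward in time, and the two hidden states are concatenated per step. The weights and dimensions below are arbitrary stand-ins, not the trained networks of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_pass(xs, W_x, W_h, reverse=False):
    """One directional vanilla-RNN pass; returns the hidden state at each step."""
    order = range(len(xs) - 1, -1, -1) if reverse else range(len(xs))
    h = np.zeros(W_h.shape[0])
    hs = [None] * len(xs)
    for t in order:
        h = np.tanh(W_x @ xs[t] + W_h @ h)
        hs[t] = h
    return hs

def brnn(xs, p):
    """Bidirectional RNN sketch: run the recurrence forward and backward in
    time and concatenate the two hidden states per step, so each output
    depends on both past and future inputs (no preset future-frame limit)."""
    fwd = rnn_pass(xs, p["Wx_f"], p["Wh_f"])
    bwd = rnn_pass(xs, p["Wx_b"], p["Wh_b"], reverse=True)
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

# Hypothetical dimensions: 3 input features, 4 hidden units per direction.
params = {k: rng.normal(scale=0.5, size=(4, 3 if "x" in k else 4))
          for k in ("Wx_f", "Wh_f", "Wx_b", "Wh_b")}
xs = [rng.normal(size=3) for _ in range(5)]  # toy 5-step sequence
out = brnn(xs, params)
```

Each of the 5 outputs is an 8-dimensional vector (forward and backward states concatenated).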
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended to Bob, and non-repudiation of receipt evidences destined to Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with trusted third party (TTP). For the later ones we discuss the evolution of the TTP's involvement and, between others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probability. Because there are no crossover rate and mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than the conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results over the conventional one and other methods.
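The idea of conditional (rather than probabilistic) operators can be sketched on a one-max toy objective. The firing conditions below are illustrative assumptions, since the abstract does not spell out the paper's exact rules, and the set-covering application is replaced by a simple bit-counting fitness:

```python
import random

random.seed(1)

def conditional_ga(fitness, n_bits=20, pop_size=30, generations=60):
    """GA sketch with conditional operators: rather than fixed crossover
    and mutation rates, crossover fires only when the two parents differ,
    and a child is mutated only when it fails to improve on its better
    parent. These conditions are illustrative stand-ins."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:2]  # elitism: keep the two best individuals
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)
            if p1 != p2:  # conditional crossover
                cut = random.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            if fitness(child) <= max(fitness(p1), fitness(p2)):  # conditional mutation
                i = random.randrange(n_bits)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# One-max toy objective: maximize the number of 1-bits.
best = conditional_ga(sum)
```

Because the operators fire on conditions instead of tuned rates, there are no crossover/mutation probabilities to select, which is the point the paper makes.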
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal to develop interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which rise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in specific, in this paper, we enfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidate it based on our comprehensive empirical experience with research in kindergarten settings, both laboratory and real-world contexts, and (iii) provide a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
score_0..score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A Virtual-Real Interaction Approach to Object Instance Segmentation in Traffic Scenes Object instance segmentation in traffic scenes is an important research topic. For training instance segmentation models, synthetic data can potentially complement real data, alleviating manual effort on annotating real images. However, the data distribution discrepancy between synthetic data and real data hampers the wide applications of synthetic data. In light of that, we propose a virtual-real interaction method for object instance segmentation. This method works over synthetic images with accurate annotations and real images without any labels. The virtual-real interaction guides the model to learn useful information from synthetic data while keeping consistent with real data. We first analyze the data distribution discrepancy from a probabilistic perspective, and divide it into image-level and instance-level discrepancies. Then, we design two components to align these discrepancies, i.e., global-level alignment and local-level alignment. Furthermore, a consistency alignment component is proposed to encourage the consistency between the global-level and the local-level alignment components. We evaluate the proposed approach on the real Cityscapes dataset by adapting from virtual SYNTHIA, Virtual KITTI, and VIPER datasets. The experimental results demonstrate that it achieves significantly better performance than state-of-the-art methods.
Training and Testing Object Detectors with Virtual Images. In the area of computer vision, deep learning has produced a variety of state-of-the-art models that rely on massive labeled data. However, collecting and annotating images from the real world is too demanding in terms of labor and money investments, and is usually inflexible to build datasets with specific characteristics, such as small area of objects and high occlusion level. Under the framewor...
Soft-Weighted-Average Ensemble Vehicle Detection Method Based on Single-Stage and Two-Stage Deep Learning Models The deep learning object detection algorithms have become one of the powerful tools for road vehicle detection in autonomous driving. However, the limitation of the number of high-quality labeled training samples makes the single-object detection algorithms unable to achieve satisfactory accuracy in road vehicle detection. In this paper, by comparing the pros and cons of various object detection algorithms, two different algorithms with a different emphasis are selected for a weighted ensemble. Besides, a new ensemble method named the Soft-Weighted-Average method is proposed. The proposed method is attenuated by the confidence, and it “punishes” the detection result of the corresponding relationship by the confidence attenuation, instead of by deleting the output of a certain model. The proposed method can further reduce the vehicle misdetection of the target detection algorithm, obtaining a better detection result. Lastly, the ensemble method can achieve an average accuracy of 94.75% for simple targets, which makes it the third-ranked method in the KITTI evaluation system.
Deep Multi-Modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges Recent advancements in perception for autonomous driving are driven by deep learning. In order to achieve robust and accurate scene understanding, autonomous vehicles are usually equipped with different sensors (e.g. cameras, LiDARs, Radars), and multiple sensing modalities can be fused to exploit their complementary properties. In this context, many methods have been proposed for deep multi-modal perception problems. However, there is no general guideline for network architecture design, and questions of “what to fuse”, “when to fuse”, and “how to fuse” remain open. This review paper attempts to systematically summarize methodologies and discuss challenges for deep multi-modal object detection and semantic segmentation in autonomous driving. To this end, we first provide an overview of on-board sensors on test vehicles, open datasets, and background information for object detection and semantic segmentation in autonomous driving research. We then summarize the fusion methodologies and discuss challenges and open questions. In the appendix, we provide tables that summarize topics and methods. We also provide an interactive online platform to navigate each reference: https://boschresearch.github.io/multimodalperception/.
SUPER: A Novel Lane Detection System AI-based lane detection algorithms were actively studied over the last few years. Many have demonstrated superior performance compared with traditional feature-based methods. However, most methods remain riddled with assumptions and limitations, still not good enough for safe and reliable driving in the real world. In this paper, we propose a novel lane detection system, called Scene Understanding...
Towards collaborative robotics in top view surveillance: A framework for multiple object tracking by detection using deep learning Collaborative Robotics is one of the high-interest research topics in the area of academia and industry. It has been progressively utilized in numerous applications, particularly in intelligent surveillance systems. It allows the deployment of smart cameras or optical sensors with computer vision techniques, which may serve in several object detection and tracking tasks. These tasks have been cons...
Perimeter Control of Multiregion Urban Traffic Networks With Time-Varying Delays In this paper, an adaptive perimeter control problem is studied for urban traffic networks with multiple regions, time-varying state, and input delays. After defining state variables by partitioning the accumulation variable of each region, a system model is formulated as nonlinear ordinary differential equations based on the concept of macroscopic fundamental diagram. Both the travel times of vehicles and the evacuation process of traffic jams are first introduced into the system dynamics, and they are modeled as input and state delays, respectively. The control objective is to stabilize the number of vehicles in each region to desired values. By employing the model reference adaptive control scheme and asymptotical sliding mode technique, two filters and adaptive laws for control parameters are designed by using only the information of the reference model. With properly constructed Lyapunov functions, the stability of tracking error with regard to the reference signals is analyzed. Lastly, a simulation example is given to demonstrate the effectiveness of the proposed methods.
Online Optimal Control of Affine Nonlinear Discrete-Time Systems With Unknown Internal Dynamics by Using Time-Based Policy Update. In this paper, the Hamilton-Jacobi-Bellman equation is solved forward-in-time for the optimal control of a class of general affine nonlinear discrete-time systems without using value and policy iterations. The proposed approach, referred to as adaptive dynamic programming, uses two neural networks (NNs), to solve the infinite horizon optimal regulation control of affine nonlinear discrete-time sys...
An introduction to ROC analysis Receiver operating characteristics (ROC) graphs are useful for organizing classifiers and visualizing their performance. ROC graphs are commonly used in medical decision making, and in recent years have been used increasingly in machine learning and data mining research. Although ROC graphs are apparently simple, there are some common misconceptions and pitfalls when using them in practice. The purpose of this article is to serve as an introduction to ROC graphs and as a guide for using them in research.
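The construction of an ROC graph from classifier scores can be sketched as follows. `roc_points_and_auc` is a hypothetical helper name; it assumes both classes are present and gives tied scores no special treatment:

```python
def roc_points_and_auc(scores, labels):
    """ROC sketch: sort by decreasing score, sweep the threshold past one
    example at a time, record (FPR, TPR) points, and integrate with the
    trapezoid rule to obtain AUC."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label:
            tp += 1  # a positive crosses the threshold: TPR rises
        else:
            fp += 1  # a negative crosses the threshold: FPR rises
        points.append((fp / neg, tp / pos))
    auc = sum((x2 - x1) * (y1 + y2) / 2
              for (x1, y1), (x2, y2) in zip(points, points[1:]))
    return points, auc
```

A classifier that ranks every positive above every negative traces the curve through (0, 1) and attains AUC = 1; a perfectly inverted ranking attains AUC = 0.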
Data MULEs: modeling and analysis of a three-tier architecture for sparse sensor networks This paper presents and analyzes a three-tier architecture for collecting sensor data in sparse sensor networks. Our approach exploits the presence of mobile entities (called MULEs) present in the environment. When in close range, MULEs pick up data from the sensors, buffer it, and deliver it to wired access points. This can lead to substantial power savings at the sensors as they only have to transmit over a short-range. This paper focuses on a simple analytical model for understanding performance as system parameters are scaled. Our model assumes a two-dimensional random walk for mobility and incorporates key system variables such as number of MULEs, sensors and access points. The performance metrics observed are the data success rate (the fraction of generated data that reaches the access points), latency and the required buffer capacities on the sensors and the MULEs. The modeling and simulation results can be used for further analysis and provide certain guidelines for deployment of such systems.
Multistep-Ahead time series prediction Multistep-ahead prediction is the task of predicting a sequence of values in a time series. A typical approach, known as multi-stage prediction, is to apply a predictive model step-by-step and use the predicted value of the current time step to determine its value in the next time step. This paper examines two alternative approaches known as independent value prediction and parameter prediction. The first approach builds a separate model for each prediction step using the values observed in the past. The second approach fits a parametric function to the time series and builds models to predict the parameters of the function. We perform a comparative study on the three approaches using multiple linear regression, recurrent neural networks, and a hybrid of hidden Markov model with multiple linear regression. The advantages and disadvantages of each approach are analyzed in terms of their error accumulation, smoothness of prediction, and learning difficulty.
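The multi-stage (recursive) and independent-value strategies compared above can be sketched with a simple least-squares autoregression. The AR model and the toy ramp series are illustrative stand-ins for the models studied in the paper:

```python
import numpy as np

def fit_ar(series, lags):
    """Least-squares AR fit with intercept: predict x[t] from the previous
    `lags` values."""
    X = np.array([series[t - lags:t] for t in range(lags, len(series))])
    y = np.array(series[lags:])
    coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return coef

def recursive_forecast(series, lags, horizon):
    """Multi-stage strategy: one model, repeatedly fed its own predictions,
    so errors can accumulate over the horizon."""
    coef = fit_ar(series, lags)
    window = list(series[-lags:])
    out = []
    for _ in range(horizon):
        pred = float(np.dot(coef[:-1], window) + coef[-1])
        out.append(pred)
        window = window[1:] + [pred]
    return out

def independent_forecast(series, lags, horizon):
    """Independent-value strategy: a separate model per step h, trained to
    map x[t-lags:t] directly to x[t+h-1], avoiding error accumulation."""
    out = []
    for h in range(1, horizon + 1):
        X = np.array([series[t - lags:t] for t in range(lags, len(series) - h + 1)])
        y = np.array(series[lags + h - 1:])
        coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
        out.append(float(np.dot(coef[:-1], series[-lags:]) + coef[-1]))
    return out

series = [float(t) for t in range(30)]  # toy linear ramp
rec = recursive_forecast(series, 3, 3)
ind = independent_forecast(series, 3, 3)
```

On the noiseless ramp both strategies extrapolate exactly; their difference shows up on noisy series, where the recursive variant compounds its own prediction errors.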
Charge selection algorithms for maximizing sensor network life with UAV-based limited wireless recharging Monitoring bridges with wireless sensor networks aids in detecting failures early, but faces power challenges in ensuring reasonable network lifetimes. Recharging select nodes with Unmanned Aerial Vehicles (UAVs) provides a solution that currently can recharge a single node. However, questions arise on the effectiveness of a limited recharging system, the appropriate node to recharge, and the best sink selection algorithm for improving network lifetime given a limited recharging system. This paper simulates such a network in order to answer those questions. It explores five different sink positioning algorithms to find which provides the longest network lifetime with the added capability of limited recharging. For a range of network sizes, our results show that network lifetime improves by over 350% when recharging a single node in the network, the best node to recharge is the one with the lowest power level, and that either the Greedy Heuristic or LP sink selection algorithms perform equally well.
AIMOES: Archive information assisted multi-objective evolutionary strategy for ab initio protein structure prediction. •The protein structure prediction (PSP) problem is treated as a multi-objective optimization problem.•A multi-objective evolutionary algorithm that reuses past search experience to enhance search capacity is proposed.•A novel method to measure the similarity between two proteins’ conformations in genotype space is introduced.•A complete test on 25 proteins is carried out to verify the effectiveness of the reusing strategy.
Dual-objective mixed integer linear program and memetic algorithm for an industrial group scheduling problem Group scheduling problems have attracted much attention owing to their many practical applications. This work proposes a new bi-objective serial-batch group scheduling problem considering the constraints of sequence-dependent setup time, release time, and due time. It is originated from an important industrial process, i.e., wire rod and bar rolling process in steel production systems. Two objecti...
score_0..score_13: 1.084444, 0.08, 0.08, 0.066667, 0.066667, 0.066667, 0.033333, 0.013333, 0, 0, 0, 0, 0, 0
Analysis of feature selection stability on high dimension and small sample data Feature selection is an important step when building a classifier on high dimensional data. As the number of observations is small, the feature selection tends to be unstable. It is common that two feature subsets, obtained from different datasets but dealing with the same classification problem, do not overlap significantly. Although it is a crucial problem, few works have been done on the selection stability. The behavior of feature selection is analyzed in various conditions, not exclusively but with a focus on t -score based feature selection approaches and small sample data. The analysis is in three steps: the first one is theoretical using a simple mathematical model; the second one is empirical and based on artificial data; and the last one is based on real data. These three analyses lead to the same results and give a better understanding of the feature selection problem in high dimension data.
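One way to quantify the selection instability discussed above is to repeat a top-k t-score selection on random subsamples and measure how much the selected subsets overlap. The Jaccard-based index and the toy data below are illustrative choices, not the paper's exact protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

def t_scores(X, y):
    """Two-class t-like score per feature: |mean difference| / pooled std error."""
    a, b = X[y == 0], X[y == 1]
    sd = np.sqrt(a.var(axis=0) / len(a) + b.var(axis=0) / len(b)) + 1e-12
    return np.abs(a.mean(axis=0) - b.mean(axis=0)) / sd

def selection_stability(X, y, k=10, runs=20):
    """Stability sketch: select the top-k features by t-score on random 80%
    subsamples and report the mean pairwise Jaccard overlap of the selected
    subsets (one of several stability indices in the literature)."""
    subsets = []
    for _ in range(runs):
        idx = rng.choice(len(y), size=int(0.8 * len(y)), replace=False)
        scores = t_scores(X[idx], y[idx])
        subsets.append(set(np.argsort(scores)[-k:]))
    overlaps = [len(s & t) / len(s | t)
                for i, s in enumerate(subsets) for t in subsets[i + 1:]]
    return float(np.mean(overlaps))

# Toy high-dimension, small-sample data: 40 samples, 50 features,
# with hypothetical informative features 0-9 shifted for class 1.
X = rng.normal(size=(40, 50))
y = np.array([0] * 20 + [1] * 20)
X[y == 1, :10] += 3.0
stab = selection_stability(X, y, k=10, runs=10)
```

With strongly informative features the index stays near 1; on pure noise with few samples it drops, which is the instability the paper analyzes.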
A space-time delay neural network model for travel time prediction. Research on space-time modelling and forecasting has focused on integrating space-time autocorrelation into statistical models to increase the accuracy of forecasting. These models include space-time autoregressive integrated moving average (STARIMA) and its various extensions. However, they are inadequate for the cases when the correlation between data is dynamic and heterogeneous, such as traffic network data. The aim of the paper is to integrate spatial and temporal autocorrelations of road traffic network by developing a novel space-time delay neural network (STDNN) model that capture the autocorrelation locally and dynamically. Validation of the space-time delay neural network is carried out using real data from London road traffic network with 22 links by comparing benchmark models such as Naïve, ARIMA, and STARIMA models. Study results show that STDNN outperforms the Naïve, ARIMA, and STARIMA models in prediction accuracy and has considerable advantages in travel time prediction.
Discovering spatio-temporal dependencies based on time-lag in intelligent transportation data. Learning spatio-temporal dependency structure is meaningful to characterize causal or statistical relationships. In many real-world applications, dependency structure is often characterized by time-lag between variables. For example, in traffic systems and climate, time lag is a key feature of hidden temporal dependencies, and plays an essential role in interpreting the cause of discovered temporal dependencies. However, traditional dependency-learning algorithms only use data of variables from the same time stamp. In this paper, we propose a method for mining dependencies by considering the time lag. The proposed approach is based on a decomposition of the coefficients into products of two-level hierarchical coefficients, where one represents feature-level and the other represents time-level. Specifically, we capture the prior information of time lag in intelligent transportation data. We construct a probabilistic formulation by applying some probabilistic priors to these hierarchical coefficients, and devise an expectation-maximization (EM) algorithm to learn the model parameters. We evaluate our model on both synthetic and real-world highway traffic datasets. Experimental results show the effectiveness of our method.
Collective feature selection to identify crucial epistatic variants. In this study, we were able to show that selecting variables using a collective feature selection approach could help in selecting true positive epistatic variables more frequently than applying any single method for feature selection via simulation studies. We were able to demonstrate the effectiveness of collective feature selection along with a comparison of many methods in our simulation analysis. We also applied our method to identify non-linear networks associated with obesity.
Short-Term Traffic Forecasting by Mining the Non-Stationarity of Spatiotemporal Patterns Short-term traffic forecasting is important for the development of an intelligent traffic management system. Critical to the performance of the traffic prediction model utilized in such a system is accurate representation of the spatiotemporal traffic characteristics. This can be achieved by integrating spatiotemporal traffic information or the dynamic traffic characteristics in the modeling proce...
Subway Passenger Flow Prediction for Special Events Using Smart Card Data In order to reduce passenger delays and prevent severe overcrowding in the subway system, it is necessary to accurately predict the short-term passenger flow during special events. However, few studies have been conducted to predict the subway passenger flow under these conditions. Traditional methods, such as the autoregressive integrated moving average (ARIMA) model, were commonly used to analyze regular traffic demands. These methods usually neglected the volatility (heteroscedasticity) in passenger flow influenced by unexpected external factors. This paper, therefore, proposed a generic framework to analyze short-term passenger flow, considering the dynamic volatility and nonlinearity of passenger flow during special events. Four different generalized autoregressive conditional heteroscedasticity models, along with the ARIMA model, were used to model the mean and volatility of passenger flow based on the transit smart card data from two stations near the Olympic Sports Center, Nanjing, China. Multiple statistical methods were applied to evaluate the performance of the hybrid models. The results indicate that the volatility of passenger flow had significant nonlinear and asymmetric features during special events. The proposed framework could effectively capture the mean and volatility of passenger flow, and outperform the traditional methods in terms of accuracy and reliability. Overall, this paper can help transit agencies to better understand the deterministic and stochastic changes of the passenger flow, and implement precautionary countermeasures for large crowds in a subway station before and after special events.
Completely derandomized self-adaptation in evolution strategies. This paper puts forward two useful methods for self-adaptation of the mutation distribution - the concepts of derandomization and cumulation. Principle shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary, normal mutation distributions is equivalent to applying a general, linear problem encoding. The underlying objective of mutative strategy parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigorously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation - utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions usually a speed up factor of several orders of magnitude is observed. On moderately mis-scaled functions a speed up factor of three to ten can be expected.
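The cumulation idea above, accumulating selected steps into an evolution path rather than reacting to single steps, can be illustrated in a heavily simplified form. The sketch below is a (1,lambda)-ES with cumulative step-size adaptation only (no covariance matrix adaptation, and deliberately simplified constants), applied to the sphere function; it illustrates the path mechanism, not the full CMA scheme:

```python
import math
import random

def sphere(x):
    return sum(v * v for v in x)

def csa_es(f, x0, sigma=1.0, lam=10, iters=400, seed=0):
    """(1,lambda)-ES with cumulative step-size adaptation: an evolution
    path accumulates selected steps and controls sigma (constants simplified)."""
    rng = random.Random(seed)
    n = len(x0)
    x, path = list(x0), [0.0] * n
    c = 1.0 / math.sqrt(n)      # cumulation constant (simplified choice)
    damp = 1.0                  # step-size damping (simplified choice)
    chi_n = math.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n * n))  # approx E||N(0,I)||
    for _ in range(iters):
        # sample lambda offspring and keep the best (comma selection)
        trials = []
        for _ in range(lam):
            z = [rng.gauss(0.0, 1.0) for _ in range(n)]
            trials.append(([x[i] + sigma * z[i] for i in range(n)], z))
        y, z = min(trials, key=lambda t: f(t[0]))
        x = y
        # cumulation: fade the old path and add the newly selected step
        path = [(1 - c) * p + math.sqrt(c * (2 - c)) * zi for p, zi in zip(path, z)]
        # grow sigma when the path is longer than a pure random walk would predict
        norm = math.sqrt(sum(p * p for p in path))
        sigma *= math.exp((c / damp) * (norm / chi_n - 1.0))
    return x, f(x)

best, fbest = csa_es(sphere, [3.0, -2.0])
```

When consecutive selected steps point in correlated directions the path grows beyond its random-walk expectation and sigma is enlarged; anticorrelated (overshooting) steps shrink it.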
An online mechanism for multi-unit demand and its application to plug-in hybrid electric vehicle charging We develop an online mechanism for the allocation of an expiring resource to a dynamic agent population. Each agent has a non-increasing marginal valuation function for the resource, and an upper limit on the number of units that can be allocated in any period. We propose two versions on a truthful allocation mechanism. Each modifies the decisions of a greedy online assignment algorithm by sometimes cancelling an allocation of resources. One version makes this modification immediately upon an allocation decision while a second waits until the point at which an agent departs the market. Adopting a prior-free framework, we show that the second approach has better worst-case allocative efficiency and is more scalable. On the other hand, the first approach (with immediate cancellation) may be easier in practice because it does not need to reclaim units previously allocated. We consider an application to recharging plug-in hybrid electric vehicles (PHEVs). Using data from a real-world trial of PHEVs in the UK, we demonstrate higher system performance than a fixed price system, performance comparable with a standard, but non-truthful scheduling heuristic, and the ability to support 50% more vehicles at the same fuel cost than a simple randomized policy.
Blockchain Meets IoT: An Architecture for Scalable Access Management in IoT. The Internet of Things (IoT) is stepping out of its infancy into full maturity and establishing itself as a part of the future Internet. One of the technical challenges of having billions of devices deployed worldwide is the ability to manage them. Although access management technologies exist in IoT, they are based on centralized models which introduce a new variety of technical limitations to ma...
The contourlet transform: an efficient directional multiresolution image representation. The limitations of commonly used separable extensions of one-dimensional transforms, such as the Fourier and wavelet transforms, in capturing the geometry of image edges are well known. In this paper, we pursue a "true" two-dimensional transform that can capture the intrinsic geometrical structure that is key in visual information. The main challenge in exploring geometry in images comes from the discrete nature of the data. Thus, unlike other approaches, such as curvelets, that first develop a transform in the continuous domain and then discretize for sampled data, our approach starts with a discrete-domain construction and then studies its convergence to an expansion in the continuous domain. Specifically, we construct a discrete-domain multiresolution and multidirection expansion using nonseparable filter banks, in much the same way that wavelets were derived from filter banks. This construction results in a flexible multiresolution, local, and directional image expansion using contour segments, and, thus, it is named the contourlet transform. The discrete contourlet transform has a fast iterated filter bank algorithm that requires an order N operations for N-pixel images. Furthermore, we establish a precise link between the developed filter bank and the associated continuous-domain contourlet expansion via a directional multiresolution analysis framework. We show that with parabolic scaling and sufficient directional vanishing moments, contourlets achieve the optimal approximation rate for piecewise smooth functions with discontinuities along twice continuously differentiable curves. Finally, we show some numerical experiments demonstrating the potential of contourlets in several image processing applications. Index Terms-Contourlets, contours, filter banks, geometric image processing, multidirection, multiresolution, sparse representation, wavelets.
A novel full structure optimization algorithm for radial basis probabilistic neural networks. In this paper, a novel full structure optimization algorithm for radial basis probabilistic neural networks (RBPNN) is proposed. Firstly, a minimum volume covering hyperspheres (MVCH) algorithm is proposed to heuristically select the initial hidden layer centers of the RBPNN, and then the recursive orthogonal least square (ROLS) algorithm combined with the particle swarm optimization (PSO) algorithm is adopted to further optimize the initial structure of the RBPNN. Finally, the effectiveness and efficiency of our proposed algorithm are evaluated through a plant species identification task involving 50 plant species.
Online Prediction of Driver Distraction Based on Brain Activity Patterns This paper presents a new computational framework for early detection of driver distractions (map viewing) using brain activity measured by electroencephalographic (EEG) signals. Compared with most studies in the literature, which are mainly focused on the classification of distracted and nondistracted periods, this study proposes a new framework to prospectively predict the start and end of a distraction period, defined by map viewing. The proposed prediction algorithm was tested on a data set of continuous EEG signals recorded from 24 subjects. During the EEG recordings, the subjects were asked to drive from an initial position to a destination using a city map in a simulated driving environment. The overall accuracy values for the prediction of the start and the end of map viewing were 81% and 70%, respectively. The experimental results demonstrated that the proposed algorithm can predict the start and end of map viewing with relatively high accuracy and can be generalized to individual subjects. The outcome of this study has a high potential to improve the design of future intelligent navigation systems. Prediction of the start of map viewing can be used to provide route information based on a driver's needs and consequently avoid map-viewing activities. Prediction of the end of map viewing can be used to provide warnings for potential long map-viewing durations. Further development of the proposed framework and its applications in driver-distraction predictions are also discussed.
A blind medical image watermarking: DWT-SVD based robust and secure approach for telemedicine applications. In this paper, a blind image watermarking scheme based on discrete wavelet transform (DWT) and singular value decomposition (SVD) is proposed. In this scheme, DWT is applied on ROI (region of interest) of the medical image to get different frequency subbands of its wavelet decomposition. On the low frequency subband LL of the ROI, block-SVD is applied to get different singular matrices. A pair of elements with similar values is identified from the left singular value matrix of these selected blocks. The values of these pairs are modified using certain threshold to embed a bit of watermark content. Appropriate threshold is chosen to achieve the imperceptibility and robustness of medical image and watermark contents respectively. For authentication and identification of original medical image, one watermark image (logo) and other text watermark have been used. The watermark image provides authentication whereas the text data represents electronic patient record (EPR) for identification. At receiving end, blind recovery of both watermark contents is performed by a similar comparison scheme used during the embedding process. The proposed algorithm is applied on various groups of medical images like X-ray, CT scan and mammography. This scheme offers better visibility of watermarked image and recovery of watermark content due to DWT-SVD combination. Moreover, use of Hamming error correcting code (ECC) on EPR text bits reduces the BER and thus provides better recovery of EPR. The performance of the proposed algorithm with EPR data coding by Hamming code is compared with the BCH error correcting code, and it is found that the latter performs better. A result analysis shows that imperceptibility of the watermarked image is better, as PSNR is above 43 dB and WPSNR is above 52 dB for all sets of images. In addition, robustness of the scheme is better than the existing scheme for a similar set of medical images in terms of normalized correlation coefficient (NCC) and bit-error-rate (BER). An analysis is also carried out to verify the performance of the proposed scheme for different sizes of watermark content (image and EPR data). It is observed from the analysis that the proposed scheme is also appropriate for watermarking of color images. Using the proposed scheme, watermark contents are extracted successfully under various noise attacks like JPEG compression, filtering, Gaussian noise, salt and pepper noise, cropping and rotation. Performance comparison with existing schemes shows the proposed scheme has better robustness against different types of attacks. Moreover, the proposed scheme is also robust under a set of benchmark attacks known as checkmark attacks.
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal to develop interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which rise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in specific, in this paper, we enfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidate it based on our comprehensive empirical experience with research in kindergarten settings, both laboratory and real-world contexts, and (iii) provide a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0
Vibration Personalization with Evolutionary Algorithms. This demo presents a genetic algorithm to optimize the length and intensity parameters of vibration signals deployed in vibration patterns. Participants are able to interact with the system and personalize their vibration signals.
Picbreeder: evolving pictures collaboratively online Picbreeder is an online service that allows users to collaboratively evolve images. Like in other Interactive Evolutionary Computation (IEC) programs, users evolve images on Picbreeder by selecting ones that appeal to them to produce a new generation. However, Picbreeder also offers an online community in which to share these images, and most importantly, the ability to continue evolving others' images. Through this process of branching from other images, and through continually increasing image complexity made possible by the NeuroEvolution of Augmenting Topologies (NEAT) algorithm, evolved images proliferate unlike in any other current IEC systems. Participation requires no explicit talent from the users, thereby opening Picbreeder to the entire Internet community. This paper details how Picbreeder encourages innovation, featuring images that were collaboratively evolved.
On the effect of mirroring in the IPOP active CMA-ES on the noiseless BBOB testbed Mirrored mutations and active covariance matrix adaptation are two recent ideas to improve the well-known covariance matrix adaptation evolution strategy (CMA-ES)---a state-of-the-art algorithm for numerical optimization. It turns out that both mechanisms can be implemented simultaneously. In this paper, we investigate the impact of mirrored mutations on the so-called IPOP active CMA-ES. We find that additional mirrored mutations improve the IPOP active CMA-ES statistically significantly, but by only a small margin, on several functions while never a statistically significant performance decline can be observed. Furthermore, experiments on different function instances with some algorithm parameters and stopping criteria changed reveal essentially the same results.
Importance of Matching Physical Friction, Hardness, and Texture in Creating Realistic Haptic Virtual Surfaces. Interacting with physical objects through a tool elicits tactile and kinesthetic sensations that comprise your haptic impression of the object. These cues, however, are largely missing from interactions with virtual objects, yielding an unrealistic user experience. This article evaluates the realism of virtual surfaces rendered using haptic models constructed from data recorded during interactions with real surfaces. The models include three components: surface friction, tapping transients, and texture vibrations. We render the virtual surfaces on a SensAble Phantom Omni haptic interface augmented with a Tactile Labs Haptuator for vibration output. We conducted a human-subject study to assess the realism of these virtual surfaces and the importance of the three model components. Following a perceptual discrepancy paradigm, subjects compared each of 15 real surfaces to a full rendering of the same surface plus versions missing each model component. The realism improvement achieved by including friction, tapping, or texture in the rendering was found to directly relate to the intensity of the surface's property in that domain (slipperiness, hardness, or roughness). A subsequent analysis of forces and vibrations measured during interactions with virtual surfaces indicated that the Omni's inherent mechanical properties corrupted the user's haptic experience, decreasing realism of the virtual surface.
VibViz: Organizing, visualizing and navigating vibration libraries With haptics now common in consumer devices, diversity in tactile perception and aesthetic preferences confound haptic designers. End-user customization out of example sets is an obvious solution, but haptic collections are notoriously difficult to explore. This work addresses the provision of easy and highly navigable access to large, diverse sets of vibrotactile stimuli, on the premise that multiple access pathways facilitate discovery and engagement. We propose and examine five disparate organization schemes (taxonomies), describe how we created a 120-item library with diverse functional and affective characteristics, and present VibViz, an interactive tool for end-user library navigation and our own investigation of how different taxonomies can assist navigation. An exploratory user study with and of VibViz suggests that most users gravitate towards an organization based on sensory and emotional terms, but also exposes rich variations in their navigation patterns and insights into the basis of effective haptic library navigation.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
Massive MIMO for next generation wireless systems Multi-user MIMO offers big advantages over conventional point-to-point MIMO: it works with cheap single-antenna terminals, a rich scattering environment is not required, and resource allocation is simplified because every active terminal utilizes all of the time-frequency bins. However, multi-user MIMO, as originally envisioned, with roughly equal numbers of service antennas and terminals and frequency-division duplex operation, is not a scalable technology. Massive MIMO (also known as large-scale antenna systems, very large MIMO, hyper MIMO, full-dimension MIMO, and ARGOS) makes a clean break with current practice through the use of a large excess of service antennas over active terminals and time-division duplex operation. Extra antennas help by focusing energy into ever smaller regions of space to bring huge improvements in throughput and radiated energy efficiency. Other benefits of massive MIMO include extensive use of inexpensive low-power components, reduced latency, simplification of the MAC layer, and robustness against intentional jamming. The anticipated throughput depends on the propagation environment providing asymptotically orthogonal channels to the terminals, but so far experiments have not disclosed any limitations in this regard. While massive MIMO renders many traditional research problems irrelevant, it uncovers entirely new problems that urgently need attention: the challenge of making many low-cost low-precision components that work effectively together, acquisition and synchronization for newly joined terminals, the exploitation of extra degrees of freedom provided by the excess of service antennas, reducing internal power consumption to achieve total energy efficiency reductions, and finding new deployment scenarios. This article presents an overview of the massive MIMO concept and contemporary research on the topic.
Deep Residual Learning for Image Recognition Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers - 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
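The core reformulation, learning a residual F(x) and adding an identity shortcut so a block computes F(x) + x, fits in a few lines. A minimal NumPy sketch (a toy two-layer fully connected branch, not the paper's convolutional architecture):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """relu(F(x) + x) with F(x) = w2 @ relu(w1 @ x): the branch learns a
    residual, and an all-zero branch leaves the (rectified) input unchanged."""
    return relu(w2 @ relu(w1 @ x) + x)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
w1 = rng.standard_normal((4, 4))
w2 = np.zeros((4, 4))          # zero residual branch: output is just relu(x)
y = residual_block(x, w1, w2)
```

The zero-branch case shows why such blocks ease optimization of very deep stacks: the identity mapping is the trivial solution, so each layer only has to learn a perturbation of it.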
Communication theory of secrecy systems THE problems of cryptography and secrecy systems furnish an interesting application of communication theory.1 In this paper a theory of secrecy systems is developed. The approach is on a theoretical level and is intended to complement the treatment found in standard works on cryptography.2 There, a detailed study is made of the many standard types of codes and ciphers, and of the ways of breaking them. We will be more concerned with the general mathematical structure and properties of secrecy systems.
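The canonical example of the perfect secrecy studied in this theory is the one-time pad, where a uniformly random key as long as the message makes the ciphertext statistically independent of the plaintext. A minimal sketch:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    assert len(key) == len(data), "one-time pad key must match message length"
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))   # uniformly random key, used only once
ct = xor_bytes(msg, key)              # every ciphertext is equally likely
pt = xor_bytes(ct, key)               # XOR with the same key inverts it
```

Reusing the key breaks the scheme immediately (XOR of two ciphertexts reveals the XOR of the plaintexts), which is exactly the key-equivocation trade-off the theory quantifies.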
A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behaviour: a case study on the CEC’2005 Special Session on Real Parameter Optimization In recent years, there has been a growing interest in experimental analysis in the field of evolutionary algorithms. This is noticeable due to the existence of numerous papers which analyze and propose different types of problems, such as the basis for experimental comparisons of algorithms, proposals of different methodologies of comparison, or proposals of the use of different statistical techniques in algorithm comparison. In this paper, we focus our study on the use of statistical techniques in the analysis of evolutionary algorithms’ behaviour over optimization problems. A study of the conditions required for statistical analysis of the results is presented, using some models of evolutionary algorithms for real-coded optimization. This study is conducted in two ways: single-problem analysis and multiple-problem analysis. The results obtained state that a parametric statistical analysis may not be appropriate, especially when dealing with multiple-problem results. For multiple-problem analysis, we propose the use of non-parametric statistical tests, given that they are less restrictive than parametric ones and can be used over small samples of results. As a case study, we analyze the published results for the algorithms presented in the CEC’2005 Special Session on Real Parameter Optimization by using non-parametric test procedures.
Implementing Vehicle Routing Algorithms
GROPING: Geomagnetism and cROwdsensing Powered Indoor NaviGation Although a large number of WiFi fingerprinting based indoor localization systems have been proposed, our field experience with Google Maps Indoor (GMI), the only system available for public testing, shows that it is far from mature for indoor navigation. In this paper, we first report our field studies with GMI, as well as experiment results aiming to explain our unsatisfactory GMI experience. Then motivated by the obtained insights, we propose GROPING as a self-contained indoor navigation system independent of any infrastructural support. GROPING relies on geomagnetic fingerprints that are far more stable than WiFi fingerprints, and it exploits crowdsensing to construct floor maps rather than expecting individual venues to supply digitized maps. Based on our experiments with 20 participants in various floors of a big shopping mall, GROPING is able to deliver a sufficient accuracy for localization and thus provides smooth navigation experience.
Robust Sparse Linear Discriminant Analysis Linear discriminant analysis (LDA) is a very popular supervised feature extraction method and has been extended to different variants. However, classical LDA has the following problems: 1) The obtained discriminant projection does not have good interpretability for features. 2) LDA is sensitive to noise. 3) LDA is sensitive to the selection of number of projection directions. In this paper, a novel feature extraction method called robust sparse linear discriminant analysis (RSLDA) is proposed to solve the above problems. Specifically, RSLDA adaptively selects the most discriminative features for discriminant analysis by introducing the l2;1 norm. An orthogonal matrix and a sparse matrix are also simultaneously introduced to guarantee that the extracted features can hold the main energy of the original data and enhance the robustness to noise, and thus RSLDA has the potential to perform better than other discriminant methods. Extensive experiments on six databases demonstrate that the proposed method achieves the competitive performance compared with other state-of-the-art feature extraction methods. Moreover, the proposed method is robust to the noisy data. IEEE
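For reference, classical (non-sparse) LDA in the two-class case reduces to Fisher's discriminant direction w = Sw^{-1}(m1 - m0), the baseline that RSLDA regularizes with the l2,1 norm. A minimal NumPy sketch on synthetic data (an illustration of the classical method, not the paper's algorithm):

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Two-class Fisher discriminant: w = Sw^{-1} (m1 - m0), maximizing
    between-class separation relative to within-class scatter."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T, bias=True) * len(X0) + np.cov(X1.T, bias=True) * len(X1)
    # small ridge term keeps the solve stable when Sw is near-singular
    return np.linalg.solve(Sw + 1e-8 * np.eye(len(m0)), m1 - m0)

rng = np.random.default_rng(1)
X0 = rng.standard_normal((100, 2)) + np.array([-2.0, 0.0])  # class 0 near (-2, 0)
X1 = rng.standard_normal((100, 2)) + np.array([2.0, 0.0])   # class 1 near (+2, 0)
w = fisher_lda_direction(X0, X1)
proj0, proj1 = X0 @ w, X1 @ w      # 1-D projections onto the discriminant
```

The noise sensitivity RSLDA targets is visible here: a few corrupted rows in X0 or X1 shift both the means and Sw, tilting w away from the discriminative direction.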
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
scores: 1.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0, 0
Haptic feedback for enhancing realism of walking simulations. In this paper, we describe several experiments whose goal is to evaluate the role of plantar vibrotactile feedback in enhancing the realism of walking experiences in multimodal virtual environments. To achieve this goal we built an interactive and a noninteractive multimodal feedback system. While during the use of the interactive system subjects physically walked, during the use of the noninteractive system the locomotion was simulated while subjects were sitting on a chair. In both the configurations subjects were exposed to auditory and audio-visual stimuli presented with and without the haptic feedback. Results of the experiments provide a clear preference toward the simulations enhanced with haptic feedback showing that the haptic channel can lead to more realistic experiences in both interactive and noninteractive configurations. The majority of subjects clearly appreciated the added feedback. However, some subjects found the added feedback unpleasant. This might be due, on one hand, to the limits of the haptic simulation and, on the other hand, to the different individual desire to be involved in the simulations. Our findings can be applied to the context of physical navigation in multimodal virtual environments as well as to enhance the user experience of watching a movie or playing a video game.
A global averaging method for dynamic time warping, with applications to clustering Mining sequential data is an old topic that has been revived in the last decade, due to the increasing availability of sequential datasets. Most works in this field are centred on the definition and use of a distance (or, at least, a similarity measure) between sequences of elements. A measure called dynamic time warping (DTW) seems to be currently the most relevant for a large panel of applications. This article is about the use of DTW in data mining algorithms, and focuses on the computation of an average of a set of sequences. Averaging is an essential tool for the analysis of data. For example, the K-means clustering algorithm repeatedly computes such an average, and needs to provide a description of the clusters it forms. Averaging is here a crucial step, which must be sound in order to make algorithms work accurately. When dealing with sequences, especially when sequences are compared with DTW, averaging is not a trivial task. Starting with existing techniques developed around DTW, the article suggests an analysis framework to classify averaging techniques. It then proceeds to study the two major questions lifted by the framework. First, we develop a global technique for averaging a set of sequences. This technique is original in that it avoids using iterative pairwise averaging. It is thus insensitive to ordering effects. Second, we describe a new strategy to reduce the length of the resulting average sequence. This has a favourable impact on performance, but also on the relevance of the results. Both aspects are evaluated on standard datasets, and the evaluation shows that they compare favourably with existing methods. The article ends by describing the use of averaging in clustering. The last section also introduces a new application domain, namely the analysis of satellite image time series, where data mining techniques provide an original approach.
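The DTW measure at the center of this averaging work is itself a short dynamic program: cell (i, j) holds the cheapest cost of aligning the first i elements of one sequence with the first j of the other, so repeats and stretches cost nothing when values match. A minimal sketch:

```python
import math

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic program for dynamic time warping."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the best alignment of the three possible predecessors
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

d = dtw_distance([1, 2, 3, 3], [1, 2, 2, 3])   # warping absorbs the repeats
```

Because this alignment is many-to-many, a naive element-wise mean of sequences is meaningless under DTW, which is why the article's averaging technique works on the alignment structure rather than on positions.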
Touch Is Everywhere: Floor Surfaces as Ambient Haptic Interfaces Floor surfaces are notable for the diverse roles that they play in our negotiation of everyday environments. Haptic communication via floor surfaces could enhance or enable many computer-supported activities that involve movement on foot. In this paper, we discuss potential applications of such interfaces in everyday environments and present a haptically augmented floor component through which several interaction methods are being evaluated. We describe two approaches to the design of structured vibrotactile signals for this device. The first is centered on a musical phrase metaphor, as employed in prior work on tactile display. The second is based upon the synthesis of rhythmic patterns of virtual physical impact transients. We report on an experiment in which participants were able to identify communication units that were constructed from these signals and displayed via a floor interface at well above chance levels. The results support the feasibility of tactile information display via such interfaces and provide further indications as to how to effectively design vibrotactile signals for them.
Ambiotherm: Enhancing Sense of Presence in Virtual Reality by Simulating Real-World Environmental Conditions. In this paper, we present and evaluate Ambiotherm, a wearable accessory for Head Mounted Displays (HMD) that provides thermal and wind stimuli to simulate real-world environmental conditions, such as ambient temperatures and wind conditions, to enhance the sense of presence in Virtual Reality (VR). Ambiotherm consists of an Ambient Temperature Module that is attached to the user's neck, a Wind Simulation Module focused towards the user's face, and a Control Module utilizing Bluetooth communication. We demonstrate Ambiotherm with two VR environments, a hot desert, and a snowy mountain, to showcase the different types of simulated environmental conditions. We conduct several studies to 1) address design factors of the system and 2) evaluate Ambiotherm's effect on factors related to a user's sense of presence. Our findings show that the addition of wind and thermal stimuli significantly improves sensory and realism factors, contributing towards an enhanced sense of presence when compared to traditional VR experiences.
Adding Proprioceptive Feedback to Virtual Reality Experiences Using Galvanic Vestibular Stimulation. We present a small and lightweight wearable device that enhances virtual reality experiences and reduces cybersickness by means of galvanic vestibular stimulation (GVS). GVS is a specific way to elicit vestibular reflexes that has been used for over a century to study the function of the vestibular system. In addition to GVS, we support physiological sensing by connecting heart rate, electrodermal activity and other sensors to our wearable device using a plug and play mechanism. An accompanying Android app communicates with the device over Bluetooth (BLE) for transmitting the GVS stimulus to the user through electrodes attached behind the ears. Our system supports multiple categories of virtual reality applications with different types of virtual motion such as driving, navigating by flying, teleporting, or riding. We present a user study in which participants (N = 20) experienced significantly lower cybersickness when using our device and rated experiences with GVS-induced haptic feedback as significantly more immersive than a no-GVS baseline.
Improvement of olfactory display using solenoid valves Research on the olfactory sense in virtual reality has gradually expanded even though the technology is still immature. We have developed an olfactory display composed of multiple solenoid valves. In the present study, an extended olfactory display, in which 32 component odors can be blended in any recipe, is described; the previous version had only 8 odor components. The size was unchanged even though the number of odor components was four times larger than in the previous display. The complexity of blending was greatly reduced because of algorithm improvement. The blending method and a fundamental experiment using a QCM (quartz crystal microbalance) sensor are described here.
Probabilistic encryption A new probabilistic model of data encryption is introduced. For this model, under suitable complexity assumptions, it is proved that extracting any information about the cleartext from the ciphertext is hard on the average for an adversary with polynomially bounded computational resources. The proof holds for any message space with any probability distribution. The first implementation of this model is presented. The security of this implementation is proved under the intractability assumption of deciding Quadratic Residuosity modulo composite numbers whose factorization is unknown.
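The quadratic-residuosity construction described above (the Goldwasser-Micali scheme) can be sketched with toy parameters; the primes below are far too small for real security and are purely illustrative:

```python
import math
import random

def legendre(a, p):
    """Euler's criterion: returns 1 iff a is a quadratic residue mod odd prime p."""
    return pow(a, (p - 1) // 2, p)

def gm_keygen():
    p, q = 499, 547          # toy primes; real keys use primes of ~1024 bits
    N = p * q
    y = 2                    # non-residue mod both p and q, so Jacobi(y/N) = +1
    assert legendre(y, p) != 1 and legendre(y, q) != 1
    return (N, y), (p, q)

def gm_encrypt(bit, pub):
    N, y = pub
    r = random.randrange(2, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(2, N)
    # bit 0 -> random quadratic residue; bit 1 -> random non-residue mod N
    return (pow(y, bit, N) * pow(r, 2, N)) % N

def gm_decrypt(c, priv):
    p, _q = priv
    return 0 if legendre(c % p, p) == 1 else 1

pub, priv = gm_keygen()
msg = [1, 0, 1, 1, 0]
cts = [gm_encrypt(b, pub) for b in msg]
```

Each ciphertext leaks nothing beyond its residuosity class, which is exactly what the quadratic-residuosity assumption makes hard to decide without the factorization.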
A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm Swarm intelligence is a research branch that models the population of interacting agents or swarms that are able to self-organize. An ant colony, a flock of birds or an immune system is a typical example of a swarm system. Bees' swarming around their hive is another example of swarm intelligence. Artificial Bee Colony (ABC) Algorithm is an optimization algorithm based on the intelligent behaviour of honey bee swarm. In this work, ABC algorithm is used for optimizing multivariable functions and the results produced by ABC, Genetic Algorithm (GA), Particle Swarm Algorithm (PSO) and Particle Swarm Inspired Evolutionary Algorithm (PS-EA) have been compared. The results showed that ABC outperforms the other algorithms.
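As a rough illustration of the ABC mechanics described above (employed, onlooker and scout phases), here is a minimal sketch on a sphere test function; the population size, abandonment limit and iteration count are arbitrary choices, not the paper's settings, and the onlooker weighting assumes a nonnegative objective:

```python
import random

def abc_minimize(f, dim, bounds, n_food=10, limit=20, iters=200, seed=0):
    """Minimal artificial bee colony: employed, onlooker and scout phases."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [f(x) for x in foods]
    trials = [0] * n_food

    def neighbour(i):
        k = rng.randrange(n_food - 1)        # random partner food source != i
        if k >= i:
            k += 1
        j = rng.randrange(dim)
        x = foods[i][:]
        x[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        x[j] = min(max(x[j], lo), hi)
        return x

    def greedy(i):
        x = neighbour(i)
        fx = f(x)
        if fx < fits[i]:
            foods[i], fits[i], trials[i] = x, fx, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):              # employed bees: one trial per source
            greedy(i)
        weights = [1.0 / (1.0 + ft) for ft in fits]  # onlookers prefer good sources (f >= 0)
        for _ in range(n_food):
            greedy(rng.choices(range(n_food), weights=weights)[0])
        for i in range(n_food):              # scouts abandon stale sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fits[i] = f(foods[i])
                trials[i] = 0
    best = min(range(n_food), key=lambda i: fits[i])
    return foods[best], fits[best]

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = abc_minimize(sphere, dim=3, bounds=(-5.0, 5.0))
```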
Toward Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions The ever-increasing number of resource-constrained machine-type communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as enhanced mobile broadband (eMBB), massive machine type communications (mMTCs), and ultra-reliable and low latency communications (URLLCs), the mMTC brings the unique technical challenge of supporting a huge number of MTC devices in cellular networks, which is the main focus of this paper. The related challenges include quality of service (QoS) provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead, and radio access network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. Along with the highlights on the inefficiency of the legacy random access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms in the emerging cellular IoT standards, namely, LTE-M and narrowband IoT (NB-IoT). Subsequently, we present a framework for the performance analysis of transmission scheduling with the QoS support along with the issues involved in short data packet transmission. 
Next, we provide a detailed overview of the existing and emerging solutions toward addressing the RAN congestion problem, and then identify potential advantages, challenges, and use cases for the applications of emerging machine learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of the low-complexity Q-learning approach in the mMTC scenario along with the recent advances toward enhancing its learning performance and convergence. Finally, we discuss some open research challenges and promising future research directions.
Priced Oblivious Transfer: How to Sell Digital Goods We consider the question of protecting the privacy of customers buying digital goods. More specifically, our goal is to allow a buyer to purchase digital goods from a vendor without letting the vendor learn what, and to the extent possible also when and how much, it is buying. We propose solutions which allow the buyer, after making an initial deposit, to engage in an unlimited number of priced oblivious-transfer protocols, satisfying the following requirements: As long as the buyer's balance contains sufficient funds, it will successfully retrieve the selected item and its balance will be debited by the item's price. However, the buyer should be unable to retrieve an item whose cost exceeds its remaining balance. The vendor should learn nothing except what must inevitably be learned, namely, the amount of interaction and the initial deposit amount (which imply upper bounds on the quantity and total price of all information obtained by the buyer). In particular, the vendor should be unable to learn what the buyer's current balance is or when it actually runs out of its funds. The technical tools we develop, in the process of solving this problem, seem to be of independent interest. In particular, we present the first one-round (two-pass) protocol for oblivious transfer that does not rely on the random oracle model (a very similar protocol was independently proposed by Naor and Pinkas [21]). This protocol is a special case of a more general "conditional disclosure" methodology, which extends a previous approach from [11] and adapts it to the 2-party setting.
Data-Driven Intelligent Transportation Systems: A Survey For the last two decades, intelligent transportation systems (ITS) have emerged as an efficient way of improving the performance of transportation systems, enhancing travel security, and providing more choices to travelers. A significant change in ITS in recent years is that much more data are collected from a variety of sources and can be processed into various forms for different stakeholders. The availability of a large amount of data can potentially lead to a revolution in ITS development, changing an ITS from a conventional technology-driven system into a more powerful multifunctional data-driven intelligent transportation system (D2ITS): a system that is vision, multisource, and learning-algorithm driven to optimize its performance. Furthermore, D2ITS is trending to become a privacy-aware, people-centric, more intelligent system. In this paper, we provide a survey on the development of D2ITS, discussing the functionality of its key components and some deployment issues associated with D2ITS. Future research directions for the development of D2ITS are also presented.
Reinforcement Q-learning for optimal tracking control of linear discrete-time systems with unknown dynamics. In this paper, a novel approach based on the Q-learning algorithm is proposed to solve the infinite-horizon linear quadratic tracker (LQT) for unknown discrete-time systems in a causal manner. It is assumed that the reference trajectory is generated by a linear command generator system. An augmented system composed of the original system and the command generator is constructed and it is shown that the value function for the LQT is quadratic in terms of the state of the augmented system. Using the quadratic structure of the value function, a Bellman equation and an augmented algebraic Riccati equation (ARE) for solving the LQT are derived. In contrast to the standard solution of the LQT, which requires the solution of an ARE and a noncausal difference equation simultaneously, in the proposed method the optimal control input is obtained by only solving an augmented ARE. A Q-learning algorithm is developed to solve online the augmented ARE without any knowledge about the system dynamics or the command generator. Convergence to the optimal solution is shown. A simulation example is used to verify the effectiveness of the proposed control scheme.
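The augmented-ARE recursion underlying this approach can be sketched on a toy scalar plant with a constant reference. This is a model-based iteration of the Q-function kernel H, whereas the paper's Q-learning estimates H from data without knowing the system matrices; the plant, cost weights and discount factor below are illustrative assumptions:

```python
import numpy as np

# Augmented state z = [x; r]: plant x+ = 0.8 x + u tracks reference r+ = r.
A = np.array([[0.8, 0.0],
              [0.0, 1.0]])
B = np.array([[1.0],
              [0.0]])
# Tracking cost (x - r)^2 + 0.1 u^2, written as z^T Qz z + u^T R u.
Qz = np.array([[1.0, -1.0],
               [-1.0, 1.0]])
R = np.array([[0.1]])
gamma = 0.9            # discount keeps the value finite for a persistent reference

P = np.zeros((2, 2))
K = np.zeros((1, 2))
for _ in range(500):
    # Kernel H of the quadratic Q-function in the augmented LQT Bellman equation.
    H = np.block([[Qz + gamma * A.T @ P @ A, gamma * A.T @ P @ B],
                  [gamma * B.T @ P @ A, R + gamma * B.T @ P @ B]])
    K = np.linalg.solve(H[2:, 2:], H[2:, :2])      # greedy policy u = -K z
    P = Qz + gamma * A.T @ P @ A - gamma * A.T @ P @ B @ K
```

At the fixed point, P solves the discounted augmented ARE and K is the causal tracking gain obtained without any noncausal difference equation.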
A multi-objective and PSO based energy efficient path design for mobile sink in wireless sensor networks. Data collection through mobile sink (MS) in wireless sensor networks (WSNs) is an effective solution to the hot-spot or sink-hole problem caused by multi-hop routing using the static sink. Rendezvous point (RP) based MS path design is a common and popular technique used in this regard. However, design of the optimal path is a well-known NP-hard problem. Therefore, an evolutionary approach like multi-objective particle swarm optimization (MOPSO) can prove to be a very promising and reasonable approach to solve the same. In this paper, we first present a Linear Programming formulation for the stated problem and then, propose an MOPSO-based algorithm to design an energy efficient trajectory for the MS. The algorithm is presented with an efficient particle encoding scheme and derivation of a proficient multi-objective fitness function. We use Pareto dominance in MOPSO for obtaining both local and global best guides for each particle. We carry out rigorous simulation experiments on the proposed algorithm and compare the results with two existing algorithms namely, tree cluster based data gathering algorithm (TCBDGA) and energy aware sink relocation (EASR). The results demonstrate that the proposed algorithm performs better than both of them in terms of various performance metrics. The results are also validated through the statistical test, analysis of variance (ANOVA) and its least significant difference (LSD) post hoc analysis.
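A minimal sketch of the underlying particle swarm mechanics, reduced to a single-objective global-best PSO on a stand-in objective (the paper's MOPSO additionally maintains a Pareto archive and per-particle guides for the multi-objective trajectory design); all coefficients below are conventional defaults, not values from the paper:

```python
import random

def pso_minimize(f, dim, bounds, n_particles=20, iters=100, seed=1):
    """Global-best PSO with inertia plus cognitive and social pulls."""
    rng = random.Random(seed)
    lo, hi = bounds
    w, c1, c2 = 0.7, 1.5, 1.5
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            fi = f(pos[i])
            if fi < pbest_f[i]:               # update personal and global bests
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

tour_len = lambda x: sum(v * v for v in x)    # stand-in objective (e.g., tour length)
best, best_f = pso_minimize(tour_len, dim=2, bounds=(-10.0, 10.0))
```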
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.066667, 0, 0, 0, 0, 0, 0, 0, 0
Dynamic Graph-Based Feature Learning With Few Edges Considering Noisy Samples for Rotating Machinery Fault Diagnosis Due to its ability to learn the relationship among nodes from graph data, the graph convolution network (GCN) has received extensive attention. In the machine fault diagnosis field, it needs to construct input graphs reflecting features and relationships of the monitoring signals. Thus, the quality of the input graph affects the diagnostic performance. But it still has two limitations: 1) the cons...
Analysing user physiological responses for affective video summarisation. Video summarisation techniques aim to abstract the most significant content from a video stream. This is typically achieved by processing low-level image, audio and text features which are still quite disparate from the high-level semantics that end users identify with (the ‘semantic gap’). Physiological responses are potentially rich indicators of memorable or emotionally engaging video content for a given user. Consequently, we investigate whether they may serve as a suitable basis for a video summarisation technique by analysing a range of user physiological response measures, specifically electro-dermal response (EDR), respiration amplitude (RA), respiration rate (RR), blood volume pulse (BVP) and heart rate (HR), in response to a range of video content in a variety of genres including horror, comedy, drama, sci-fi and action. We present an analysis framework for processing the user responses to specific sub-segments within a video stream based on percent rank value normalisation. The application of the analysis framework reveals that users respond significantly to the most entertaining video sub-segments in a range of content domains. Specifically, horror content seems to elicit significant EDR, RA, RR and BVP responses, and comedy content elicits comparatively lower levels of EDR, but does seem to elicit significant RA, RR, BVP and HR responses. Drama content seems to elicit less significant physiological responses in general, and both sci-fi and action content seem to elicit significant EDR responses. We discuss the implications this may have for future affective video summarisation approaches.
On the roles of eye gaze and head dynamics in predicting driver's intent to change lanes Driver behavioral cues may present a rich source of information and feedback for future intelligent advanced driver-assistance systems (ADASs). With the design of a simple and robust ADAS in mind, we are interested in determining the most important driver cues for distinguishing driver intent. Eye gaze may provide a more accurate proxy than head movement for determining driver attention, whereas the measurement of head motion is less cumbersome and more reliable in harsh driving conditions. We use a lane-change intent-prediction system (McCall et al., 2007) to determine the relative usefulness of each cue for determining intent. Various combinations of input data are presented to a discriminative classifier, which is trained to output a prediction of probable lane-change maneuver at a particular point in the future. Quantitative results from a naturalistic driving study are presented and show that head motion, when combined with lane position and vehicle dynamics, is a reliable cue for lane-change intent prediction. The addition of eye gaze does not improve performance as much as simpler head dynamics cues. The advantage of head data over eye data is shown to be statistically significant (p
Detection of Driver Fatigue Caused by Sleep Deprivation This paper aims to provide reliable indications of driver drowsiness based on the characteristics of driver-vehicle interaction. A test bed was built under a simulated driving environment, and a total of 12 subjects participated in two experiment sessions requiring different levels of sleep (partial sleep-deprivation versus no sleep-deprivation) before the experiment. The performance of the subjects was analyzed in a series of stimulus-response and routine driving tasks, which revealed the performance differences of drivers under different sleep-deprivation levels. The experiments further demonstrated that sleep deprivation had greater effect on rule-based than on skill-based cognitive functions: when drivers were sleep-deprived, their performance of responding to unexpected disturbances degraded, while they were robust enough to continue the routine driving tasks such as lane tracking, vehicle following, and lane changing. In addition, we presented both qualitative and quantitative guidelines for designing drowsy-driver detection systems in a probabilistic framework based on the paradigm of Bayesian networks. Temporal aspects of drowsiness and individual differences of subjects were addressed in the framework.
Online Prediction of Driver Distraction Based on Brain Activity Patterns This paper presents a new computational framework for early detection of driver distractions (map viewing) using brain activity measured by electroencephalographic (EEG) signals. Compared with most studies in the literature, which are mainly focused on the classification of distracted and nondistracted periods, this study proposes a new framework to prospectively predict the start and end of a distraction period, defined by map viewing. The proposed prediction algorithm was tested on a data set of continuous EEG signals recorded from 24 subjects. During the EEG recordings, the subjects were asked to drive from an initial position to a destination using a city map in a simulated driving environment. The overall accuracy values for the prediction of the start and the end of map viewing were 81% and 70%, respectively. The experimental results demonstrated that the proposed algorithm can predict the start and end of map viewing with relatively high accuracy and can be generalized to individual subjects. The outcome of this study has a high potential to improve the design of future intelligent navigation systems. Prediction of the start of map viewing can be used to provide route information based on a driver's needs and consequently avoid map-viewing activities. Prediction of the end of map viewing can be used to provide warnings for potential long map-viewing durations. Further development of the proposed framework and its applications in driver-distraction predictions are also discussed.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
3D separable convolutional neural network for dynamic hand gesture recognition.
•The Frame Difference method is used to pre-process the input in order to filter the background.
•A 3D separable CNN is proposed for dynamic gesture recognition. The standard 3D convolution process is decomposed into two processes: 3D depth-wise and 3D point-wise.
•By the application of skip connection and layer-wise learning rate, the undesirable gradient dispersion due to the separation operation is solved and the performance of the network is improved.
•A dynamic hand gesture library is built through HoloLens.
Cooperative channel assignment for VANETs based on multiagent reinforcement learning. Dynamic channel assignment (DCA) plays a key role in extending vehicular ad-hoc network capacity and mitigating congestion. However, channel assignment under vehicular direct communication scenarios faces mutual influence of large-scale nodes, the lack of centralized coordination, unknown global state information, and other challenges. To solve this problem, a multiagent reinforcement learning (RL) based cooperative DCA (RL-CDCA) mechanism is proposed. Specifically, each vehicular node can successfully learn the proper strategies of channel selection and backoff adaptation from the real-time channel state information (CSI) using two cooperative RL models. In addition, neural networks are constructed as nonlinear Q-function approximators, which facilitates the mapping of the continuously sensed input to the mixed policy output. Nodes are driven to locally share and incorporate their individual rewards such that they can optimize their policies in a distributed collaborative manner. Simulation results show that the proposed multiagent RL-CDCA can better reduce the one-hop packet delay by no less than 73.73%, improve the packet delivery ratio by no less than 12.66% on average in a highly dense situation, and improve the fairness of the global network resource allocation.
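The stateless core of such cooperative channel learning can be sketched with independent Q-learners rewarded for picking a collision-free channel; this toy omits the paper's CSI input, backoff adaptation and neural Q-function approximators, and all hyperparameters are illustrative:

```python
import random

def learn_channels(n_agents=3, n_channels=3, steps=5000, alpha=0.1, eps=0.1, seed=0):
    """Independent stateless Q-learners: reward 1 for a collision-free channel, else 0."""
    rng = random.Random(seed)
    Q = [[0.0] * n_channels for _ in range(n_agents)]
    for _ in range(steps):
        # Epsilon-greedy channel choice per agent, no central coordination.
        acts = [rng.randrange(n_channels) if rng.random() < eps
                else max(range(n_channels), key=lambda c: Q[i][c])
                for i in range(n_agents)]
        for i, a in enumerate(acts):
            r = 1.0 if acts.count(a) == 1 else 0.0   # collision -> zero reward
            Q[i][a] += alpha * (r - Q[i][a])         # one-step bandit Q update
    return Q

Q = learn_channels()
greedy = [max(range(3), key=lambda c: Q[i][c]) for i in range(3)]
```

Because colliding channels keep earning zero reward, agents drift toward distinct channels purely from their local reward signals, which is the anti-coordination effect the full RL-CDCA mechanism builds on.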
Reinforcement learning based data fusion method for multi-sensors In order to improve detection system robustness and reliability, multi-sensor fusion is used in modern air combat. In this paper, a data fusion method based on reinforcement learning is developed for multiple sensors. Initially, cubic B-spline interpolation is used to solve the time-alignment problem of multi-source data. Then, the reinforcement learning based data fusion (RLBDF) method is proposed to obtain the fusion results. When a priori knowledge of the target is available, fusion accuracy is reinforced via the error between the fused value and the actual value. When such a priori knowledge cannot be obtained, the Fisher information is used as the reward instead. Simulation results verify that the developed method is feasible and effective for multi-sensor data fusion in air combat.
Mobile Edge Computing Enabled 5G Health Monitoring for Internet of Medical Things: A Decentralized Game Theoretic Approach The prompt evolution of Internet of Medical Things (IoMT) promotes pervasive in-home health monitoring networks. However, excessive requirements of patients result in insufficient spectrum resources and communication overload. Mobile Edge Computing (MEC) enabled 5G health monitoring is conceived as a favorable paradigm to tackle such an obstacle. In this paper, we construct a cost-efficient in-home health monitoring system for IoMT by dividing it into two sub-networks, i.e., intra-Wireless Body Area Networks (WBANs) and beyond-WBANs. Highlighting the characteristics of IoMT, the cost of patients depends on medical criticality, Age of Information (AoI) and energy consumption. For intra-WBANs, a cooperative game is formulated to allocate the wireless channel resources. While for beyond-WBANs, considering the individual rationality and potential selfishness, a decentralized non-cooperative game is proposed to minimize the system-wide cost in IoMT. We prove that the proposed algorithm can reach a Nash equilibrium. In addition, the upper bound of the algorithm time complexity and the number of patients benefiting from MEC is theoretically derived. Performance evaluations demonstrate the effectiveness of our proposed algorithm with respect to the system-wide cost and the number of patients benefiting from MEC.
Artificial fish swarm algorithm: a survey of the state-of-the-art, hybridization, combinatorial and indicative applications AFSA (artificial fish-swarm algorithm) is one of the best optimization methods among the swarm intelligence algorithms. This algorithm is inspired by the collective movement of fish and their various social behaviors. Based on a series of instinctive behaviors, the fish always try to maintain their colonies and accordingly demonstrate intelligent behaviors. Searching for food, immigration and dealing with dangers all happen in a social form, and interactions between all fish in a group result in an intelligent social behavior. This algorithm has many advantages, including high convergence speed, flexibility, fault tolerance and high accuracy. This paper is a review of AFSA that describes the evolution of the algorithm along with all improvements, its combination with various methods, as well as its applications. Many optimization methods have an affinity with AFSA, and combining them can improve its performance. Its disadvantages include high time complexity, lack of balance between global and local search, and failure to benefit from the experiences of group members for subsequent movements.
Short-Term Traffic Flow Forecasting: An Experimental Comparison of Time-Series Analysis and Supervised Learning The literature on short-term traffic flow forecasting has undergone great development recently. Many works, describing a wide variety of different approaches, which very often share similar features and ideas, have been published. However, publications presenting new prediction algorithms usually employ different settings, data sets, and performance measurements, making it difficult to infer a clear picture of the advantages and limitations of each model. The aim of this paper is twofold. First, we review existing approaches to short-term traffic flow forecasting methods under the common view of probabilistic graphical models, presenting an extensive experimental comparison, which proposes a common baseline for their performance analysis and provides the infrastructure to operate on a publicly available data set. Second, we present two new support vector regression models, which are specifically devised to benefit from typical traffic flow seasonality and are shown to represent an interesting compromise between prediction accuracy and computational efficiency. The SARIMA model coupled with a Kalman filter is the most accurate model; however, the proposed seasonal support vector regressor turns out to be highly competitive when performing forecasts during the most congested periods.
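A hedged illustration of why seasonality helps: a plain least-squares model with a lag-1 and a seasonal-lag feature (not the paper's seasonal SVR or SARIMA models), fitted on synthetic daily-periodic flow data that is an assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
period = 24                                  # hourly counts with daily seasonality
t = np.arange(period * 60)                   # 60 synthetic days
flow = 100 + 30 * np.sin(2 * np.pi * t / period) + rng.normal(0, 3, t.size)

# Features: intercept, last observation (lag 1), and the observation one season back.
X = np.column_stack([np.ones(t.size - period),
                     flow[period - 1:-1],    # lag 1
                     flow[:-period]])        # seasonal lag
y = flow[period:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))

# Baseline: naive "repeat the last value" forecast.
naive = float(np.sqrt(np.mean((y - flow[period - 1:-1]) ** 2)))
```

The seasonal-lag feature lets the linear model track the periodic component, so its error approaches the noise floor while the naive forecast pays for the within-day trend.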
TSCA: A Temporal-Spatial Real-Time Charging Scheduling Algorithm for On-Demand Architecture in Wireless Rechargeable Sensor Networks. The collaborative charging issue in Wireless Rechargeable Sensor Networks (WRSNs) is a popular research problem. With the help of wireless power transfer technology, electrical energy can be transferred from wireless charging vehicles (WCVs) to sensors, providing a new paradigm to prolong network lifetime. Existing techniques on collaborative charging usually take the periodical and deterministic approach, but neglect influences of non-deterministic factors such as topological changes and node failures, making them unsuitable for large-scale WRSNs. In this paper, we develop a temporal-spatial charging scheduling algorithm, namely TSCA, for the on-demand charging architecture. We aim to minimize the number of dead nodes while maximizing energy efficiency to prolong network lifetime. First, after gathering charging requests, a WCV will compute a feasible movement solution. A basic path planning algorithm is then introduced to adjust the charging order for better efficiency. Furthermore, optimizations are made in a global level. Then, a node deletion algorithm is developed to remove low efficient charging nodes. Lastly, a node insertion algorithm is executed to avoid the death of abandoned nodes. Extensive simulations show that, compared with state-of-the-art charging scheduling algorithms, our scheme can achieve promising performance in charging throughput, charging efficiency, and other performance metrics.
A novel adaptive dynamic programming based on tracking error for nonlinear discrete-time systems In this paper, to eliminate the tracking error when using adaptive dynamic programming (ADP) algorithms, a novel formulation of the value function is presented for the optimal tracking problem (TP) of nonlinear discrete-time systems. Unlike existing ADP methods, this formulation introduces the control input into the tracking error and directly omits the quadratic form of the control input, which makes the boundedness and convergence of the value function independent of the discount factor. Based on the proposed value function, the optimal control policy can be deduced without considering the reference control input. Value iteration (VI) and policy iteration (PI) methods are applied to prove the optimality of the obtained control policy, and the monotonicity property and convergence of the iterative value function are derived. Simulation examples realized with neural networks and the actor-critic structure are provided to verify the effectiveness of the proposed ADP algorithm.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0
Live Model Transformations Driven by Incremental Pattern Matching In the current paper, we introduce a live model transformation framework, which continuously maintains a transformation context such that model changes to source inputs can be readily identified, and their effects can be incrementally propagated. Our framework builds upon an incremental pattern matcher engine, which keeps track of matches of complex contextual constraints captured in the form of graph patterns. As a result, complex model changes can be treated as elementary change events. Reactions to the changes of match sets are specified by graph transformation rules with a novel transactional execution semantics incorporating both pseudo-parallel and serializable behaviour.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that cannot be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
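The modified n-gram precision and brevity penalty at the heart of BLEU can be sketched in a few lines; this unsmoothed sentence-level variant returns 0 whenever any n-gram order has no match, unlike the smoothed implementations used in practice:

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: clipped n-gram precision, geometric mean, brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        c_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        r_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # Clip each candidate n-gram count by its count in the reference.
        clipped = sum(min(c, r_ngrams[g]) for g, c in c_ngrams.items())
        total = max(sum(c_ngrams.values()), 1)
        if clipped == 0:
            return 0.0            # one zero precision zeroes the geometric mean
        log_prec += math.log(clipped / total) / max_n
    # Brevity penalty punishes candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_prec)
```

For a corpus-level score the clipped counts and lengths are accumulated over all sentence pairs before taking the geometric mean, which is what makes BLEU cheap to rerun.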
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported.
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended for Bob, and non-repudiation of receipt evidences destined for Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with trusted third party (TTP). For the latter ones we discuss the evolution of the TTP's involvement and, among others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probability. Because there are no crossover rate and mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than the conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results over the conventional one and other methods.
Automated Demand Response From Home Energy Management System Under Dynamic Pricing and Power and Comfort Constraints This paper presents a comprehensive and general optimization-based home energy management controller, incorporating several classes of domestic appliances including deferrable, curtailable, thermal, and critical ones. The operations of the appliances are controlled in response to dynamic price signals to reduce the consumer's electricity bill whilst minimizing the daily volume of curtailed energy, and therefore considering the user's comfort level. To avoid shifting a large portion of consumer demand toward the least price intervals, which could create network issues due to loss of diversity, higher prices are applied when the consumer's demand goes beyond a prescribed power threshold. The arising mixed integer nonlinear optimization problem is solved in an iterative manner rolling throughout the day to follow the changes in the anticipated price signals and the variations in the controller inputs while information is updated. The results from different realistic case studies show the effectiveness of the proposed controller in minimizing the household's daily electricity bill while preserving comfort level, as well as preventing creation of new least-price peaks.
Scalable and Privacy-Preserving Data Sharing Based on Blockchain. With the development of network technology and cloud computing, data sharing is becoming increasingly popular, and many scholars have conducted in-depth research to promote its flourish. As the scale of data sharing expands, its privacy protection has become a hot issue in research. Moreover, in data sharing, the data is usually maintained in multiple parties, which brings new challenges to protect the privacy of these multi-party data. In this paper, we propose a trusted data sharing scheme using blockchain. We use blockchain to prevent the shared data from being tampered, and use the Paillier cryptosystem to realize the confidentiality of the shared data. In the proposed scheme, the shared data can be traded, and the transaction information is protected by using the (p, t)-threshold Paillier cryptosystem. We conduct experiments in cloud storage scenarios and the experimental results demonstrate the efficiency and effectiveness of the proposed scheme.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Detection and Estimation of Equatorial Spread F Scintillations Using Synthetic Aperture Radar. A significant amount of the data acquired by sun-synchronous space-borne low-frequency synthetic aperture radars (SARs) through the postsunset equatorial sector are distorted by the ionospheric scintillations due to the presence of plasma irregularities and their zonal and vertical drift. In the focused SAR images, the distortions due to the postsunset equatorial ionospheric scintillations appear ...
Prediction, Detection, and Correction of Faraday Rotation in Full-Polarimetric L-Band SAR Data With the synthetic aperture radar (SAR) sensor PALSAR onboard the Advanced Land Observing Satellite, a new full-polarimetric spaceborne L-band SAR instrument has been launched into orbit. At L-band, Faraday rotation (FR) can reach significant values, degrading the quality of the received SAR data. One-way rotations exceeding 25 deg are likely to happen during the lifetime of PALSAR, which will significantly reduce the accuracy of geophysical parameter recovery if uncorrected. Therefore, the estimation and correction of FR effects is a prerequisite for data quality and continuity. In this paper, methods for estimating FR are presented and analyzed. The first unambiguous detection of FR in SAR data is presented. A set of real data examples indicates the quality and sensitivity of FR estimation from PALSAR data, allowing the measurement of FR with high precision in areas where such measurements were previously inaccessible. In examples, we present the detection of kilometer-scale ionospheric disturbances, a spatial scale that is not detectable by ground-based GPS measurements. An FR prediction method is presented and validated. Approaches to correct for the estimated FR effects are applied, and their effectiveness is tested on real data.
Assessing Performance of L- and P-Band Polarimetric Interferometric SAR Data in Estimating Boreal Forest Above-Ground Biomass. Biomass estimation performance using polarimetric interferometric synthetic aperture radar (PolInSAR) data is evaluated at L- and P-band frequencies over boreal forest. PolInSAR data are decomposed into ground and volume contributions, retrieving vertical forest structure and polarimetric layer characteristics. The sensitivity of biomass to the obtained parameters is analyzed, and a set of these p...
Multi-Subaperture PGA for SAR Autofocusing For spotlight mode synthetic aperture radar (SAR) autofocusing, the traditional full-aperture phase gradient autofocus (PGA) algorithm might suffer from performance degradation in the presence of significant high-order phase error and residual range cell migration (RCM), which tend to occur when the coherent processing interval (CPI) is long. Meanwhile, PGA does not perform satisfactorily when applied directly on the stripmap data. To address these shortcomings, we present a multi-subaperture PGA algorithm, which takes advantage of the map drift (MD) technique. It smoothly incorporates the estimation of residual RCM and combines the subaperture phase error (SPE) estimated by PGA in a very precise manner. The methodology and accuracy of PGA-MD are investigated in detail. Experimental results indicate the effectiveness of PGA-MD in both the spotlight and the stripmap modes.
A Semi-Open Loop GNSS Carrier Tracking Algorithm for Monitoring Strong Equatorial Scintillation. Strong equatorial ionospheric scintillation of radio signals is often associated with simultaneous deep amplitude fading and rapid random carrier phase fluctuations. It poses a challenge for satellite navigation receiver carrier phase tracking loop operation. This paper presents a semi-open loop algorithm that utilizes the known position of a stationary receiver and satellite orbit information to ...
Measurement of the Ionospheric Scintillation Parameter $C_{k}L$ From SAR Images of Clutter. Space-based synthetic aperture radar (SAR) can be affected by the ionosphere, particularly at L-band and below. A technique is described that exploits the reduction in SAR image contrast to measure the strength of ionospheric turbulence parameter CkL. The theory describing the effect of the ionosphere on the SAR point spread function (PSF) and the consequent effect on clutter is reviewed and exten...
Probabilistic encryption A new probabilistic model of data encryption is introduced. For this model, under suitable complexity assumptions, it is proved that extracting any information about the cleartext from the cyphertext is hard on the average for an adversary with polynomially bounded computational resources. The proof holds for any message space with any probability distribution. The first implementation of this model is presented. The security of this implementation is proved under the intractability assumption of deciding Quadratic Residuosity modulo composite numbers whose factorization is unknown.
A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm Swarm intelligence is a research branch that models the population of interacting agents or swarms that are able to self-organize. An ant colony, a flock of birds or an immune system is a typical example of a swarm system. Bees' swarming around their hive is another example of swarm intelligence. Artificial Bee Colony (ABC) Algorithm is an optimization algorithm based on the intelligent behaviour of honey bee swarm. In this work, ABC algorithm is used for optimizing multivariable functions and the results produced by ABC, Genetic Algorithm (GA), Particle Swarm Algorithm (PSO) and Particle Swarm Inspired Evolutionary Algorithm (PS-EA) have been compared. The results showed that ABC outperforms the other algorithms.
Toward Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions The ever-increasing number of resource-constrained machine-type communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as enhanced mobile broadband (eMBB), massive machine type communications (mMTCs), and ultra-reliable and low latency communications (URLLCs), the mMTC brings the unique technical challenge of supporting a huge number of MTC devices in cellular networks, which is the main focus of this paper. The related challenges include quality of service (QoS) provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead, and radio access network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. Along with the highlights on the inefficiency of the legacy random access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms in the emerging cellular IoT standards, namely, LTE-M and narrowband IoT (NB-IoT). Subsequently, we present a framework for the performance analysis of transmission scheduling with the QoS support along with the issues involved in short data packet transmission. 
Next, we provide a detailed overview of the existing and emerging solutions toward addressing the RAN congestion problem, and then identify potential advantages, challenges, and use cases for the applications of emerging machine learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of the low-complexity Q-learning approach in the mMTC scenario along with the recent advances toward enhancing its learning performance and convergence. Finally, we discuss some open research challenges and promising future research directions.
Priced Oblivious Transfer: How to Sell Digital Goods We consider the question of protecting the privacy of customers buying digital goods. More specifically, our goal is to allow a buyer to purchase digital goods from a vendor without letting the vendor learn what, and to the extent possible also when and how much, it is buying. We propose solutions which allow the buyer, after making an initial deposit, to engage in an unlimited number of priced oblivious-transfer protocols, satisfying the following requirements: As long as the buyer's balance contains sufficient funds, it will successfully retrieve the selected item and its balance will be debited by the item's price. However, the buyer should be unable to retrieve an item whose cost exceeds its remaining balance. The vendor should learn nothing except what must inevitably be learned, namely, the amount of interaction and the initial deposit amount (which imply upper bounds on the quantity and total price of all information obtained by the buyer). In particular, the vendor should be unable to learn what the buyer's current balance is or when it actually runs out of its funds. The technical tools we develop, in the process of solving this problem, seem to be of independent interest. In particular, we present the first one-round (two-pass) protocol for oblivious transfer that does not rely on the random oracle model (a very similar protocol was independently proposed by Naor and Pinkas [21]). This protocol is a special case of a more general "conditional disclosure" methodology, which extends a previous approach from [11] and adapts it to the 2-party setting.
Cognitive Cars: A New Frontier for ADAS Research This paper provides a survey of recent works on cognitive cars with a focus on driver-oriented intelligent vehicle motion control. The main objective here is to clarify the goals and guidelines for future development in the area of advanced driver-assistance systems (ADASs). Two major research directions are investigated and discussed in detail: 1) stimuli–decisions–actions, which focuses on the driver side, and 2) perception enhancement–action-suggestion–function-delegation, which emphasizes the ADAS side. This paper addresses the important achievements and major difficulties of each direction and discusses how to combine the two directions into a single integrated system to obtain safety and comfort while driving. Other related topics, including driver training and infrastructure design, are also studied.
Wireless Networks with RF Energy Harvesting: A Contemporary Survey Radio frequency (RF) energy transfer and harvesting techniques have recently become alternative methods to power the next generation wireless networks. As this emerging technology enables proactive energy replenishment of wireless devices, it is advantageous in supporting applications with quality of service (QoS) requirements. In this paper, we present a comprehensive literature review on the research progresses in wireless networks with RF energy harvesting capability, referred to as RF energy harvesting networks (RF-EHNs). First, we present an overview of the RF-EHNs including system architecture, RF energy harvesting techniques and existing applications. Then, we present the background in circuit design as well as the state-of-the-art circuitry implementations, and review the communication protocols specially designed for RF-EHNs. We also explore various key design issues in the development of RFEHNs according to the network types, i.e., single-hop networks, multi-antenna networks, relay networks, and cognitive radio networks. Finally, we envision some open research directions.
A multi-objective and PSO based energy efficient path design for mobile sink in wireless sensor networks. Data collection through mobile sink (MS) in wireless sensor networks (WSNs) is an effective solution to the hot-spot or sink-hole problem caused by multi-hop routing using the static sink. Rendezvous point (RP) based MS path design is a common and popular technique used in this regard. However, design of the optimal path is a well-known NP-hard problem. Therefore, an evolutionary approach like multi-objective particle swarm optimization (MOPSO) can prove to be a very promising and reasonable approach to solve the same. In this paper, we first present a Linear Programming formulation for the stated problem and then, propose an MOPSO-based algorithm to design an energy efficient trajectory for the MS. The algorithm is presented with an efficient particle encoding scheme and derivation of a proficient multi-objective fitness function. We use Pareto dominance in MOPSO for obtaining both local and global best guides for each particle. We carry out rigorous simulation experiments on the proposed algorithm and compare the results with two existing algorithms namely, tree cluster based data gathering algorithm (TCBDGA) and energy aware sink relocation (EASR). The results demonstrate that the proposed algorithm performs better than both of them in terms of various performance metrics. The results are also validated through the statistical test, analysis of variance (ANOVA) and its least significant difference (LSD) post hoc analysis.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
Neural Network-Based Event-Triggered State Feedback Control of Nonlinear Continuous-Time Systems. This paper presents a novel approximation-based event-triggered control of multi-input multi-output uncertain nonlinear continuous-time systems in affine form. The controller is approximated using a linearly parameterized neural network (NN) in the context of event-based sampling. After revisiting the NN approximation property in the context of event-based sampling, an event-triggered condition is proposed using the Lyapunov technique to reduce the network resource utilization and to generate the required number of events for the NN approximation. In addition, a novel weight update law for aperiodic tuning of the NN weights at triggered instants is proposed to relax the knowledge of complete system dynamics and to reduce the computation when compared with the traditional NN-based control. Nonetheless, a nonzero positive lower bound for the inter-event times is guaranteed to avoid the accumulation of events or Zeno behavior. For analyzing the stability, the event-triggered system is modeled as a nonlinear impulsive dynamical system and the Lyapunov technique is used to show local ultimate boundedness of all signals. Furthermore, in order to overcome the unnecessary triggered events when the system states are inside the ultimate bound, a dead-zone operator is used to reset the event-trigger errors to zero. Finally, the analytical design is substantiated with numerical results.
A Stability Guaranteed Robust Fault Tolerant Control Design for Vehicle Suspension Systems Subject to Actuator Faults and Disturbances A fault tolerant control approach based on a novel sliding mode method is proposed in this brief for a full vehicle suspension system. The proposed approach aims at retaining system stability in the presence of model uncertainties, actuator faults, parameter variations, and neglected nonlinear effects. The design is based on a realistic model that includes road uncertainties, disturbances, and faults. The design begins by dividing the system into two subsystems: a first subsystem with 3 degrees-of-freedom (DoF) representing the chassis and a second subsystem with 4 DoF representing the wheels, electrohydraulic actuators, and effect of road disturbances and actuator faults. Based on the analysis of the system performance, the first subsystem is considered as the internal dynamic of the whole system for control design purposes. The proposed algorithm is implemented in two stages to provide a stability guaranteed approach. A robust optimal sliding mode controller is designed first for the uncertain internal dynamics of the system to mitigate the effect of road disturbances. Then, a robust sliding mode controller is proposed to handle actuator faults and ensure overall stability of the whole system. The proposed approach has been tested on a 7-DoF full car model subject to uncertainties and actuator faults. The results are compared with the ones obtained using approach. The proposed approach optimizes riding comfort and road holding ability even in the presence of actuator faults and parameter variations.
Neural Learning Control of Strict-Feedback Systems Using Disturbance Observer. This paper studies the compound learning control of disturbed uncertain strict-feedback systems. The design is using the dynamic surface control equipped with a novel learning scheme. This paper integrates the recently developed online recorded data-based neural learning with the nonlinear disturbance observer (DOB) to achieve good "understanding" of the system uncertainty including unknown dynamics and time-varying disturbance. With the proposed method to show how the neural networks and DOB are cooperating with each other, one indicator is constructed and included into the update law. The closed-loop system stability analysis is rigorously presented. Different kinds of disturbances are considered in a third-order system as simulation examples and the results confirm that the proposed method achieves higher tracking accuracy while the compound estimation is much more precise. The design is applied to the flexible hypersonic flight dynamics and a better tracking performance is obtained.
Distributed Model-Based Event-Triggered Leader–Follower Consensus Control for Linear Continuous-Time Multiagent Systems This article investigates the event-triggered leader–follower consensus control problem for linear continuous-time multiagent systems (MASs). A new consensus protocol and an event-triggered communication (ETC) strategy based on a closed-loop state estimator are designed. The closed-looped state estimator renders us more accurate state estimations, therefore the triggering times can be decreased wh...
Adaptive Fuzzy Backstepping-Based Formation Control of Unmanned Surface Vehicles With Unknown Model Nonlinearity and Actuator Saturation In this article, the formation control of unmanned surface vehicles (USVs) is addressed considering actuator saturation and unknown nonlinear items. The algorithm can be divided into two parts, steering the leader USV to trace along the desired path and steering the follower USV to follow the leader in the desired formation. In the proposed formation control framework, a virtual USV is first constructed so that the leader USV can be guided to the desired path. To solve the input constraint problem, an auxiliary is introduced, and the adaptive fuzzy method is used to estimate unknown nonlinear items in the USV. To maintain the desired formation, the desired velocities of follower USVs are deduced using geometry and Lyapunov stability theories; the stability of the closed-loop system is also proved. Finally, the effectiveness of the proposed approach is demonstrated by the simulation and experimental results.
Asymptotically Stable Adaptive-Optimal Control Algorithm With Saturating Actuators and Relaxed Persistence of Excitation. This paper proposes a control algorithm based on adaptive dynamic programming to solve the infinite-horizon optimal control problem for known deterministic nonlinear systems with saturating actuators and nonquadratic cost functionals. The algorithm is based on an actor/critic framework, where a critic neural network (NN) is used to learn the optimal cost, and an actor NN is used to learn the optim...
A novel actor-critic-identifier architecture for approximate optimal control of uncertain nonlinear systems An online adaptive reinforcement learning-based solution is developed for the infinite-horizon optimal control problem for continuous-time uncertain nonlinear systems. A novel actor–critic–identifier (ACI) is proposed to approximate the Hamilton–Jacobi–Bellman equation using three neural network (NN) structures—actor and critic NNs approximate the optimal control and the optimal value function, respectively, and a robust dynamic neural network identifier asymptotically approximates the uncertain system dynamics. An advantage of using the ACI architecture is that learning by the actor, critic, and identifier is continuous and simultaneous, without requiring knowledge of system drift dynamics. Convergence of the algorithm is analyzed using Lyapunov-based adaptive control methods. A persistence of excitation condition is required to guarantee exponential convergence to a bounded region in the neighborhood of the optimal control and uniformly ultimately bounded (UUB) stability of the closed-loop system. Simulation results demonstrate the performance of the actor–critic–identifier method for approximate optimal control.
Distributed Control of Spatially Reversible Interconnected Systems with Boundary Conditions We present a class of spatially interconnected systems with boundary conditions that have close links with their spatially invariant extensions. In particular, well-posedness, stability, and performance of the extension imply the same characteristics for the actual, finite extent system. In turn, existing synthesis methods for control of spatially invariant systems can be extended to this class. The relation between the two kinds of systems is proved using ideas based on the "method of images" of partial differential equations theory and uses symmetry properties of the interconnection as a key tool.
Event-Triggered Finite-Time Control for Networked Switched Linear Systems With Asynchronous Switching. This paper is concerned with the event-triggered finite-time control problem for networked switched linear systems by using an asynchronous switching scheme. Not only the problem of finite-time boundedness, but also the problem of input-output finite-time stability is considered in this paper. Compared with the existing event-triggered results of the switched systems, a new type of event-triggered...
A Game-Theoretical Approach for User Allocation in Edge Computing Environment Edge Computing provides mobile and Internet-of-Things (IoT) app vendors with a new distributed computing paradigm which allows an app vendor to deploy its app at hired edge servers distributed near app users at the edge of the cloud. This way, app users can be allocated to hired edge servers nearby to minimize network latency and energy consumption. A cost-effective edge user allocation (EUA) requires maximum app users to be served with minimum overall system cost. Finding a centralized optimal solution to this EUA problem is NP-hard. Thus, we propose EUAGame, a game-theoretic approach that formulates the EUA problem as a potential game. We analyze the game and show that it admits a Nash equilibrium. Then, we design a novel decentralized algorithm for finding a Nash equilibrium in the game as a solution to the EUA problem. The performance of this algorithm is theoretically analyzed and experimentally evaluated. The results show that the EUA problem can be solved effectively and efficiently.
A new CAD mesh segmentation method, based on curvature tensor analysis This paper presents a new and efficient algorithm for the decomposition of 3D arbitrary triangle meshes and particularly optimized triangulated CAD meshes. The algorithm is based on the curvature tensor field analysis and presents two distinct complementary steps: a region based segmentation, which is an improvement of that presented by Lavoue et al. [Lavoue G, Dupont F, Baskurt A. Constant curvature region decomposition of 3D-meshes by a mixed approach vertex-triangle, J WSCG 2004;12(2):245-52] and which decomposes the object into near constant curvature patches, and a boundary rectification based on curvature tensor directions, which corrects boundaries by suppressing their artefacts or discontinuities. Experiments conducted on various models including both CAD and natural objects, show satisfactory results. Resulting segmented patches, by virtue of their properties (homogeneous curvature, clean boundaries) are particularly adapted to computer graphics tasks like parametric or subdivision surface fitting in an adaptive compression objective.
Grad-CAM++: Generalized Gradient-Based Visual Explanations for Deep Convolutional Networks Over the last decade, Convolutional Neural Network (CNN) models have been highly successful in solving complex vision based problems. However, deep models are perceived as "black box" methods considering the lack of understanding of their internal functioning. There has been a significant recent interest to develop explainable deep learning models, and this paper is an effort in this direction. Building on a recently proposed method called Grad-CAM, we propose Grad-CAM++ to provide better visual explanations of CNN model predictions (when compared to Grad-CAM), in terms of better localization of objects as well as explaining occurrences of multiple objects of a class in a single image. We provide a mathematical explanation for the proposed method, Grad-CAM++, which uses a weighted combination of the positive partial derivatives of the last convolutional layer feature maps with respect to a specific class score as weights to generate a visual explanation for the class label under consideration. Our extensive experiments and evaluations, both subjective and objective, on standard datasets showed that Grad-CAM++ indeed provides better visual explanations for a given CNN architecture when compared to Grad-CAM.
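The weighting idea behind Grad-CAM (which Grad-CAM++ refines with higher-order derivative terms) can be sketched without a deep-learning framework. The sketch below is a simplified, pure-Python illustration of the base Grad-CAM computation on toy feature maps, not the Grad-CAM++ weighting itself and not the authors' code:

```python
def gradcam_map(feature_maps, gradients):
    """Grad-CAM-style saliency: weight each feature map by the global average
    of its class-score gradients, sum the weighted maps, then apply ReLU.
    feature_maps and gradients are lists of equal-sized 2-D maps (lists of lists)."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, grad in zip(feature_maps, gradients):
        # alpha: global-average-pooled gradient, one scalar weight per channel
        alpha = sum(sum(row) for row in grad) / (h * w)
        for i in range(h):
            for j in range(w):
                cam[i][j] += alpha * fmap[i][j]
    # ReLU keeps only features with positive influence on the class score
    return [[max(0.0, v) for v in row] for row in cam]
```

Grad-CAM++ replaces the uniform average in `alpha` with pixel-wise weights built from positive second and third partial derivatives, which is what improves multi-instance localization.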
Adaptive generation of challenging scenarios for testing and evaluation of autonomous vehicles. • A novel framework for generating test cases for autonomous vehicles is proposed. • Adaptive sampling significantly reduces the number of simulations required. • Adjacency clustering identifies performance boundaries of the system. • Approach successfully applied to complex unmanned underwater vehicle missions.
Learning Feature Recovery Transformer for Occluded Person Re-Identification One major issue that challenges person re-identification (Re-ID) is the ubiquitous occlusion over the captured persons. There are two main challenges for the occluded person Re-ID problem, i.e., the interference of noise during feature matching and the loss of pedestrian information brought by the occlusions. In this paper, we propose a new approach called Feature Recovery Transformer (FRT) to address the two challenges simultaneously, which mainly consists of visibility graph matching and feature recovery transformer. To reduce the interference of the noise during feature matching, we mainly focus on visible regions that appear in both images and develop a visibility graph to calculate the similarity. In terms of the second challenge, based on the developed graph similarity, for each query image, we propose a recovery transformer that exploits the feature sets of its k-nearest neighbors in the gallery to recover the complete features. Extensive experiments across different person Re-ID datasets, including occluded, partial and holistic datasets, demonstrate the effectiveness of FRT. Specifically, FRT significantly outperforms state-of-the-art results by at least 6.2% Rank-1 accuracy and 7.2% mAP scores on the challenging Occluded-Duke dataset.
1.039754
0.033333
0.033333
0.033333
0.033333
0.02025
0.009579
0.003417
0.000758
0
0
0
0
0
Harmony search algorithm for solving Sudoku Harmony search (HS) algorithm was applied to solving Sudoku puzzle. The HS is an evolutionary algorithm which mimics musicians' behaviors such as random play, memory-based play, and pitch-adjusted play when they perform improvisation. Sudoku puzzles in this study were formulated as an optimization problem with number-uniqueness penalties. HS could successfully solve the optimization problem after 285 function evaluations, taking 9 seconds. Also, sensitivity analysis of HS parameters was performed to obtain a better idea of algorithm parameter values.
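A concrete reading of the "number-uniqueness penalties": a candidate 9x9 grid is scored by how many digits are missing from each row, column, and 3x3 block, so a valid solution scores zero. The function below is a hypothetical sketch of such a penalty, not necessarily the paper's exact objective:

```python
def uniqueness_penalty(grid):
    """Number-uniqueness penalty for a 9x9 Sudoku candidate: counts missing
    digits (equivalently, duplicates) in every row, column, and 3x3 block.
    A valid solution scores 0."""
    def miss(cells):
        return 9 - len(set(cells))  # duplicates reduce the set size

    p = 0
    for i in range(9):
        p += miss(grid[i])                           # row i
        p += miss([grid[r][i] for r in range(9)])    # column i
    for br in range(0, 9, 3):                        # 3x3 blocks
        for bc in range(0, 9, 3):
            p += miss([grid[br + r][bc + c]
                       for r in range(3) for c in range(3)])
    return p
```

Harmony search then improvises new grids (randomly, from memory, or pitch-adjusted) and keeps those that lower this penalty until it reaches zero.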
Mobile cloud computing: A survey Despite increasing usage of mobile computing, exploiting its full potential is difficult due to its inherent problems such as resource scarcity, frequent disconnections, and mobility. Mobile cloud computing can address these problems by executing mobile applications on resource providers external to the mobile device. In this paper, we provide an extensive survey of mobile cloud computing research, while highlighting the specific concerns in mobile cloud computing. We present a taxonomy based on the key issues in this area, and discuss the different approaches taken to tackle these issues. We conclude the paper with a critical analysis of challenges that have not yet been fully met, and highlight directions for future work.
Particle swarm optimization with varying bounds Particle Swarm Optimization (PSO) is a stochastic approach that was originally developed to simulate the behavior of birds and was successfully applied to many applications. In the field of evolutionary algorithms, researchers attempted many techniques in order to build probabilistic models that capture the search space properties and use these models to generate new individuals. Two approaches have been recently introduced to incorporate building a probabilistic model of the promising regions in the search space into PSO. This work proposes a new method for building this model into PSO, which borrows concepts from population-based incremental learning (PBIL). The proposed method is implemented and compared to existing approaches using a suite of well-known benchmark optimization functions.
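As background for the model-building variants discussed, the canonical PSO velocity/position update that they extend looks as follows. This is a minimal textbook sketch with simple bound clamping, not the proposed PBIL-based method; parameter values are common defaults, not the paper's:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Canonical PSO: inertia + attraction toward personal and global bests."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_f = [f(xi) for xi in x]
    g = pbest[min(range(n_particles), key=pbest_f.__getitem__)][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])   # cognitive pull
                           + c2 * r2 * (g[d] - x[i][d]))         # social pull
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))    # clamp to bounds
            fx = f(x[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], fx
                if fx < f(g):
                    g = x[i][:]
    return g, f(g)
```

The PBIL-style extension replaces part of this sampling with draws from a learned probabilistic model of promising regions.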
Evolutionary Fuzzy Control and Navigation for Two Wheeled Robots Cooperatively Carrying an Object in Unknown Environments This paper presents a method that allows two wheeled, mobile robots to navigate unknown environments while cooperatively carrying an object. In the navigation method, a leader robot and a follower robot cooperatively perform either obstacle boundary following (OBF) or target seeking (TS) to reach a destination. The two robots are controlled by fuzzy controllers (FC) whose rules are learned through an adaptive fusion of continuous ant colony optimization and particle swarm optimization (AF-CACPSO), which avoids the time-consuming task of manually designing the controllers. The AF-CACPSO-based evolutionary fuzzy control approach is first applied to the control of a single robot to perform OBF. The learning approach is then applied to achieve cooperative OBF with two robots, where an auxiliary FC designed with the AF-CACPSO is used to control the follower robot. For cooperative TS, a rule for coordination of the two robots is developed. To navigate cooperatively, a cooperative behavior supervisor is introduced to select between cooperative OBF and cooperative TS. The performance of the AF-CACPSO is verified through comparisons with various population-based optimization algorithms for the OBF learning problem. Simulations and experiments verify the effectiveness of the approach for cooperative navigation of two robots.
Design of robust fuzzy fault detection filter for polynomial fuzzy systems with new finite frequency specifications This paper investigates the problem of fault detection filter design for discrete-time polynomial fuzzy systems with faults and unknown disturbances. The frequency ranges of the faults and the disturbances are assumed to be known beforehand and to reside in low, middle or high frequency ranges. Thus, the proposed filter is designed in the finite frequency range to overcome the conservatism generated by those designed in the full frequency domain. Being of polynomial fuzzy structure, the proposed filter combines the H−/H∞ performances in order to ensure the best robustness to the disturbance and the best sensitivity to the fault. Design conditions are derived in Sum Of Squares formulations that can be easily solved via available software tools. Two illustrative examples are introduced to demonstrate the effectiveness of the proposed method and a comparative study with LMI method is also provided.
Evolutionary Wall-Following Hexapod Robot Using Advanced Multiobjective Continuous Ant Colony Optimized Fuzzy Controller. This paper proposes an evolutionary wall-following hexapod robot, where a new multiobjective evolutionary fuzzy control approach is proposed to control both walking orientation and speed of a hexapod robot for a wall-following task. According to the measurements of four distance sensors, a fuzzy controller (FC) controls the walking speed of the robot by changing the common swing angles of its six ...
Fuzzy Logic in Dynamic Parameter Adaptation of Harmony Search Optimization for Benchmark Functions and Fuzzy Controllers. Nowadays the use of fuzzy logic has been increasing in popularity, and this is mainly due to the inference mechanism that allows simulating human reasoning in knowledge-based systems. The main contribution of this work is using the concepts of fuzzy logic in a method for dynamically adapting the main parameters of the harmony search algorithm during execution. Dynamic adaptation of parameters in metaheuristics has been shown to improve performance and accuracy in a wide range of applications. For this reason, we propose an approach for fuzzy adaptation of parameters in harmony search. Two case studies are considered for testing the proposed approach: the optimization of mathematical functions (unimodal, multimodal, hybrid, and composite) and a control problem, both without noise and when noise is considered. A statistical comparison between the harmony search algorithm and the fuzzy harmony search algorithm is presented to verify the advantages of the proposed approach.
Finite-Time Input-to-State Stability and Applications to Finite-Time Control Design This paper extends the well-known concept, Sontag's input-to-state stability (ISS), to finite-time control problems. In other words, a new concept, finite-time input-to-state stability (FTISS), is proposed and then is applied to both the analysis of finite-time stability and the design of finite-time stabilizing feedback laws of control systems. With finite-time stability, nonsmoothness has to be considered, and serious technical challenges arise in the design of finite-time controllers and the stability analysis of the closed-loop system. It is found that FTISS plays an important role as the conventional ISS in the context of asymptotic stability analysis and smooth feedback stabilization. Moreover, a robust adaptive controller is proposed to handle nonlinear systems with parametric and dynamic uncertainties by virtue of FTISS and related arguments.
Adam: A Method for Stochastic Optimization. We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.
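The adaptive moment estimates described above can be sketched in a few lines. This is a minimal, list-based illustration of the update rule with the paper's default hyper-parameters, not the authors' implementation:

```python
import math

def adam_step(params, grads, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update over parallel lists of parameters and gradients.
    m, v are the running first/second moment estimates; t is the step count (1-based)."""
    new_params, new_m, new_v = [], [], []
    for p, g, mi, vi in zip(params, grads, m, v):
        mi = b1 * mi + (1 - b1) * g          # first moment (mean of gradients)
        vi = b2 * vi + (1 - b2) * g * g      # second moment (uncentered variance)
        m_hat = mi / (1 - b1 ** t)           # bias correction for zero init
        v_hat = vi / (1 - b2 ** t)
        new_params.append(p - lr * m_hat / (math.sqrt(v_hat) + eps))
        new_m.append(mi)
        new_v.append(vi)
    return new_params, new_m, new_v
```

Because the step is roughly `lr * sign(gradient)` early on, the method is invariant to diagonal rescaling of the gradients, as the abstract notes.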
Multiple Lyapunov functions and other analysis tools for switched and hybrid systems In this paper, we introduce some analysis tools for switched and hybrid systems. We first present work on stability analysis. We introduce multiple Lyapunov functions as a tool for analyzing Lyapunov stability and use iterated function systems (IFS) theory as a tool for Lagrange stability. We also discuss the case where the switched systems are indexed by an arbitrary compact set. Finally, we extend Bendixson's theorem to the case of Lipschitz continuous vector fields, allowing limit cycle analysis of a class of "continuous switched" systems.
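The multiple-Lyapunov-function idea can be stated compactly. The following is a sketch of the usual style of sufficient condition, not a verbatim theorem from the paper: for each subsystem $i$ with Lyapunov-like function $V_i$,

\[
\dot V_{i}\bigl(x(t)\bigr) \le 0 \quad \text{whenever subsystem } i \text{ is active},
\]
\[
V_{i}\bigl(x(t_{i,k+1})\bigr) \le V_{i}\bigl(x(t_{i,k})\bigr),
\]

where $t_{i,k}$ denotes the $k$-th time subsystem $i$ is switched in. Each $V_i$ may grow while its subsystem is inactive; stability follows because the sequence of values of each $V_i$ at its own activation times is non-increasing.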
Learning to Predict Driver Route and Destination Intent For many people, driving is a routine activity where people drive to the same destinations using the same routes on a regular basis. Many drivers, for example, will drive to and from work along a small set of routes, at about the same time every day of the working week. Similarly, although a person may shop on different days or at different times, they will often visit the same grocery store(s). In this paper, we present a novel approach to predicting driver intent that exploits the predictable nature of everyday driving. Our approach predicts a driver's intended route and destination through the use of a probabilistic model learned from observation of their driving habits. We show that by using a low-cost GPS sensor and a map database, it is possible to build a hidden Markov model (HMM) of the routes and destinations used by the driver. Furthermore, we show that this model can be used to make accurate predictions of the driver's destination and route through on-line observation of their GPS position during the trip. We present a thorough evaluation of our approach using a corpus of almost a month of real, everyday driving. Our results demonstrate the effectiveness of the approach, achieving approximately 98% accuracy in most cases. Such high performance suggests that the method can be harnessed for improved safety monitoring, route planning taking into account traffic density, and better trip duration prediction.
Software-Defined Networking: A Comprehensive Survey The Internet has led to the creation of a digital society, where (almost) everything is connected and is accessible from anywhere. However, despite their widespread adoption, traditional IP networks are complex and very hard to manage. It is both difficult to configure the network according to predefined policies, and to reconfigure it to respond to faults, load, and changes. To make matters even more difficult, current networks are also vertically integrated: the control and data planes are bundled together. Software-defined networking (SDN) is an emerging paradigm that promises to change this state of affairs, by breaking vertical integration, separating the network's control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network. The separation of concerns, introduced between the definition of network policies, their implementation in switching hardware, and the forwarding of traffic, is key to the desired flexibility: by breaking the network control problem into tractable pieces, SDN makes it easier to create and introduce new abstractions in networking, simplifying network management and facilitating network evolution. In this paper, we present a comprehensive survey on SDN. We start by introducing the motivation for SDN, explain its main concepts and how it differs from traditional networking, its roots, and the standardization activities regarding this novel paradigm. Next, we present the key building blocks of an SDN infrastructure using a bottom-up, layered approach. We provide an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications. We also look at cross-layer problems such as debugging and troubleshooting. 
In an effort to anticipate the future evolution of this new paradigm, we discuss the main ongoing research efforts and challenges of SDN. In particular, we address the design of switches and control platforms - with a focus on aspects such as resiliency, scalability, performance, security, and dependability - as well as new opportunities for carrier transport networks and cloud providers. Last but not least, we analyze the position of SDN as a key enabler of a software-defined environment.
Deep Learning in Mobile and Wireless Networking: A Survey. The rapid uptake of mobile devices and the rising popularity of mobile applications and services pose unprecedented demands on mobile and wireless networking infrastructure. Upcoming 5G systems are evolving to support exploding mobile traffic volumes, real-time extraction of fine-grained analytics, and agile management of network resources, so as to maximize user experience. Fulfilling these tasks is challenging, as mobile environments are increasingly complex, heterogeneous, and evolving. One potential solution is to resort to advanced machine learning techniques, in order to help manage the rise in data volumes and algorithm-driven applications. The recent success of deep learning underpins new and powerful tools that tackle problems in this space. In this paper we bridge the gap between deep learning and mobile and wireless networking research, by presenting a comprehensive survey of the crossovers between the two areas. We first briefly introduce essential background and state-of-the-art in deep learning techniques with potential applications to networking. We then discuss several techniques and platforms that facilitate the efficient deployment of deep learning onto mobile systems. Subsequently, we provide an encyclopedic review of mobile and wireless networking research based on deep learning, which we categorize by different domains. Drawing from our experience, we discuss how to tailor deep learning to mobile environments. We complete this survey by pinpointing current challenges and open future directions for research.
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
1.24
0.24
0.24
0.24
0.24
0.24
0.12
0.013333
0
0
0
0
0
0
Wireless Federated Learning with Local Differential Privacy In this paper, we study the problem of federated learning (FL) over a wireless channel, modeled by a Gaussian multiple access channel (MAC), subject to local differential privacy (LDP) constraints. We show that the superposition nature of the wireless channel provides a dual benefit of bandwidth efficient gradient aggregation, in conjunction with strong LDP guarantees for the users. We propose a private wireless gradient aggregation scheme, which shows that when aggregating gradients from K users, the privacy leakage per user scales as $\mathcal{O}(1/\sqrt{K})$ compared to orthogonal transmission in which the privacy leakage scales as a constant. We also present analysis for the convergence rate of the proposed private FL aggregation algorithm and study the tradeoffs between wireless resources, convergence, and privacy.
A Hybrid Approach to Privacy-Preserving Federated Learning. Federated learning facilitates the collaborative training of models without the sharing of raw data. However, recent attacks demonstrate that simply maintaining data locality during training processes does not provide sufficient privacy guarantees. Rather, we need a federated learning system capable of preventing inference over both the messages exchanged during training and the final trained model while ensuring the resulting model also has acceptable predictive accuracy. Existing federated learning approaches either use secure multiparty computation (SMC) which is vulnerable to inference or differential privacy which can lead to low accuracy given a large number of parties with relatively small amounts of data each. In this paper, we present an alternative approach that utilizes both differential privacy and SMC to balance these trade-offs. Combining differential privacy with secure multiparty computation enables us to reduce the growth of noise injection as the number of parties increases without sacrificing privacy while maintaining a pre-defined rate of trust. Our system is therefore a scalable approach that protects against inference threats and produces models with high accuracy. Additionally, our system can be used to train a variety of machine learning models, which we validate with experimental results on 3 different machine learning algorithms. Our experiments demonstrate that our approach out-performs state of the art solutions.
Broadband Analog Aggregation for Low-Latency Federated Edge Learning To leverage rich data distributed at the network edge, a new machine-learning paradigm, called edge learning, has emerged where learning algorithms are deployed at the edge for providing intelligent services to mobile users. While computing speeds are advancing rapidly, the communication latency is becoming the bottleneck of fast edge learning. To address this issue, this work is focused on designing a low-latency multi-access scheme for edge learning. To this end, we consider a popular privacy-preserving framework, federated edge learning (FEEL), where a global AI-model at an edge-server is updated by aggregating (averaging) local models trained at edge devices. It is proposed that the updates simultaneously transmitted by devices over broadband channels should be analog aggregated “over-the-air” by exploiting the waveform-superposition property of a multi-access channel. Such broadband analog aggregation (BAA) results in dramatic communication-latency reduction compared with the conventional orthogonal access (i.e., OFDMA). In this work, the effects of BAA on learning performance are quantified targeting a single-cell random network. First, we derive two tradeoffs between communication-and-learning metrics, which are useful for network planning and optimization. The power control (“truncated channel inversion”) required for BAA results in a tradeoff between the update-reliability [as measured by the receive signal-to-noise ratio (SNR)] and the expected update-truncation ratio. Consider the scheduling of cell-interior devices to constrain path loss.
This gives rise to the other tradeoff between the receive SNR and fraction of data exploited in learning. Next, the latency-reduction ratio of the proposed BAA with respect to the traditional OFDMA scheme is proved to scale almost linearly with the device population. Experiments based on a neural network and a real dataset are conducted for corroborating the theoretical results.
Federated Learning via Over-the-Air Computation The stringent requirements for low-latency and privacy of the emerging high-stake applications with intelligent devices such as drones and smart vehicles make cloud computing inapplicable in these scenarios. Instead, edge machine learning becomes increasingly attractive for performing training and inference directly at network edges without sending data to a centralized data center. This stimulates a nascent field termed as federated learning for training a machine learning model on computation, storage, energy and bandwidth limited mobile devices in a distributed manner. To preserve data privacy and address the issues of unbalanced and non-IID data points across different devices, the federated averaging algorithm has been proposed for global model aggregation by computing the weighted average of locally updated model at each selected device. However, the limited communication bandwidth becomes the main bottleneck for aggregating the locally computed updates. We thus propose a novel over-the-air computation based approach for fast global model aggregation via exploring the superposition property of a wireless multiple-access channel. This is achieved by joint device selection and beamforming design, which is modeled as a sparse and low-rank optimization problem to support efficient algorithms design. To achieve this goal, we provide a difference-of-convex-functions (DC) representation for the sparse and low-rank function to enhance sparsity and accurately detect the fixed-rank constraint in the procedure of device selection.
A DC algorithm is further developed to solve the resulting DC program with global convergence guarantees. The algorithmic advantages and admirable performance of the proposed methodologies are demonstrated through extensive numerical results.
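The federated-averaging aggregation step that over-the-air schemes accelerate is, at its core, a sample-count-weighted mean of the local models. A minimal sketch with generic variable names, not the paper's notation:

```python
def fedavg(local_models, num_samples):
    """Federated averaging: weighted mean of local parameter vectors,
    with weights proportional to each device's number of local data points."""
    total = sum(num_samples)
    dim = len(local_models[0])
    global_model = [0.0] * dim
    for w, n in zip(local_models, num_samples):
        for d in range(dim):
            global_model[d] += (n / total) * w[d]
    return global_model
```

Over-the-air computation exploits the fact that this sum can be formed directly by the physical superposition of simultaneously transmitted analog signals, rather than by decoding each device's update separately.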
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
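The core of BLEU is a brevity penalty multiplied by a geometric mean of modified (count-clipped) n-gram precisions. The following is a simplified single-reference sketch up to bigrams, not the full multi-reference metric:

```python
from collections import Counter
import math

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Simplified single-reference BLEU: geometric mean of modified n-gram
    precisions, scaled by a brevity penalty for short candidates."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        # clip each candidate n-gram count by its count in the reference
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # floor avoids log(0)
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

The real metric uses up to 4-grams, multiple references, and corpus-level counts, but the clipping and brevity-penalty mechanics are as above.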
On the security of public key protocols Recently the use of public key encryption to provide secure network communication has received considerable attention. Such public key systems are usually effective against passive eavesdroppers, who merely tap the lines and try to decipher the message. It has been pointed out, however, that an improperly designed protocol could be vulnerable to an active saboteur, one who may impersonate another user or alter the message being transmitted. Several models are formulated in which the security of protocols can be discussed precisely. Algorithms and characterizations that can be used to determine protocol security in these models are given.
A Tutorial On Visual Servo Control This article provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines, our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed in detail. Since any visual servo system must be capable of tracking image features in a sequence of images, we also include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.
On Multi-Access Edge Computing: A Survey of the Emerging 5G Network Edge Cloud Architecture and Orchestration. Multi-access edge computing (MEC) is an emerging ecosystem, which aims at converging telecommunication and IT services, providing a cloud computing platform at the edge of the radio access network. MEC offers storage and computational resources at the edge, reducing latency for mobile end users and utilizing more efficiently the mobile backhaul and core networks. This paper introduces a survey on ...
Precomputing Oblivious Transfer Alice and Bob are too untrusting of computer scientists to let their privacy depend on unproven assumptions such as the existence of one-way functions. Firm believers in Schrödinger and Heisenberg, they might accept a quantum OT device, but IBM’s prototype is not yet portable. Instead, as part of their prenuptial agreement, they decide to visit IBM and perform some OT’s in advance, so that any later divorces, coin-flipping or other important interactions can be done more conveniently, without needing expensive third parties. Unfortunately, OT can’t be done in advance in a direct way, because even though Bob might not know what bit Alice will later send (even if she first sends a random bit and later corrects it, for example), he would already know which bit or bits he will receive. We address the problem of precomputing oblivious transfer and show that OT can be precomputed at a cost of Θ(κ) prior transfers (a tight bound). In contrast, we show that variants of OT, such as one-out-of-two OT, can be precomputed using only one prior transfer. Finally, we show that all variants can be reduced to a single precomputed one-out-of-two oblivious transfer.
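The one-prior-transfer result for one-out-of-two OT admits a short illustration in the style of Beaver's derandomization: one random OT is consumed offline, and the online phase needs only a choice bit and XOR masking. This is a toy sketch over 32-bit integers, not the paper's protocol verbatim:

```python
import secrets

def precompute():
    """Offline phase: one random OT. The sender gets two random pads; the
    receiver gets a random choice bit c and the corresponding pad r_c."""
    r0, r1 = secrets.randbits(32), secrets.randbits(32)
    c = secrets.randbits(1)
    return (r0, r1), (c, r1 if c else r0)

def online(sender_pads, receiver_state, m0, m1, b):
    """Online phase: receiver sends d = b XOR c; sender returns both messages
    masked so that only the pad the receiver holds opens m_b."""
    (r0, r1), (c, rc) = sender_pads, receiver_state
    d = b ^ c                        # the only bit sent receiver -> sender
    y0 = m0 ^ (r1 if d else r0)      # m0 masked with r_d
    y1 = m1 ^ (r0 if d else r1)      # m1 masked with r_{1-d}
    return (y1 if b else y0) ^ rc    # receiver unmasks its chosen message
```

Since the receiver holds only r_c, the other masked message stays hidden, and d leaks nothing about b because c is uniform.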
Paraphrasing for automatic evaluation This paper studies the impact of paraphrases on the accuracy of automatic evaluation. Given a reference sentence and a machine-generated sentence, we seek to find a paraphrase of the reference sentence that is closer in wording to the machine output than the original reference. We apply our paraphrasing method in the context of machine translation evaluation. Our experiments show that the use of a paraphrased synthetic reference refines the accuracy of automatic evaluation. We also found a strong connection between the quality of automatic paraphrases as judged by humans and their contribution to automatic evaluation.
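The selection step, finding the reference paraphrase closest in wording to the machine output, can be sketched roughly as follows; `difflib`'s token-overlap ratio stands in here for whatever similarity measure the paper actually uses:

```python
import difflib

def closest_paraphrase(hypothesis, paraphrases):
    """Pick the reference paraphrase whose wording best matches the system output."""
    return max(paraphrases,
               key=lambda p: difflib.SequenceMatcher(None,
                                                     hypothesis.split(),
                                                     p.split()).ratio())

refs = ["the cat sat on the mat",
        "a cat was sitting on the mat"]
hyp = "a cat was sitting on a mat"
best = closest_paraphrase(hyp, refs)
```

The chosen paraphrase would then be handed to a standard automatic metric in place of the original reference.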
Large System Analysis of Cooperative Multi-Cell Downlink Transmission via Regularized Channel Inversion with Imperfect CSIT In this paper, we analyze the ergodic sum-rate of a multi-cell downlink system with base station (BS) cooperation using regularized zero-forcing (RZF) precoding. Our model assumes that the channels between BSs and users have independent spatial correlations and imperfect channel state information at the transmitter (CSIT) is available. Our derivations are based on large dimensional random matrix theory (RMT) under the assumption that the numbers of antennas at the BS and users approach to infinity with some fixed ratios. In particular, a deterministic equivalent expression of the ergodic sum-rate is obtained and is instrumental in getting insight about the joint operations of BSs, which leads to an efficient method to find the asymptotic-optimal regularization parameter for the RZF. In another application, we use the deterministic channel rate to study the optimal feedback bit allocation among the BSs for maximizing the ergodic sum-rate, subject to a total number of feedback bits constraint. By inspecting the properties of the allocation, we further propose a scheme to greatly reduce the search space for optimization. Simulation results demonstrate that the ergodic sum-rates achievable by a subspace search provides comparable results to those by an exhaustive search under various typical settings.
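The RZF precoder itself, W ∝ H^H (H H^H + αI)^{-1}, can be sketched as below; the dimensions and the total-power normalization convention are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def rzf_precoder(H, alpha):
    """Regularized zero-forcing: W = H^H (H H^H + alpha*I)^(-1), power-normalized."""
    K = H.shape[0]                                # number of single-antenna users
    G = H @ H.conj().T + alpha * np.eye(K)        # regularized Gram matrix
    W = H.conj().T @ np.linalg.solve(G, np.eye(K))
    return W / np.linalg.norm(W)                  # unit total transmit power

rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))) / np.sqrt(2)
W = rzf_precoder(H, alpha=1e-6)
```

As α → 0 this degenerates to plain zero-forcing (the effective channel H W becomes a scaled identity), while larger α trades residual inter-user interference for robustness; picking the asymptotically optimal α is exactly what the deterministic equivalents in the paper enable.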
A Model Predictive Control Approach to Microgrid Operation Optimization. Microgrids are subsystems of the distribution grid, which comprises generation capacities, storage devices, and controllable loads, operating as a single controllable system either connected or isolated from the utility grid. In this paper, we present a study on applying a model predictive control approach to the problem of efficiently optimizing microgrid operations while satisfying a time-varying request and operation constraints. The overall problem is formulated using mixed-integer linear programming (MILP), which can be solved in an efficient way by using commercial solvers without resorting to complex heuristics or decompositions techniques. Then, the MILP formulation leads to significant improvements in solution quality and computational burden. A case study of a microgrid is employed to assess the performance of the online optimization-based control strategy and the simulation results are discussed. The method is applied to an experimental microgrid located in Athens, Greece. The experimental results show the feasibility and the effectiveness of the proposed approach.
Robust Sparse Linear Discriminant Analysis Linear discriminant analysis (LDA) is a very popular supervised feature extraction method and has been extended to different variants. However, classical LDA has the following problems: 1) The obtained discriminant projection does not have good interpretability for features. 2) LDA is sensitive to noise. 3) LDA is sensitive to the selection of number of projection directions. In this paper, a novel feature extraction method called robust sparse linear discriminant analysis (RSLDA) is proposed to solve the above problems. Specifically, RSLDA adaptively selects the most discriminative features for discriminant analysis by introducing the l2,1 norm. An orthogonal matrix and a sparse matrix are also simultaneously introduced to guarantee that the extracted features can hold the main energy of the original data and enhance the robustness to noise, and thus RSLDA has the potential to perform better than other discriminant methods. Extensive experiments on six databases demonstrate that the proposed method achieves the competitive performance compared with other state-of-the-art feature extraction methods. Moreover, the proposed method is robust to the noisy data.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Circular formation flight control for unmanned aerial vehicles with directed network and external disturbance This paper proposes a new distributed formation flight protocol for unmanned aerial vehicles (UAVs) to perform coordinated circular tracking around a set of circles on a target sphere. Different from the previous results limited to bidirectional networks and disturbance-free motions, this paper handles the circular formation flight control problem with both directed network and spatiotemporal disturbance with the knowledge of its upper bound. Distinguishing from the design of a common Lyapunov function for bidirectional cases, we separately design the control for the circular tracking subsystem and the formation keeping subsystem with the circular tracking error as input. Then the whole control system is regarded as a cascade connection of these two subsystems, which is proved to be stable by input-to-state stability (ISS) theory. For the purpose of countering the external disturbance, the backstepping technology is introduced to design the control inputs of each UAV pointing to north and down along the special sphere (say, the circular tracking control algorithm) with the help of the switching function. Meanwhile, the distributed linear consensus protocol integrated with another switching anti-interference term is developed to construct the control input of each UAV pointing to east along the special sphere (say, the formation keeping control law) for formation keeping. The validity of the proposed control law is proved both in rigorous theory and through numerical simulations.
Distributed Containment Control for Multiple Unknown Second-Order Nonlinear Systems With Application to Networked Lagrangian Systems. In this paper, we consider the distributed containment control problem for multiagent systems with unknown nonlinear dynamics. More specifically, we focus on multiple second-order nonlinear systems and networked Lagrangian systems. We first study the distributed containment control problem for multiple second-order nonlinear systems with multiple dynamic leaders in the presence of unknown nonlinearities and external disturbances under a general directed graph that characterizes the interaction among the leaders and the followers. A distributed adaptive control algorithm with an adaptive gain design based on the approximation capability of neural networks is proposed. We present a necessary and sufficient condition on the directed graph such that the containment error can be reduced as small as desired. As a byproduct, the leaderless consensus problem is solved with asymptotical convergence. Because relative velocity measurements between neighbors are generally more difficult to obtain than relative position measurements, we then propose a distributed containment control algorithm without using neighbors' velocity information. A two-step Lyapunov-based method is used to study the convergence of the closed-loop system. Next, we apply the ideas to deal with the containment control problem for networked unknown Lagrangian systems under a general directed graph. All the proposed algorithms are distributed and can be implemented using only local measurements in the absence of communication. Finally, simulation examples are provided to show the effectiveness of the proposed control algorithms.
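As a toy illustration of the containment objective only (not of the paper's adaptive neural-network algorithm), single-integrator followers under a simple Laplacian-style averaging protocol converge into the convex hull spanned by stationary leaders; the complete interaction graph below is an assumption made for brevity:

```python
import numpy as np

# two stationary leaders at 0 and 1; three single-integrator followers
leaders = np.array([0.0, 1.0])
x = np.array([5.0, -3.0, 2.0])        # follower states, starting outside the hull

# each follower moves toward its neighbours (here: both leaders and the
# other followers, i.e. a complete interaction graph for simplicity)
dt = 0.05
for _ in range(2000):
    xdot = np.zeros_like(x)
    for i in range(len(x)):
        xdot[i] = sum(l - x[i] for l in leaders) + sum(xj - x[i] for xj in x)
    x = x + dt * xdot
```

In this symmetric toy setting all followers settle at the leaders' midpoint, a point inside the convex hull; the paper's contribution is achieving the same containment property despite unknown nonlinear dynamics and disturbances.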
Fully distributed containment control of high-order multi-agent systems with nonlinear dynamics. In this paper, distributed containment control problems for high-order multi-agent systems with nonlinear dynamics are investigated under directed communication topology. The states of the leaders are only available to a subset of the followers and the inputs of the leaders are possibly nonzero and time varying. Distributed adaptive nonlinear protocol is proposed based only on the relative state information, under which the states of the followers converge to the dynamic convex hull spanned by those of the leaders. As the special case with only one dynamic leader, leader–follower consensus problem is also solved with the proposed protocol. The adaptive protocol here is independent of the eigenvalues of the Laplacian matrix, which means the protocol can be implemented by each agent in a fully distributed fashion. A simulation example is provided to illustrate the theoretical results.
Output Containment Control of Linear Heterogeneous Multi-Agent Systems Using Internal Model Principle. This paper studies the output containment control of linear heterogeneous multi-agent systems, where the system dynamics and even the state dimensions can generally be different. Since the states can have different dimensions, standard results from state containment control do not apply. Therefore, the control objective is to guarantee the convergence of the output of each follower to the dynamic ...
Finite-Time Consensus Tracking Neural Network FTC of Multi-Agent Systems The finite-time consensus fault-tolerant control (FTC) tracking problem is studied for the nonlinear multi-agent systems (MASs) in the nonstrict feedback form. The MASs are subject to unknown symmetric output dead zones, actuator bias and gain faults, and unknown control coefficients. According to the properties of the neural network (NN), the unstructured uncertainties problem is solved. The Nussbaum function is used to address the output dead zones and unknown control directions problems. By introducing an arbitrarily small positive number, the “singularity” problem caused by combining the finite-time control and backstepping design is solved. According to the backstepping design and Lyapunov stability theory, a finite-time adaptive NN FTC controller is obtained, which guarantees that the tracking error converges to a small neighborhood of zero in a finite time, and all signals in the closed-loop system are bounded. Finally, the effectiveness of the proposed method is illustrated via a physical example.
Reinforcement Learning-based control using Q-learning and gravitational search algorithm with experimental validation on a nonlinear servo system •A combination of Deep Q-Learning algorithm and metaheuristic GSA is offered.•GSA initializes the weights and the biases of the neural networks.•A comparison with classical random, metaheuristic PSO and GWO is carried out.•The validation is done on real-time nonlinear servo system position control.•The drawbacks of randomly initialized neural networks are mitigated.
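A minimal tabular Q-learning sketch on a toy chain MDP; this omits the paper's GSA-based initialization of the network weights and the real servo-system plant, and the hyperparameters are illustrative only:

```python
import random

random.seed(0)

# 5-state chain MDP: action 0 steps left, action 1 steps right,
# reward 1 only on reaching the rightmost state
N_STATES, ACTIONS = 5, (0, 1)
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.1, 0.9, 0.3

for _ in range(300):
    s = 0
    for _ in range(2000):                 # cap the episode length
        if s == N_STATES - 1:
            break
        if random.random() < eps:
            a = random.choice(ACTIONS)    # explore
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])  # exploit
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # standard Q-learning temporal-difference update
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
```

The paper's point is that replacing the zero/random initialization above with weights found by a metaheuristic such as GSA speeds up and stabilizes learning on the real plant.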
Distributed adaptive containment control of uncertain nonlinear multi-agent systems in strict-feedback form. This paper presents a distributed containment control approach for uncertain nonlinear strict-feedback systems with multiple dynamic leaders under a directed graph topology where the leaders are neighbors of only a subset of the followers. The strict-feedback followers with nonparametric uncertainties are considered and the local adaptive dynamic surface controller for each follower is designed using only neighbors’ information to guarantee that all followers converge to the dynamic convex hull spanned by the dynamic leaders where the derivatives of leader signals are not available to implement controllers, i.e., the position information of leaders is only required. The function approximation technique using neural networks is employed to estimate nonlinear uncertainty terms derived from the controller design procedure for the followers. It is shown that the containment control errors converge to an adjustable neighborhood of the origin.
Wireless sensor network survey A wireless sensor network (WSN) has important applications such as remote environmental monitoring and target tracking. This has been enabled by the availability, particularly in recent years, of sensors that are smaller, cheaper, and intelligent. These sensors are equipped with wireless interfaces with which they can communicate with one another to form a network. The design of a WSN depends significantly on the application, and it must consider factors such as the environment, the application's design objectives, cost, hardware, and system constraints. The goal of our survey is to present a comprehensive review of the recent literature since the publication of [I.F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, A survey on sensor networks, IEEE Communications Magazine, 2002]. Following a top-down approach, we give an overview of several new applications and then review the literature on various aspects of WSNs. We classify the problems into three different categories: (1) internal platform and underlying operating system, (2) communication protocol stack, and (3) network services, provisioning, and deployment. We review the major development in these three categories and outline new challenges.
Mobile Edge Computing: A Survey. Mobile edge computing (MEC) is an emergent architecture where cloud computing services are extended to the edge of networks leveraging mobile base stations. As a promising edge technology, it can be applied to mobile, wireless, and wireline scenarios, using software and hardware platforms, located at the network edge in the vicinity of end-users. MEC provides seamless integration of multiple appli...
Computer intrusion detection through EWMA for autocorrelated and uncorrelated data Reliability and quality of service from information systems has been threatened by cyber intrusions. To protect information systems from intrusions and thus assure reliability and quality of service, it is highly desirable to develop techniques that detect intrusions. Many intrusions manifest in anomalous changes in intensity of events occurring in information systems. In this study, we apply, tes...
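The EWMA-based detection idea can be sketched as follows, assuming a known in-control mean and standard deviation; the alarm threshold uses the asymptotic EWMA variance sigma^2 * lambda / (2 - lambda), and the data values are made up for illustration:

```python
def ewma_monitor(xs, lam=0.2, L=3.0, mu0=0.0, sigma0=1.0):
    """Flag indices where the EWMA statistic leaves the +/- L-sigma control band."""
    z, alarms = mu0, []
    # asymptotic EWMA standard deviation: sigma0 * sqrt(lam / (2 - lam))
    band = L * sigma0 * (lam / (2.0 - lam)) ** 0.5
    for t, x in enumerate(xs):
        z = lam * x + (1.0 - lam) * z    # exponentially weighted moving average
        if abs(z - mu0) > band:
            alarms.append(t)
    return alarms

normal = [0.1, -0.2, 0.05, -0.1, 0.15, -0.05]
shifted = normal + [3.0, 3.2, 2.8, 3.1]   # intensity shift, e.g. an intrusion burst
```

Because the EWMA smooths over recent history, a sustained shift in event intensity trips the control limit within a few observations, while isolated noise does not.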
An evaluation of direct attacks using fake fingers generated from ISO templates This work reports a vulnerability evaluation of a highly competitive ISO matcher to direct attacks carried out with fake fingers generated from ISO templates. Experiments are carried out on a fingerprint database acquired in a real-life scenario and show that the evaluated system is highly vulnerable to the proposed attack scheme, granting access in over 75% of the attempts (for a high-security operating point). Thus, the study disproves the popular belief of minutiae templates non-reversibility and raises a key vulnerability issue in the use of non-encrypted standard templates. (This article is an extended version of Galbally et al., 2008, which was awarded with the IBM Best Student Paper Award in the track of Biometrics at ICPR 2008).
Collaborative Mobile Charging The limited battery capacity of sensor nodes has become one of the most critical impediments that stunt the deployment of wireless sensor networks (WSNs). Recent breakthroughs in wireless energy transfer and rechargeable lithium batteries provide a promising alternative to power WSNs: mobile vehicles/robots carrying high volume batteries serve as mobile chargers to periodically deliver energy to sensor nodes. In this paper, we consider how to schedule multiple mobile chargers to optimize energy usage effectiveness, such that every sensor will not run out of energy. We introduce a novel charging paradigm, collaborative mobile charging, where mobile chargers are allowed to intentionally transfer energy between themselves. To provide some intuitive insights into the problem structure, we first consider a scenario that satisfies three conditions, and propose a scheduling algorithm, PushWait, which is proven to be optimal and can cover a one-dimensional WSN of infinite length. Then, we remove the conditions one by one, investigating chargers' scheduling in a series of scenarios ranging from the most restricted one to a general 2D WSN. Through theoretical analysis and simulations, we demonstrate the advantages of the proposed algorithms in energy usage effectiveness and charging coverage.
Distributed Kalman consensus filter with event-triggered communication: Formulation and stability analysis. •The problem of distributed state estimation in sensor networks with event-triggered communication schedules on both sensor-to-estimator channel and estimator-to-estimator channel is studied.•An event-triggered KCF is designed by deriving the optimal Kalman gain matrix which minimizes the mean squared error.•A computational scalable form of the proposed filter is presented by some approximations.•An appropriate choice of the consensus gain matrix is provided to ensure the stochastic stability of the proposed filter.
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies have focused on the structure design and assistance force optimization of soft LLEs, little work has been conducted on the hardware circuits design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and introduce its hardware circuits design. A soft LLE for hip flexion assistance and a hardware circuits system with scalability were proposed. To assess the efficacy of the soft LLE, experimental tests evaluating the sensor data acquisition, force tracking performance, lower limb muscle activity and metabolic cost were conducted. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
Adaptive Neural Network Finite-Time Dynamic Surface Control for Nonlinear Systems This article addresses the problem of finite-time neural network (NN) adaptive dynamic surface control (DSC) design for a class of single-input single-output (SISO) nonlinear systems. Such designs adopt NNs to approximate unknown continuous system functions. To avoid the “explosion of complexity” problem, a novel nonlinear filter is developed in control design. Under the framework of adaptive back...
The Age of Information in Multihop Networks. Information updates in multihop networks such as Internet of Things (IoT) and intelligent transportation systems have received significant recent attention. In this paper, we minimize the age of a single information flow in interference-free multihop networks. When preemption is allowed and the packet transmission times are exponentially distributed, we prove that a preemptive last-generated, first-served (LGFS) policy results in smaller age processes across all nodes in the network than any other causal policy (in a stochastic ordering sense). In addition, for the class of new-better-than-used (NBU) distributions, we show that the non-preemptive LGFS policy is within a constant age gap from the optimum average age. In contrast, our numerical result shows that the preemptive LGFS policy can be very far from the optimum for some NBU transmission time distributions. Finally, when preemption is prohibited and the packet transmission times are arbitrarily distributed, the non-preemptive LGFS policy is shown to minimize the age processes across all nodes in the network among all work-conserving policies (again in a stochastic ordering sense). Interestingly, these results hold under quite general conditions, including 1) arbitrary packet generation and arrival times, and 2) for minimizing both the age processes in stochastic ordering and any non-decreasing functional of the age processes.
A Low-Complexity Analytical Modeling for Cross-Layer Adaptive Error Protection in Video Over WLAN We find a low-complexity and accurate model to solve the problem of optimizing MAC-layer transmission of real-time video over wireless local area networks (WLANs) using cross-layer techniques. The objective in this problem is to obtain the optimal MAC retry limit in order to minimize the total packet loss rate. First, the accuracy of Fluid and M/M/1/K analytical models is examined. Then we derive a closed-form expression for service time in WLAN MAC transmission, and will use this in the mathematical formulation of our optimization problem based on the M/G/1 model. Subsequently we introduce an approximate and simple formula for MAC-layer service time, which leads to the M/M/1 model. Compared with M/G/1, we particularly show that our M/M/1-based model provides a low-complexity and yet quite accurate means for analyzing the MAC transmission process in WLAN. Using our M/M/1 model-based analysis, we derive closed-form formulas for the packet overflow drop rate and optimum retry-limit. These closed-form expressions can be effectively invoked for analyzing adaptive retry-limit algorithms. Simulation results (network simulator-2) will verify the accuracy of our analytical models.
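The M/M/1/K model this abstract builds on has a standard closed-form blocking (overflow) probability. A minimal sketch of that textbook formula, not the paper's own derivation:

```python
def mm1k_blocking(rho: float, K: int) -> float:
    """Blocking probability of an M/M/1/K queue: the probability an
    arriving packet finds the buffer full and is dropped.
    rho = lambda / mu is the offered load; K is the system capacity."""
    if abs(rho - 1.0) < 1e-12:
        # limiting case rho -> 1: all K+1 states equally likely
        return 1.0 / (K + 1)
    return (1.0 - rho) * rho**K / (1.0 - rho**(K + 1))
```

As expected, for a fixed load the drop rate falls monotonically as the buffer grows, which is the kind of trade-off the retry-limit optimization above exploits.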
Poisson Arrivals See Time Averages In many stochastic models, particularly in queueing theory, Poisson arrivals both observe (see) a stochastic process and interact with it. In particular cases and/or under restrictive assumptions it ...
Data Aggregation and Packet Bundling of Uplink Small Packets for Monitoring Applications in LTE. In cellular massive machine-type communications, a device can transmit directly to the BS or through an aggregator (intermediate node). While direct device-BS communication has recently been the focus of 5G/3GPP research and standardization efforts, the use of aggregators remains a less explored topic. In this article we analyze the deployment scenarios in which aggregators can perform cellular ac...
Queue Management for Age Sensitive Status Updates We consider a system consisting of a source-destination communication link. At the transmitter of the source there is a buffer that stores packets containing status information. These randomly generated packets should keep the destination timely updated and they can be discarded to avoid wasting network resources for the transmission of stale information. In this setup, we provide an analysis of the age of information (AoI) and peak age of information (PAoI) performance of the system, with and without packet management at the transmission queue of the source node. The analysis indicates the potential performance gains obtained with the use of packet management.
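For context on the AoI analysis above: the classic closed form for the average age of an M/M/1 FCFS status-update queue (from the early AoI literature, not this paper's packet-management analysis) already shows why naive queueing is suboptimal. A minimal sketch:

```python
def avg_aoi_mm1_fcfs(rho: float, mu: float = 1.0) -> float:
    """Average age of information of an M/M/1 FCFS status-update queue,
    using the classic closed form (1/mu) * (1 + 1/rho + rho^2 / (1 - rho))."""
    assert 0.0 < rho < 1.0, "utilisation must keep the queue stable"
    return (1.0 / mu) * (1.0 + 1.0 / rho + rho**2 / (1.0 - rho))

# The age is minimised at an interior utilisation (around rho ~ 0.53),
# not at rho -> 1: stale packets waiting in the queue hurt freshness.
best_rho = min((r / 100 for r in range(1, 100)), key=avg_aoi_mm1_fcfs)
```

Generating updates as fast as possible is therefore not age-optimal, which is exactly what motivates discarding stale packets at the source queue.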
Age of information in a decentralized network of parallel queues with routing and packets losses The paper deals with age of information (AoI) in a network of multiple sources and parallel queues with buffering capabilities, preemption in service and losses in served packets. The queues do not communicate between each other and the packets are dispatched through the queues according to a predefined probabilistic routing. By making use of the stochastic hybrid system (SHS) method, we provide a...
Probabilistic encryption A new probabilistic model of data encryption is introduced. For this model, under suitable complexity assumptions, it is proved that extracting any information about the cleartext from the cyphertext is hard on the average for an adversary with polynomially bounded computational resources. The proof holds for any message space with any probability distribution. The first implementation of this model is presented. The security of this implementation is proved under the intractability assumption of deciding Quadratic Residuosity modulo composite numbers whose factorization is unknown.
Theory and Applications of Robust Optimization In this paper we survey the primary research, both theoretical and applied, in the area of robust optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
Revenue-optimal task scheduling and resource management for IoT batch jobs in mobile edge computing With the growing prevalence of Internet of Things (IoT) devices and technology, a burgeoning computing paradigm, namely mobile edge computing (MEC), has been proposed and designed to accommodate the application requirements of IoT scenarios. In this paper, we focus on the problems of dynamic task scheduling and resource management in the MEC environment, with the specific objective of maximizing the revenue earned by edge service providers. While the majority of task scheduling and resource management algorithms are formulated as integer programming (IP) problems and solved in an undesirable NP-hard manner, we innovatively investigate the problem structure and identify a favorable property, namely totally unimodular constraints. The totally unimodular property further helps to design an equivalent linear programming (LP) problem which can be efficiently and elegantly solved at polynomial computational complexity. In order to evaluate our proposed approach, we conduct simulations based on a real-life IoT dataset to verify the effectiveness and efficiency of our approach.
Space-time super-resolution. We propose a method for constructing a video sequence of high space-time resolution by combining information from multiple low-resolution video sequences of the same dynamic scene. Super-resolution is performed simultaneously in time and in space. By "temporal super-resolution," we mean recovering rapid dynamic events that occur faster than regular frame-rate. Such dynamic events are not visible (or else are observed incorrectly) in any of the input sequences, even if these are played in "slow-motion." The spatial and temporal dimensions are very different in nature, yet are interrelated. This leads to interesting visual trade-offs in time and space and to new video applications. These include: 1) treatment of spatial artifacts (e.g., motion-blur) by increasing the temporal resolution and 2) combination of input sequences of different space-time resolutions (e.g., NTSC, PAL, and even high quality still images) to generate a high quality video sequence. We further analyze and compare characteristics of temporal super-resolution to those of spatial super-resolution. These include: How many video cameras are needed to obtain increased resolution? What is the upper bound on resolution improvement via super-resolution? What is the temporal analogue to the spatial "ringing" effect?
Data-Driven Intelligent Transportation Systems: A Survey For the last two decades, intelligent transportation systems (ITS) have emerged as an efficient way of improving the performance of transportation systems, enhancing travel security, and providing more choices to travelers. A significant change in ITS in recent years is that much more data are collected from a variety of sources and can be processed into various forms for different stakeholders. The availability of a large amount of data can potentially lead to a revolution in ITS development, changing an ITS from a conventional technology-driven system into a more powerful multifunctional data-driven intelligent transportation system (D2ITS) : a system that is vision, multisource, and learning algorithm driven to optimize its performance. Furthermore, D2ITS is trending to become a privacy-aware people-centric more intelligent system. In this paper, we provide a survey on the development of D2ITS, discussing the functionality of its key components and some deployment issues associated with D2ITS Future research directions for the development of D2ITS is also presented.
Online Prediction of Driver Distraction Based on Brain Activity Patterns This paper presents a new computational framework for early detection of driver distractions (map viewing) using brain activity measured by electroencephalographic (EEG) signals. Compared with most studies in the literature, which are mainly focused on the classification of distracted and nondistracted periods, this study proposes a new framework to prospectively predict the start and end of a distraction period, defined by map viewing. The proposed prediction algorithm was tested on a data set of continuous EEG signals recorded from 24 subjects. During the EEG recordings, the subjects were asked to drive from an initial position to a destination using a city map in a simulated driving environment. The overall accuracy values for the prediction of the start and the end of map viewing were 81% and 70%, respectively. The experimental results demonstrated that the proposed algorithm can predict the start and end of map viewing with relatively high accuracy and can be generalized to individual subjects. The outcome of this study has a high potential to improve the design of future intelligent navigation systems. Prediction of the start of map viewing can be used to provide route information based on a driver's needs and consequently avoid map-viewing activities. Prediction of the end of map viewing can be used to provide warnings for potential long map-viewing durations. Further development of the proposed framework and its applications in driver-distraction predictions are also discussed.
A robust medical image watermarking against salt and pepper noise for brain MRI images. The ever-growing numbers of medical digital images and the need to share them among specialists and hospitals for better and more accurate diagnosis require that patients' privacy be protected. During the transmission of medical images between hospitals or specialists through the network, the main priority is to protect a patient's documents against any act of tampering by unauthorised individuals. Because of this, there is a need for a medical image authentication scheme to enable proper diagnosis of patients. In addition, medical images are also susceptible to salt and pepper impulse noise through transmission in communication channels. This noise may also be intentionally used by invaders to corrupt the embedded watermarks inside the medical images. A common drawback of existing watermarking methods is their weakness against salt and pepper noise. The research carried out in this work addresses the issue of designing a new watermarking method that can withstand a high density of salt and pepper noise for brain MRI images. For this purpose, a combination of a spatial domain watermarking method, channel coding and noise filtering schemes is used. The region of non-interest (RONI) of MRI images from five different databases is used as the embedding area and the electronic patient record (EPR) is considered as embedded data. The quality of the watermarked image is evaluated using Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), and the accuracy of the extracted watermark is assessed in terms of Bit Error Rate (BER).
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education is highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets, to compensate for robot’s limitations, however, this study applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. Robot’s behavioral style (social or neutral) hardly differed overall, however, seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
1.2
0.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
TGNet: Learning to Rank Nodes in Temporal Graphs. Node ranking in temporal networks is often impacted by heterogeneous context from node content, temporal, and structural dimensions. This paper introduces TGNet , a deep learning framework for node ranking in heterogeneous temporal graphs. TGNet utilizes a variant of Recurrent Neural Network to adapt context evolution and extract context features for nodes. It incorporates a novel influence network to dynamically estimate temporal and structural influence among nodes over time. To cope with label sparsity, it integrates graph smoothness constraints as a weak form of supervision. We show that the application of TGNet is feasible for large-scale networks by developing efficient learning and inference algorithms with optimization techniques. Using real-life data, we experimentally verify the effectiveness and efficiency of TGNet techniques. We also show that TGNet yields intuitive explanations for applications such as alert detection and academic impact ranking, as verified by our case study.
A Private and Efficient Mechanism for Data Uploading in Smart Cyber-Physical Systems. To provide fine-grained access to different dimensions of the physical world, the data uploading in smart cyber-physical systems suffers novel challenges on both energy conservation and privacy preservation. It is always critical for participants to consume as little energy as possible for data uploading. However, simply pursuing energy efficiency may lead to extreme disclosure of private informat...
Exploring Data Validity in Transportation Systems for Smart Cities. Efficient urban transportation systems are widely accepted as essential infrastructure for smart cities, and they can highly increase a city's vitality and convenience for residents. The three core pillars of smart cities can be considered to be data mining technology, IoT, and mobile wireless networks. Enormous data from IoT is stimulating our cities to become smarter than ever before. In transportation systems, data-driven management can dramatically enhance the operating efficiency by providing a clear and insightful image of passengers' transportation behavior. In this article, we focus on the data validity problem in a cellular network based transportation data collection system from two aspects: internal time discrepancy and data loss. First, the essence of time discrepancy was analyzed for both automated fare collection (AFC) and automated vehicular location (AVL) systems, and it was found that time discrepancies can be identified and rectified by analyzing passenger origin inference success rate using different time shift values and evolutionary algorithms. Second, the algorithmic framework to handle location data loss and time discrepancy was provided. Third, the spatial distribution characteristics of location data loss events were analyzed, and we discovered that they have a strong and positive relationship with both high passenger volume and shadowing effects in urbanized areas, which can cause severe biases on passenger traffic analysis. Our research has proposed some data-driven methodologies to increase data validity and provided some insights into the influence of IoT level data loss on public transportation systems for smart cities.
Data Linkage in Smart Internet of Things Systems: A Consideration from a Privacy Perspective. Smart IoT systems can integrate knowledge from the surrounding environment, and they are critical components of the next-generation Internet. Such systems usually collect data from various dimensions via numerous devices, and the collected data are usually linkable. This means that they can be combined to derive abundant valuable knowledge. However, the collected data may also be accessed by malic...
Seed-free Graph De-anonymization with Adversarial Learning Huge amounts of graph data are published and shared for research and business purposes, which brings great benefit for our society. However, user privacy is badly undermined even though user identity can be anonymized. Graph de-anonymization to identify nodes from an anonymized graph is widely adopted to evaluate users' privacy risks. Most existing de-anonymization methods which are heavily reliant on side information (e.g., seeds, user profiles, community labels) are unrealistic due to the difficulty of collecting this side information. A few graph de-anonymization methods only using structural information, called seed-free methods, have been proposed recently, which mainly take advantage of the local and manual features of nodes while overlooking the global structural information of the graph for de-anonymization. In this paper, a seed-free graph de-anonymization method is proposed, where a deep neural network is adopted to learn features and an adversarial framework is employed for node matching. To be specific, the latent representation of each node is obtained by graph autoencoder. Furthermore, an adversarial learning model is proposed to transform the embedding of the anonymized graph to the latent space of auxiliary graph embedding such that a linear mapping can be derived from a global perspective. Finally, the most similar node pairs in the latent space as the anchor nodes are utilized to launch propagation to de-anonymize all the remaining nodes. The extensive experiments on some real datasets demonstrate that our method is comparative with the seed-based approaches and outperforms the state-of-the-art seed-free method significantly.
GraphSleepNet: Adaptive Spatial-Temporal Graph Convolutional Networks for Sleep Stage Classification
Wireless sensor networks: a survey This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
Microsoft COCO: Common Objects in Context We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
Markov games as a framework for multi-agent reinforcement learning In the Markov decision process (MDP) formalization of reinforcement learning, a single adaptive agent interacts with an environment defined by a probabilistic transition function. In this solipsistic view, secondary agents can only be part of the environment and are therefore fixed in their behavior. The framework of Markov games allows us to widen this view to include multiple adaptive agents with interacting or competing goals. This paper considers a step in this direction in which exactly two agents with diametrically opposed goals share an environment. It describes a Q-learning-like algorithm for finding optimal policies and demonstrates its application to a simple two-player game in which the optimal policy is probabilistic.
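The Q-learning-like algorithm described here (Littman's minimax-Q) replaces the max of standard Q-learning with the value of a zero-sum matrix game. For 2x2 games that value has a closed form, so the inner step can be sketched without a linear-program solver. A toy illustration under that 2x2 assumption, not Littman's general implementation:

```python
def maximin_2x2(M):
    """Value and optimal (possibly mixed) row strategy of a 2x2 zero-sum
    matrix game, where M[i][j] is the row player's payoff."""
    (a, b), (c, d) = M
    lower = max(min(a, b), min(c, d))   # best pure-strategy guarantee (row)
    upper = min(max(a, c), max(b, d))   # best pure-strategy cap (column)
    if lower == upper:
        # saddle point: an optimal pure strategy exists
        p = 1.0 if min(a, b) >= min(c, d) else 0.0
        return lower, (p, 1.0 - p)
    # no saddle point: unique mixed equilibrium with closed-form solution
    denom = a - b - c + d
    return (a * d - b * c) / denom, ((d - c) / denom, (a - b) / denom)
```

In minimax-Q the matrix would be Q(s, ., .) for the current state, and the resulting game value plays the role that max_a Q(s, a) plays in single-agent Q-learning.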
Scalable and efficient provable data possession. Storage outsourcing is a rising trend which prompts a number of interesting security issues, many of which have been extensively investigated in the past. However, Provable Data Possession (PDP) is a topic that has only recently appeared in the research literature. The main issue is how to frequently, efficiently and securely verify that a storage server is faithfully storing its client's (potentially very large) outsourced data. The storage server is assumed to be untrusted in terms of both security and reliability. (In other words, it might maliciously or accidentally erase hosted data; it might also relegate it to slow or off-line storage.) The problem is exacerbated by the client being a small computing device with limited resources. Prior work has addressed this problem using either public key cryptography or requiring the client to outsource its data in encrypted form. In this paper, we construct a highly efficient and provably secure PDP technique based entirely on symmetric key cryptography, while not requiring any bulk encryption. Also, in contrast with its predecessors, our PDP technique allows outsourcing of dynamic data, i.e, it efficiently supports operations, such as block modification, deletion and append.
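The core trick described in this abstract, verification via precomputed symmetric-key tokens rather than public-key operations, can be illustrated with a toy spot-check. This is a simplified sketch of the idea only (the real scheme batches random blocks per challenge and supports dynamic updates); all names here are illustrative:

```python
import hmac
import hashlib
import secrets

def make_token(key: bytes, round_no: int, block: bytes) -> bytes:
    """MAC binding a future audit round to a data block's exact contents."""
    msg = round_no.to_bytes(4, "big") + block
    return hmac.new(key, msg, hashlib.sha256).digest()

# Owner side, before outsourcing: keep only the key and a few small tokens,
# then upload the blocks and delete the local copies.
key = secrets.token_bytes(32)
blocks = [secrets.token_bytes(64) for _ in range(8)]          # the "file"
tokens = {t: make_token(key, t, blocks[t % 8]) for t in range(4)}

def verify(round_no: int, returned_block: bytes) -> bool:
    """Audit round: server returns the challenged block; owner re-MACs it."""
    expected = tokens[round_no]
    return hmac.compare_digest(expected, make_token(key, round_no, returned_block))
```

A server that has erased or corrupted the challenged block cannot produce bytes that re-MAC to the stored token, so each audit costs the owner one MAC computation and no bulk encryption, which is the efficiency point the abstract makes.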
Cognitive Cars: A New Frontier for ADAS Research This paper provides a survey of recent works on cognitive cars with a focus on driver-oriented intelligent vehicle motion control. The main objective here is to clarify the goals and guidelines for future development in the area of advanced driver-assistance systems (ADASs). Two major research directions are investigated and discussed in detail: 1) stimuli–decisions–actions, which focuses on the driver side, and 2) perception enhancement–action-suggestion–function-delegation, which emphasizes the ADAS side. This paper addresses the important achievements and major difficulties of each direction and discusses how to combine the two directions into a single integrated system to obtain safety and comfort while driving. Other related topics, including driver training and infrastructure design, are also studied.
Completely Pinpointing the Missing RFID Tags in a Time-Efficient Way Radio Frequency Identification (RFID) technology has been widely used in inventory management in many scenarios, e.g., warehouses, retail stores, hospitals, etc. This paper investigates a challenging problem of complete identification of missing tags in large-scale RFID systems. Although this problem has attracted extensive attention from academia and industry, the existing work can hardly satisfy the stringent real-time requirements. In this paper, a Slot Filter-based Missing Tag Identification (SFMTI) protocol is proposed to reconcile some expected collision slots into singleton slots and filter out the expected empty slots as well as the unreconcilable collision slots, thereby achieving the improved time-efficiency. The theoretical analysis is conducted to minimize the execution time of the proposed SFMTI. We then propose a cost-effective method to extend SFMTI to the multi-reader scenarios. The extensive simulation experiments and performance results demonstrate that the proposed SFMTI protocol outperforms the most promising Iterative ID-free Protocol (IIP) by reducing nearly 45% of the required execution time, and is just within a factor of 1.18 from the lower bound of the minimum execution time.
A blind medical image watermarking: DWT-SVD based robust and secure approach for telemedicine applications. In this paper, a blind image watermarking scheme based on discrete wavelet transform (DWT) and singular value decomposition (SVD) is proposed. In this scheme, DWT is applied on ROI (region of interest) of the medical image to get different frequency subbands of its wavelet decomposition. On the low frequency subband LL of the ROI, block-SVD is applied to get different singular matrices. A pair of elements with similar values is identified from the left singular value matrix of these selected blocks. The values of these pairs are modified using certain threshold to embed a bit of watermark content. Appropriate threshold is chosen to achieve the imperceptibility and robustness of medical image and watermark contents respectively. For authentication and identification of original medical image, one watermark image (logo) and other text watermark have been used. The watermark image provides authentication whereas the text data represents electronic patient record (EPR) for identification. At receiving end, blind recovery of both watermark contents is performed by a similar comparison scheme used during the embedding process. The proposed algorithm is applied on various groups of medical images like X-ray, CT scan and mammography. This scheme offers better visibility of watermarked image and recovery of watermark content due to DWT-SVD combination. Moreover, use of Hamming error correcting code (ECC) on EPR text bits reduces the BER and thus provides better recovery of EPR. The performance of the proposed algorithm with EPR data coding by Hamming code is compared with the BCH error correcting code and it is found that the latter performs better. A result analysis shows that imperceptibility of watermarked image is better as PSNR is above 43 dB and WPSNR is above 52 dB for all sets of images. In addition, robustness of the scheme is better than existing schemes for similar sets of medical images in terms of normalized correlation coefficient (NCC) and bit-error-rate (BER). An analysis is also carried out to verify the performance of the proposed scheme for different sizes of watermark contents (image and EPR data). It is observed from analysis that the proposed scheme is also appropriate for watermarking of color images. Using the proposed scheme, watermark contents are extracted successfully under various noise attacks like JPEG compression, filtering, Gaussian noise, salt and pepper noise, cropping and rotation. Performance comparison with existing schemes shows that the proposed scheme has better robustness against different types of attacks. Moreover, the proposed scheme is also robust under a set of benchmark attacks known as checkmark attacks.
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
1.2
0.2
0.2
0.2
0.2
0.066667
0
0
0
0
0
0
0
0
Bi-Directional Beamformer Training for Dynamic TDD Networks. In dynamic time-division-duplexing networks, the available resources per cell can be freely allocated to either uplink (UL) or downlink (DL) depending on the instantaneous traffic demand. Hence, complicated UL-DL and DL-UL interference scenarios arise due to simultaneous UL and DL data transmission in adjacent cells. In this paper, decentralized iterative beamformer designs are obtained for severa...
Sub-modularity and Antenna Selection in MIMO systems In this paper, we show that the optimal receive antenna subset selection problem for maximizing the mutual information in a point-to-point MIMO system is sub-modular. Consequently, a greedy step-wise optimization approach, where at each step, an antenna that maximizes the incremental gain is added to the existing antenna subset, is guaranteed to be within a (1-1/e)-fraction of the global optimal value independent of all parameters. For a single-antenna-equipped source and destination with multiple relays, we show that the relay antenna selection problem to maximize the mutual information is modular and a greedy step-wise optimization approach leads to an optimal solution.
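The greedy step-wise selection this abstract describes can be sketched directly with the log-det mutual-information objective. A minimal illustration with an i.i.d. Gaussian channel (the channel model and variable names are assumptions for the example, not taken from the paper):

```python
import itertools
import numpy as np

def mutual_info(H, subset, snr=1.0):
    """log det(I + snr * H_S H_S^H) for the receive antennas (rows) in subset."""
    Hs = H[list(subset), :]
    k = len(subset)
    _, logdet = np.linalg.slogdet(np.eye(k) + snr * Hs @ Hs.conj().T)
    return logdet

def greedy_antennas(H, budget, snr=1.0):
    """At each step, add the antenna with the largest incremental gain."""
    chosen, remaining = [], set(range(H.shape[0]))
    for _ in range(budget):
        best = max(remaining, key=lambda a: mutual_info(H, chosen + [a], snr))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# toy example: 5 candidate receive antennas, pick 2
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 3))
picked = greedy_antennas(H, 2)

# exhaustive optimum over all 2-antenna subsets, for comparison
opt = max(mutual_info(H, list(c)) for c in itertools.combinations(range(5), 2))
```

Because the objective is monotone submodular, the greedy value is guaranteed to be within a (1 - 1/e) fraction of `opt`, which is the point of the abstract above; in small instances like this the greedy choice is usually optimal outright.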
Achievable Rates of Full-Duplex MIMO Radios in Fast Fading Channels With Imperfect Channel Estimation We study the theoretical performance of two full-duplex multiple-input multiple-output (MIMO) radio systems: a full-duplex bi-directional communication system and a full-duplex relay system. We focus on the effect of a (digitally manageable) residual self-interference due to imperfect channel estimation (with independent and identically distributed (i.i.d.) Gaussian channel estimation error) and transmitter noise. We assume that the instantaneous channel state information (CSI) is not available at the transmitters. To maximize the system ergodic mutual information, which is a nonconvex function of power allocation vectors at the nodes, a gradient projection algorithm is developed to optimize the power allocation vectors. This algorithm exploits both spatial and temporal freedoms of the source covariance matrices of the MIMO links between transmitters and receivers to achieve higher sum ergodic mutual information. It is observed through simulations that the full-duplex mode is optimal when the nominal self-interference is low, and the half-duplex mode is optimal when the nominal self-interference is high. In addition to an exact closed-form ergodic mutual information expression, we introduce a much simpler asymptotic closed-form ergodic mutual information expression, which in turn simplifies the computation of the power allocation vectors.
Low-Complexity Beam Allocation for Switched-Beam Based Multiuser Massive MIMO Systems. This paper addresses the beam allocation problem in a switched-beam based massive multiple-input-multiple-output (MIMO) system working at the millimeter wave frequency band, with the target of maximizing the sum data rate. This beam allocation problem can be formulated as a combinatorial optimization problem under two constraints that each user uses at most one beam for its data transmission and each beam serves at most one user. The brute-force search is a straightforward method to solve this optimization problem. However, for a massive MIMO system with a large number of beams N, the brute-force search results in intractable complexity O(N^K), where K is the number of users. In this paper, in order to solve the beam allocation problem with affordable complexity, a suboptimal low-complexity beam allocation (LBA) algorithm is developed based on submodular optimization theory, which has been shown to be a powerful tool for solving combinatorial optimization problems. Simulation results show that our proposed LBA algorithm achieves nearly optimal sum data rate with complexity O(K log N). Furthermore, the average service ratio, i.e., the ratio of the number of users being served to the total number of users, is theoretically analyzed and derived as an explicit function of the ratio N/K.
Dynamic TDD Systems for 5G and Beyond: A Survey of Cross-Link Interference Mitigation Dynamic time division duplex (D-TDD) dynamically allocates the transmission directions for traffic adaptation in each cell. D-TDD systems are receiving a lot of attention because they can reduce latency and increase spectrum utilization via flexible and dynamic duplex operation in 5G New Radio (NR). However, the advantages of the D-TDD system are difficult to fully utilize due to the cross-link interference (CLI) arising from the use of different transmission directions between adjacent cells. This paper is a survey of the research from academia and the standardization efforts being undertaken to solve this CLI problem and make the D-TDD system a reality. Specifically, we categorize and present the approaches to mitigating CLI according to operational principles. Furthermore, we present the signaling necessary to apply the CLI mitigation schemes. We also present information-theoretic performance analysis of D-TDD systems in various environments. As topics for future works, we discuss the research challenges and opportunities associated with the CLI mitigation schemes and signaling design in a variety of environments. This survey is recommended for those who are in the initial stage of studying D-TDD systems and those who wish to develop a more feasible D-TDD system as a baseline for reviewing the research flow and standardization trends surrounding D-TDD systems and to identify areas of focus for future works.
Aligned Reverse Frame Structure for Interference Mitigation in Dynamic TDD Systems. The dynamic time division duplex (TDD) system has been proposed as a way to meet today's asymmetrically and dynamically changing traffic demand. However, this approach causes cross-link interference, because neighboring base stations and user elements transmit in opposite directions. In this paper, we investigate and analyze the characteristics of cross-link interference in dynamic TDD systems. Ba...
Distributed wireless communication system: a new architecture for future public wireless access The distributed wireless communication system (DWCS) is a new architecture for a wireless access system with distributed antennas, distributed processors, and distributed control. With distributed antennas, the system capacity can be expanded through dense frequency reuse, and the transmission power can be greatly decreased. With distributed processing and control, the system works like a software or network radio, so different standards can coexist, and the system capacity can be increased by coprocessing of signals to and from multiple antennas.
Long short-term memory. Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
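The constant error carousel and multiplicative gates described in this abstract can be sketched as a single recurrent step. Note this is the now-standard gating layout including a forget gate (a later addition to the original 1997 architecture); the parameter stacking and dimensions are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    # W (4n x d), U (4n x n), b (4n,) stack the parameters of the four gates
    z = W @ x + U @ h_prev + b
    n = h_prev.shape[0]
    i = sigmoid(z[0 * n:1 * n])   # input gate: how much new content to write
    f = sigmoid(z[1 * n:2 * n])   # forget gate: how much old cell state to keep
    g = np.tanh(z[2 * n:3 * n])   # candidate cell content
    o = sigmoid(z[3 * n:4 * n])   # output gate: how much cell state to expose
    c = f * c_prev + i * g        # constant error carousel: additive cell update
    h = o * np.tanh(c)
    return h, c
```

The additive form of the cell update `c = f * c_prev + i * g` is what lets gradients flow across many time steps without the decaying error backflow the abstract describes.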
Toward Integrating Vehicular Clouds with IoT for Smart City Services Vehicular ad hoc networks, cloud computing, and the Internet of Things are among the emerging technology enablers offering a wide array of new application possibilities in smart urban spaces. These applications consist of smart building automation systems, healthcare monitoring systems, and intelligent and connected transportation, among others. The integration of IoT-based vehicular technologies will enrich services that are eventually going to ignite the proliferation of exciting and even more advanced technological marvels. However, depending on different requirements and design models for networking and architecture, such integration needs the development of newer communication architectures and frameworks. This work proposes a novel framework for architectural and communication design to effectively integrate vehicular networking clouds with IoT, referred to as VCoT, to materialize new applications that provision various IoT services through vehicular clouds. In this article, we particularly put emphasis on smart city applications deployed, operated, and controlled through LoRaWAN-based vehicular networks. LoRaWAN, being a new technology, provides efficient and long-range communication possibilities. The article also discusses possible research issues in such an integration including data aggregation, security, privacy, data quality, and network coverage. These issues must be addressed in order to realize the VCoT paradigm deployment, and to provide insights for investors and key stakeholders in VCoT service provisioning. The article presents deep insights for different real-world application scenarios (i.e., smart homes, intelligent traffic lights, and smart city) using VCoT for general control and automation along with their associated challenges. It also presents initial insights, through preliminary results, regarding data and resource management in IoT-based resource-constrained environments through vehicular clouds.
Multivariate Short-Term Traffic Flow Forecasting Using Time-Series Analysis Existing time-series models that are used for short-term traffic condition forecasting are mostly univariate in nature. Generally, the extension of existing univariate time-series models to a multivariate regime involves huge computational complexities. A different class of time-series models called structural time-series model (STM) (in its multivariate form) has been introduced in this paper to develop a parsimonious and computationally simple multivariate short-term traffic condition forecasting algorithm. The different components of a time-series data set such as trend, seasonal, cyclical, and calendar variations can separately be modeled in STM methodology. A case study at the Dublin, Ireland, city center with serious traffic congestion is performed to illustrate the forecasting strategy. The results indicate that the proposed forecasting algorithm is an effective approach in predicting real-time traffic flow at multiple junctions within an urban transport network.
State resetting for bumpless switching in supervisory control In this paper the realization and implementation of a multi-controller scheme made of a finite set of linear single-input-single-output controllers, possibly having different state dimensions, is studied. The supervisory control framework is considered, namely a minimal parameter dependent realization of the set of controllers such that all controllers share the same state space is used. A specific state resetting strategy based on the behavioral approach to system theory is developed in order to master the transient upon controller switching.
Adaptive dynamic programming and optimal control of nonlinear nonaffine systems. In this paper, a novel optimal control design scheme is proposed for continuous-time nonaffine nonlinear dynamic systems with unknown dynamics by adaptive dynamic programming (ADP). The proposed methodology iteratively updates the control policy online by using the state and input information without identifying the system dynamics. An ADP algorithm is developed, and can be applied to a general class of nonlinear control design problems. The convergence analysis for the designed control scheme is presented, along with rigorous stability analysis for the closed-loop system. The effectiveness of this new algorithm is illustrated by two simulation examples.
A blind medical image watermarking: DWT-SVD based robust and secure approach for telemedicine applications. In this paper, a blind image watermarking scheme based on discrete wavelet transform (DWT) and singular value decomposition (SVD) is proposed. In this scheme, DWT is applied on the ROI (region of interest) of the medical image to get different frequency subbands of its wavelet decomposition. On the low frequency subband LL of the ROI, block-SVD is applied to get different singular matrices. A pair of elements with similar values is identified from the left singular value matrix of these selected blocks. The values of these pairs are modified using a certain threshold to embed a bit of watermark content. An appropriate threshold is chosen to achieve the imperceptibility and robustness of the medical image and watermark contents, respectively. For authentication and identification of the original medical image, one watermark image (logo) and another text watermark have been used. The watermark image provides authentication whereas the text data represents the electronic patient record (EPR) for identification. At the receiving end, blind recovery of both watermark contents is performed by a comparison scheme similar to the one used during the embedding process. The proposed algorithm is applied on various groups of medical images like X-ray, CT scan and mammography. This scheme offers better visibility of the watermarked image and recovery of watermark content due to the DWT-SVD combination. Moreover, use of a Hamming error correcting code (ECC) on the EPR text bits reduces the BER and thus provides better recovery of the EPR. The performance of the proposed algorithm with EPR data coded by the Hamming code is compared with the BCH error correcting code, and it is found that the latter performs better. A result analysis shows that imperceptibility of the watermarked image is good, as PSNR is above 43 dB and WPSNR is above 52 dB for all sets of images. In addition, robustness of the scheme is better than the existing scheme for a similar set of medical images in terms of normalized correlation coefficient (NCC) and bit-error-rate (BER). An analysis is also carried out to verify the performance of the proposed scheme for different sizes of watermark contents (image and EPR data). It is observed from the analysis that the proposed scheme is also appropriate for watermarking of color images. Using the proposed scheme, watermark contents are extracted successfully under various noise attacks like JPEG compression, Gaussian noise, salt and pepper noise, cropping, filtering and rotation. Performance comparison with existing schemes shows that the proposed scheme has better robustness against different types of attacks. Moreover, the proposed scheme is also robust under the set of benchmark attacks known as checkmark attacks.
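The DWT-SVD embedding pipeline in this abstract can be sketched in a simplified form: a one-level Haar DWT stands in for the general wavelet transform, and one bit is encoded in the relative magnitudes of a pair of left-singular-matrix entries. The pair indices, threshold, and block size below are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def haar_ll(img):
    # one-level 2D Haar DWT, keeping only the low-frequency LL subband
    rows = (img[0::2, :] + img[1::2, :]) / 2.0
    return (rows[:, 0::2] + rows[:, 1::2]) / 2.0

def _sgn(x):
    return 1.0 if x >= 0 else -1.0

def embed_bit(block, bit, t=0.1):
    # encode one bit in the ordering of two entries of the left singular
    # matrix U; t controls the imperceptibility/robustness trade-off
    U, s, Vt = np.linalg.svd(block)
    avg = (abs(U[1, 0]) + abs(U[2, 0])) / 2.0
    hi, lo = avg + t / 2.0, avg - t / 2.0
    if bit == 1:
        U[1, 0] = _sgn(U[1, 0]) * hi
        U[2, 0] = _sgn(U[2, 0]) * lo
    else:
        U[1, 0] = _sgn(U[1, 0]) * lo
        U[2, 0] = _sgn(U[2, 0]) * hi
    return U @ np.diag(s) @ Vt

def extract_bit(block):
    # blind extraction: re-run SVD and compare the same pair of entries
    U, _, _ = np.linalg.svd(block)
    return 1 if abs(U[1, 0]) >= abs(U[2, 0]) else 0
```

Extraction is blind in the sense that it needs only the watermarked block, not the original image: the bit is read back by repeating the SVD and comparing the same entry pair.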
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal to develop interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which rise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in specific, in this paper, we enfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidate it based on our comprehensive empirical experience with research in kindergarten settings, both laboratory and real-world contexts, and (iii) provide a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.