Each record consists of a query document (Query Text), 13 ranked candidate documents (Ranking 1 through Ranking 13), and 14 score columns (score_0 through score_13). The column summary below gives, for string columns, the minimum and maximum text length and, for score columns, the minimum and maximum observed value:

| Column | Type | Min | Max |
|---|---|---|---|
| Query Text | string (length) | 10 | 40.4k |
| Ranking 1 | string (length) | 12 | 40.4k |
| Ranking 2 | string (length) | 12 | 36.2k |
| Ranking 3 | string (length) | 10 | 36.2k |
| Ranking 4 | string (length) | 13 | 40.4k |
| Ranking 5 | string (length) | 12 | 36.2k |
| Ranking 6 | string (length) | 13 | 36.2k |
| Ranking 7 | string (length) | 10 | 40.4k |
| Ranking 8 | string (length) | 12 | 36.2k |
| Ranking 9 | string (length) | 12 | 36.2k |
| Ranking 10 | string (length) | 12 | 36.2k |
| Ranking 11 | string (length) | 20 | 6.21k |
| Ranking 12 | string (length) | 14 | 8.24k |
| Ranking 13 | string (length) | 28 | 4.03k |
| score_0 | float64 | 1 | 1.25 |
| score_1 | float64 | 0 | 0.25 |
| score_2 | float64 | 0 | 0.25 |
| score_3 | float64 | 0 | 0.25 |
| score_4 | float64 | 0 | 0.25 |
| score_5 | float64 | 0 | 0.25 |
| score_6 | float64 | 0 | 0.25 |
| score_7 | float64 | 0 | 0.24 |
| score_8 | float64 | 0 | 0.2 |
| score_9 | float64 | 0 | 0.03 |
| score_10 | float64 | 0 | 0 |
| score_11 | float64 | 0 | 0 |
| score_12 | float64 | 0 | 0 |
| score_13 | float64 | 0 | 0 |
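A minimal sketch of how a record with this layout might be consumed is shown below: it pairs each Ranking column with the score of the same index and sorts the candidates by score. The row dictionary `toy_row`, the helper `ranked_candidates`, and the assumption that score_0 refers to the query while score_1 through score_13 refer to Ranking 1 through Ranking 13 are illustrative assumptions, not documented properties of this dataset.

```python
from typing import Dict, List, Tuple

def ranked_candidates(row: Dict[str, object]) -> List[Tuple[str, float]]:
    """Pair each 'Ranking i' text with 'score_i' and sort by score, highest first.

    Assumption: score_0 refers to the query itself and score_1..score_13 to
    Ranking 1..Ranking 13; this mapping is inferred from the column layout.
    """
    pairs = []
    for i in range(1, 14):
        pairs.append((str(row[f"Ranking {i}"]), float(row[f"score_{i}"])))
    return sorted(pairs, key=lambda p: p[1], reverse=True)

# Toy usage with a hypothetical row (the real text fields hold full abstracts).
toy_row: Dict[str, object] = {"Query Text": "query abstract ...", "score_0": 1.0}
for i in range(1, 14):
    toy_row[f"Ranking {i}"] = f"candidate {i} ..."
    toy_row[f"score_{i}"] = 0.0
toy_row["score_3"] = 0.25  # pretend the third candidate is the relevant one
print(ranked_candidates(toy_row)[0])  # ('candidate 3 ...', 0.25)
```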
Example record 1 (each text field stores a paper title followed by its full abstract; only the titles are listed here, together with the score vector):

- Query Text: Multiple Lyapunov functions and other analysis tools for switched and hybrid systems
- Ranking 1: Input-to-state stability of switched systems and switching adaptive control
- Ranking 2: Discrete-Time Switched Linear Systems State Feedback Design With Application to Networked Control
- Ranking 3: Robust Model-Based Fault Diagnosis for PEM Fuel Cell Air-Feed System
- Ranking 4: Convexity of the cost functional in an optimal control problem for a class of positive switched systems
- Ranking 5: Stabilization of switched continuous-time systems with all modes unstable via dwell time switching
- Ranking 6: Preasymptotic Stability and Homogeneous Approximations of Hybrid Dynamical Systems
- Ranking 7: Nearly optimal control laws for nonlinear systems with saturating actuators using a neural network HJB approach
- Ranking 8: Sequence to Sequence Learning with Neural Networks
- Ranking 9: A General Equilibrium Model for Industries with Price and Service Competition
- Ranking 10: A novel ray analogy for enrolment of ear biometrics
- Ranking 11: G2-type SRMPC scheme for synchronous manipulation of two redundant robot arms
- Ranking 12: Collective feature selection to identify crucial epistatic variants
- Ranking 13: Robot tutor and pupils’ educational ability: Teaching the times tables
- score_0 to score_13: 1.013357, 0.012781, 0.012781, 0.012781, 0.006391, 0.00307, 0.000652, 0.000001, 0, 0, 0, 0, 0, 0
Example record 2 (titles and scores):

- Query Text: Context-Aware Learning for Anomaly Detection with Imbalanced Log Data
- Ranking 1: Squeezed Convolutional Variational AutoEncoder for unsupervised anomaly detection in edge device industrial Internet of Things
- Ranking 2: Time Series Anomaly Detection for Trustworthy Services in Cloud Computing Systems
- Ranking 3: Deep Learning Based Anomaly Detection in Water Distribution Systems
- Ranking 4: Explainable Ai: A Review Of Machine Learning Interpretability Methods
- Ranking 5: The Dangers of Post-hoc Interpretability - Unjustified Counterfactual Explanations
- Ranking 6: Hamming Embedding and Weak Geometric Consistency for Large Scale Image Search
- Ranking 7: Microsoft Coco: Common Objects In Context
- Ranking 8: The Whale Optimization Algorithm
- Ranking 9: Pors: proofs of retrievability for large files
- Ranking 10: On controller initialization in multivariable switching systems
- Ranking 11: Completely Pinpointing the Missing RFID Tags in a Time-Efficient Way
- Ranking 12: Modeling taxi driver anticipatory behavior
- Ranking 13: Convert Harm Into Benefit: A Coordination-Learning Based Dynamic Spectrum Anti-Jamming Approach
- score_0 to score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0
Example record 3 (titles and scores):

- Query Text: Human-Centered Feed-Forward Control of a Vehicle Steering System Based on a Driver's Path-Following Characteristics
- Ranking 1: A predictive controller for autonomous vehicle path tracking
- Ranking 2: An Efficient Visibility Enhancement Algorithm for Road Scenes Captured by Intelligent Transportation Systems
- Ranking 3: Automatic Detection and Classification of Road Lane Markings Using Onboard Vehicular Cameras
- Ranking 4: Pedestrian Tracking Using Online Boosted Random Ferns Learning in Far-Infrared Imagery for Safe Driving at Night
- Ranking 5: Effects of Different Alcohol Dosages on Steering Behavior in Curve Driving
- Ranking 6: Control Authority Transfer Method for Automated-to-Manual Driving Via a Shared Authority Mode
- Ranking 7: A Forward Collision Warning Algorithm With Adaptation to Driver Behaviors
- Ranking 8: Human-Machine Collaboration For Automated Driving Using An Intelligent Two-Phase Haptic Interface
- Ranking 9: How Much Data Are Enough? A Statistical Approach With Case Study on Longitudinal Driving Behavior
- Ranking 10: An effective implementation of the Lin–Kernighan traveling salesman heuristic
- Ranking 11: Normalized Energy Density-Based Forensic Detection of Resampled Images
- Ranking 12: Adaptive Neural Network Control of a Flapping Wing Micro Aerial Vehicle With Disturbance Observer
- Ranking 13: Spatially Correlated Reconfigurable Intelligent Surfaces-Aided Cell-Free Massive MIMO Systems
- score_0 to score_13: 1.070603, 0.073333, 0.073333, 0.073333, 0.073333, 0.073333, 0.073333, 0.066667, 0.066667, 0.002519, 0, 0, 0, 0
QUasi-Affine TRansformation Evolution with External ARchive (QUATRE-EAR): An enhanced structure for Differential Evolution Optimization demands are ubiquitous in science and engineering. The key point is that the approach to tackle a complex optimization problem should not itself be difficult. Differential Evolution (DE) is such a simple method, and it is arguably a very powerful stochastic real-parameter algorithm for single-objective optimization. However, the performance of DE is highly dependent on control parameters and mutation strategies. Both tuning the control parameters and selecting the proper mutation strategy are still tedious but important tasks for users. In this paper, we proposed an enhanced structure for DE algorithm with less control parameters to be tuned. The crossover rate control parameter Cr is replaced by an automatically generated evolution matrix and the control parameter F can be renewed in an adaptive manner during the whole evolution. Moreover, an enhanced mutation strategy with time stamp mechanism is advanced as well in this paper. CEC2013 test suite for real-parameter single objective optimization is employed in the verification of the proposed algorithm. Experiment results show that our proposed algorithm is competitive with several well-known DE variants.
Tabu search based multi-watermarks embedding algorithm with multiple description coding Digital watermarking is a useful solution for digital rights management systems, and it has been a popular research topic in the last decade. Most watermarking related literature focuses on how to resist deliberate attacks by applying benchmarks to watermarked media that assess the effectiveness of the watermarking algorithm. Only a few papers have concentrated on the error-resilient transmission of watermarked media. In this paper, we propose an innovative algorithm for vector quantization (VQ) based image watermarking, which is suitable for error-resilient transmission over noisy channels. By incorporating watermarking with multiple description coding (MDC), the scheme we propose to embed multiple watermarks can effectively overcome channel impairments while retaining the capability for copyright and ownership protection. In addition, we employ an optimization technique, called tabu search, to optimize both the watermarked image quality and the robustness of the extracted watermarks. We have obtained promising simulation results that demonstrate the utility and practicality of our algorithm. (C) 2011 Elsevier Inc. All rights reserved.
On Deployment of Wireless Sensors on 3-D Terrains to Maximize Sensing Coverage by Utilizing Cat Swarm Optimization With Wavelet Transform. In this paper, a deterministic sensor deployment method based on wavelet transform (WT) is proposed. It aims to maximize the quality of coverage of a wireless sensor network while deploying a minimum number of sensors on a 3-D surface. For this purpose, a probabilistic sensing model and Bresenham's line of sight algorithm are utilized. The WT is realized by an adaptive thresholding approach for the generation of the initial population. Another novel aspect of the paper is that the method followed utilizes a Cat Swarm Optimization (CSO) algorithm, which mimics the behavior of cats. We have modified the CSO algorithm so that it can be used for sensor deployment problems on 3-D terrains. The performance of the proposed algorithm is compared with the Delaunay Triangulation and Genetic Algorithm based methods. The results reveal that CSO based sensor deployment which utilizes the wavelet transform method is a powerful and successful method for sensor deployment on 3-D terrains.
FPGA-Based Parallel Metaheuristic PSO Algorithm and Its Application to Global Path Planning for Autonomous Robot Navigation This paper presents a field-programmable gate array (FPGA)-based parallel metaheuristic particle swarm optimization algorithm (PPSO) and its application to global path planning for autonomous robot navigating in structured environments with obstacles. This PPSO consists of three parallel PSOs along with a communication operator in one FPGA chip. The parallel computing architecture takes advantages of maintaining better population diversity and inhibiting premature convergence in comparison with conventional PSOs. The collision-free discontinuous path generated from the PPSO planner is then smoothed using the cubic B-spline and system-on-a-programmable-chip (SoPC) technology. Experimental results are conducted to show the merit of the proposed FPGA-based PPSO path planner and smoother for global path planning of autonomous mobile robot navigation.
EHCR-FCM: Energy Efficient Hierarchical Clustering and Routing using Fuzzy C-Means for Wireless Sensor Networks Wireless Sensor Network (WSN) is a part of Internet of Things (IoT), and has been used for sensing and collecting the important information from the surrounding environment. Energy consumption in this process is the most important issue, which primarily depends on the clustering technique and packet routing strategy. In this paper, we propose an Energy efficient Hierarchical Clustering and Routing using Fuzzy C-Means (EHCR-FCM) which works on three-layer structure, and depends upon the centroid of the clusters and grids, relative Euclidean distances and residual energy of the nodes. This technique is useful for the optimal usage of energy by employing grid and cluster formation in a dynamic manner and energy-efficient routing. The fitness value of the nodes have been used in this proposed work to decide that whether it may work as the Grid Head (GH) or Cluster Head (CH). The packet routing strategy of all the GHs depend upon the relative Euclidean distances among them, and also on their residual energy. In addition to this, we have also performed the energy consumption analysis, and found that our proposed approach is more energy efficient, better in terms of the number of cluster formation, network lifetime, and it also provides better coverage.
Hybrid Bird Swarm Optimized Quasi Affine Algorithm Based Node Location in Wireless Sensor Networks Wireless sensor networks (WSN) with the Internet of Things (IoT) play a vital key concept while performing the information transmission process. The WSN with IoT has been effectively utilized in different research contents such as network protocol selection, topology control, node deployment, location technology and network security, etc. Among that, node location is one of the crucial problems that need to be resolved to improve communication. The node location is directly influencing the network performance, lifetime and data sense. Therefore, this paper introduces the Bird Swarm Optimized Quasi-Affine Evolutionary Algorithm (BSOQAEA) to fix the node location problem in sensor networks. The proposed algorithm analyzes the node location, and incorporates the dynamic shrinking space process is to save time. The introduced evolutionary algorithm optimizes the node centroid location performed according to the received signal strength indications (RSSI). The created efficiency in the system is determined using high node location accuracy, minimum distance error, and location error.
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Adam: A Method for Stochastic Optimization. We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.
Untangling Blockchain: A Data Processing View of Blockchain Systems. Blockchain technologies are gaining massive momentum in the last few years. Blockchains are distributed ledgers that enable parties who do not fully trust each other to maintain a set of global states. The parties agree on the existence, values, and histories of the states. As the technology landscape is expanding rapidly, it is both important and challenging to have a firm grasp of what the core ...
Multivariate Short-Term Traffic Flow Forecasting Using Time-Series Analysis Existing time-series models that are used for short-term traffic condition forecasting are mostly univariate in nature. Generally, the extension of existing univariate time-series models to a multivariate regime involves huge computational complexities. A different class of time-series models called structural time-series model (STM) (in its multivariate form) has been introduced in this paper to develop a parsimonious and computationally simple multivariate short-term traffic condition forecasting algorithm. The different components of a time-series data set such as trend, seasonal, cyclical, and calendar variations can separately be modeled in STM methodology. A case study at the Dublin, Ireland, city center with serious traffic congestion is performed to illustrate the forecasting strategy. The results indicate that the proposed forecasting algorithm is an effective approach in predicting real-time traffic flow at multiple junctions within an urban transport network.
A novel full structure optimization algorithm for radial basis probabilistic neural networks. In this paper, a novel full structure optimization algorithm for radial basis probabilistic neural networks (RBPNN) is proposed. Firstly, a minimum volume covering hyperspheres (MVCH) algorithm is proposed to heuristically select the initial hidden layer centers of the RBPNN, and then the recursive orthogonal least square (ROLS) algorithm combined with the particle swarm optimization (PSO) algorithm is adopted to further optimize the initial structure of the RBPNN. Finally, the effectiveness and efficiency of our proposed algorithm are evaluated through a plant species identification task involving 50 plant species.
G2-type SRMPC scheme for synchronous manipulation of two redundant robot arms. In this paper, to remedy the joint-angle drift phenomenon for manipulation of two redundant robot arms, a novel scheme for simultaneous repetitive motion planning and control (SRMPC) at the joint-acceleration level is proposed, which consists of two subschemes. To do so, the performance index of each SRMPC subscheme is derived and designed by employing the gradient dynamics twice, of which a convergence theorem and its proof are presented. In addition, for improving the accuracy of the motion planning and control, position error, and velocity, error feedbacks are incorporated into the forward kinematics equation and analyzed via Zhang neural-dynamics method. Then the two subschemes are simultaneously reformulated as two quadratic programs (QPs), which are finally unified into one QP problem. Furthermore, a piecewise-linear projection equation-based neural network (PLPENN) is used to solve the unified QP problem, which can handle the strictly convex QP problem in an inverse-free manner. More importantly, via such a unified QP formulation and the corresponding PLPENN solver, the synchronism of two redundant robot arms is guaranteed. Finally, two given tasks are fulfilled by 2 three-link and 2 five-link planar robot arms, respectively. Computer-simulation results validate the efficacy and accuracy of the SRMPC scheme and the corresponding PLPENN solver for synchronous manipulation of two redundant robot arms.
Adaptive Fuzzy Control With Prescribed Performance for Block-Triangular-Structured Nonlinear Systems. In this paper, an adaptive fuzzy control method with prescribed performance is proposed for multi-input and multioutput block-triangular-structured nonlinear systems with immeasurable states. Fuzzy logic systems are adopted to identify the unknown nonlinear system functions. Adaptive fuzzy state observers are designed to solve the problem of unmeasured states, and a new observer-based output-feedb...
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.2
0.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
Missing Tag Identification in COTS RFID Systems: Bridging the Gap between Theory and Practice With the rapid development of radio frequency identification (RFID) technology, ever-increasing research effort has been dedicated to devising various RFID-enabled services. The missing tag identification, which is to identify all missing tags, is one of the most important services in many Internet-of-Things applications such as inventory management. Prior work on missing tag detection relies on hash functions implemented at individual tags. However, in reality hash functions are not supported by commercial off-the-shelf (COTS) RFID tags. To bridge this gap between theory and practice, this paper is devoted to detecting missing tags with COTS Gen2 devices. We first introduce a point-to-multipoint protocol, named P2M, that works in an analog frame slotted Aloha paradigm to interrogate tags and collect their electronic product codes (EPCs). A missing tag will be found if its EPC is not present in the collected ones. To reduce the time cost of P2M resulting from tag response collisions, we further present a collision-free point-to-point protocol, named P2P, that selectively specifies a tag to reply with its EPC in each slot. If the EPC is not received, this tag is regarded as missing. We develop two bitmask selection methods to enable the selective query while reducing communication overhead. We implement P2M and P2P with COTS RFID devices and evaluate their performance under diverse settings.
Revisiting unknown RFID tag identification in large-scale internet of things. RFID is a major prerequisite for the IoT, which connects physical objects with the Internet. Unknown tag identification is a fundamental problem in large-scale IoT systems, such as automatic stock management and object tracking. Recently, several protocols have been proposed to discern unknown tags. In this article, we overview the underlying mechanism of previous protocols, and pinpoint the challenging issues together with possible solutions. Then we propose a scheme using a Bloom filter that significantly reduces the data transmission during the identification process. We further present the preliminary results to illuminate the Bloom-filter- based architecture.
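To make the Bloom-filter idea concrete, here is a minimal, self-contained sketch (the filter size, hash count, and tag ID format are arbitrary assumptions, not the parameters from the article): a reader-side filter built from the known inventory lets any tag that fails the membership test be declared unknown immediately, at the cost of occasional false positives.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: set membership with false positives but no false negatives."""

    def __init__(self, m_bits=1024, k_hashes=4):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits)          # one byte per bit, for simplicity

    def _positions(self, item):
        for seed in range(self.k):
            digest = hashlib.sha256(f"{seed}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

# The reader encodes the inventory list of known tag IDs...
known_tags = [f"EPC-{i:04d}" for i in range(500)]
bf = BloomFilter()
for tag in known_tags:
    bf.add(tag)

# ...and any tag whose ID fails the membership test is certainly not in the inventory.
print(bf.might_contain("EPC-0007"))   # True (known tag)
print(bf.might_contain("EPC-9999"))   # False with high probability (unknown tag)
```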
Efficient and Reliable Missing Tag Identification for Large-Scale RFID Systems With Unknown Tags. Radio frequency identification (RFID), which promotes the rapid development of Internet of Things (IoT), has been an emerging technology and widely deployed in various applications such as warehouse management, supply chain management, and social networks. In such applications, objects can be efficiently managed by attaching them with low-cost RFID tags and carefully monitoring them. The missing o...
Efficient Unknown Tag Detection in Large-Scale RFID Systems With Unreliable Channels. One of the most important applications of radio frequency identification (RFID) technology is to detect unknown tags brought by new tagged items, misplacement, or counterfeit tags. While unknown tag identification is able to pinpoint all the unknown tags, probabilistic unknown tag detection is preferred in large-scale RFID systems that need to be frequently checked up, e.g., real-time inventory mo...
On Efficient Tree-Based Tag Search in Large-Scale RFID Systems Tag search, which is to find a particular set of tags in a radio frequency identification (RFID) system, is a key service in such important Internet-of-Things applications as inventory management. When the system scale is large with a massive number of tags, deterministic search can be prohibitively expensive, and probabilistic search has been advocated, seeking a balance between reliability and time efficiency. Given a failure probability of 1/O(K), where K is the number of tags, state-of-the-art solutions have achieved a time cost of O(K log K) through multi-round hashing and verification. Further improvement, however, faces a critical bottleneck of repetitively verifying each individual target tag in each round. In this paper, we present an efficient tree-based tag search (TTS) that approaches O(K) through batched verification. The key novelty of TTS is to smartly hash multiple tags into each internal tree node and adaptively control the node degrees. It conducts bottom-up search to verify tags group by group with the number of groups decreasing rapidly. Furthermore, we design an enhanced tag search scheme, referred to as TTS+, to overcome the negative impact of asymmetric tag set sizes on the time efficiency of TTS. TTS+ first rules out partial ineligible tags with a filtering vector and feeds the shrunk tag sets into TTS. We derive the optimal hash code length and node degrees in TTS to accommodate hash collisions and the optimal filtering vector size to minimize the time cost of TTS+. The superiority of TTS and TTS+ over the state-of-the-art solution is demonstrated through both theoretical analysis and extensive simulations. Specifically, as the reliability demand scales up, the time efficiency of TTS+ reaches up to nearly 2 times that of TTS.
Efficiently and Completely Identifying Missing Key Tags for Anonymous RFID Systems. Radio frequency identification (RFID) systems can be applied to efficiently identify the missing items by attaching them with tags. Prior missing tag identification protocols concentrated on identifying all of the tags. However, there may be some scenarios in which we just care about the key tags instead of all tags, making it inefficient to merely identify the missing key tags due to the interfer...
Identification-free batch authentication for RFID tags Cardinality estimation and tag authentication are two major issues in large-scale Radio Frequency Identification (RFID) systems. While there exist both per-tag and probabilistic approaches for the cardinality estimation, the RFID-oriented authentication protocols are mainly per-tag based: the reader authenticates one tag at each time. For a batch of tags, current RFID systems have to identify them and then authenticate each tag sequentially, incurring large volume of authentication data and huge communication cost. We study the RFID batch authentication issue and propose the first probabilistic approach, termed as Single Echo based Batch Authentication (SEBA), to meet the requirement of prompt and reliable batch authentications in large scale RFID applications, e.g., the anti-counterfeiting solution. Without the need of identifying tags, SEBA provides a provable probabilistic guarantee that the percentage of potential counterfeit products is under the user-defined threshold. The experimental result demonstrates the effectiveness of SEBA in fast batch authentications and significant improvement compared to existing approaches.
A survey on sensor networks The advancement in wireless communications and electronics has enabled the development of low-cost sensor networks. The sensor networks can be used for various application areas (e.g., health, military, home). For different application areas, there are different technical issues that researchers are currently resolving. The current state of the art of sensor networks is captured in this article, where solutions are discussed under their related protocol stack layer sections. This article also points out the open research issues and intends to spark new interests and developments in this field.
Energy-Aware Task Offloading and Resource Allocation for Time-Sensitive Services in Mobile Edge Computing Systems Mobile Edge Computing (MEC) is a promising architecture to reduce the energy consumption of mobile devices and provide satisfactory quality-of-service to time-sensitive services. How to jointly optimize task offloading and resource allocation to minimize the energy consumption subject to the latency requirement remains an open problem, which motivates this paper. When the latency constraint is tak...
On signatures of knowledge In a traditional signature scheme, a signature σ on a message m is issued under a public key PK, and can be interpreted as follows: “The owner of the public key PK and its corresponding secret key has signed message m.” In this paper we consider schemes that allow one to issue signatures on behalf of any NP statement, that can be interpreted as follows: “A person in possession of a witness w to the statement that x ∈L has signed message m.” We refer to such schemes as signatures of knowledge. We formally define the notion of a signature of knowledge. We begin by extending the traditional definition of digital signature schemes, captured by Canetti's ideal signing functionality, to the case of signatures of knowledge. We then give an alternative definition in terms of games that also seems to capture the necessary properties one may expect from a signature of knowledge. We then gain additional confidence in our two definitions by proving them equivalent. We construct signatures of knowledge under standard complexity assumptions in the common-random-string model. We then extend our definition to allow signatures of knowledge to be nested i.e., a signature of knowledge (or another accepting input to a UC-realizable ideal functionality) can itself serve as a witness for another signature of knowledge. Thus, as a corollary, we obtain the first delegatable anonymous credential system, i.e., a system in which one can use one's anonymous credentials as a secret key for issuing anonymous credentials to others.
An evaluation of direct attacks using fake fingers generated from ISO templates This work reports a vulnerability evaluation of a highly competitive ISO matcher to direct attacks carried out with fake fingers generated from ISO templates. Experiments are carried out on a fingerprint database acquired in a real-life scenario and show that the evaluated system is highly vulnerable to the proposed attack scheme, granting access in over 75% of the attempts (for a high-security operating point). Thus, the study disproves the popular belief of minutiae templates non-reversibility and raises a key vulnerability issue in the use of non-encrypted standard templates. (This article is an extended version of Galbally et al., 2008, which was awarded with the IBM Best Student Paper Award in the track of Biometrics at ICPR 2008).
A Framework of Joint Mobile Energy Replenishment and Data Gathering in Wireless Rechargeable Sensor Networks Recent years have witnessed the rapid development and proliferation of techniques on improving energy efficiency for wireless sensor networks. Although these techniques can relieve the energy constraint on wireless sensors to some extent, the lifetime of wireless sensor networks is still limited by sensor batteries. Recent studies have shown that energy rechargeable sensors have the potential to provide perpetual network operations by capturing renewable energy from external environments. However, the low output of energy capturing devices can only provide intermittent recharging opportunities to support low-rate data services due to spatial-temporal, geographical or environmental factors. To provide steady and high recharging rates and achieve energy efficient data gathering from sensors, in this paper, we propose to utilize mobility for joint energy replenishment and data gathering. In particular, a multi-functional mobile entity, called SenCar in this paper, is employed, which serves not only as a mobile data collector that roams over the field to gather data via short-range communication but also as an energy transporter that charges static sensors on its migration tour via wireless energy transmissions. Taking advantage of SenCar's controlled mobility, we focus on the joint optimization of effective energy charging and high-performance data collections. We first study this problem in general networks with random topologies. We give a two-step approach for the joint design. In the first step, the locations of a subset of sensors are periodically selected as anchor points, where the SenCar will sequentially visit to charge the sensors at these locations and gather data from nearby sensors in a multi-hop fashion. To achieve a desirable balance between energy replenishment amount and data gathering latency, we provide a selection algorithm to search for a maximum number of anchor points where sensors hold the least battery energy, and meanwhile, by visiting them, the tour length of the SenCar is no more than a threshold. In the second step, we consider data gathering performance when the SenCar migrates among these anchor points. We formulate the problem into a network utility maximization problem and propose a distributed algorithm to adjust data rates at which sensors send buffered data to the SenCar, link scheduling and flow routing so as to adapt to the up-to-date energy replenishing status of sensors. Besides general networks, we also study a special scenario where sensors are regularly deployed. For this case we can provide a simplified solution of lower complexity by exploiting the symmetry of the topology. Finally, we validate the effectiveness of our approaches by extensive numerical results, which show that our solutions can achieve perpetual network operations and provide high network utility.
An indoor localization solution using Bluetooth RSSI and multiple sensors on a smartphone. In this paper, we propose an indoor positioning system using a Bluetooth receiver, an accelerometer, a magnetic field sensor, and a barometer on a smartphone. The Bluetooth receiver is used to estimate distances from beacons. The accelerometer and magnetic field sensor are used to trace the movement of moving people in the given space. The horizontal location of the person is determined by received signal strength indications (RSSIs) and the traced movement. The barometer is used to measure the vertical position where a person is located. By combining RSSIs, the traced movement, and the vertical position, the proposed system estimates the indoor position of moving people. In experiments, the proposed approach showed excellent performance in localization with an overall error of 4.8%.
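For illustration, the following sketch converts Bluetooth RSSI readings into rough distances with a standard log-distance path-loss model; the reference power and path-loss exponent are placeholder assumptions, not the calibrated values used in the paper.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Log-distance path-loss model: rssi = tx_power - 10 * n * log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

# Rough distance estimates to three beacons from their measured RSSI values.
for rssi in (-59, -70, -85):
    print(f"RSSI {rssi} dBm -> approx. {rssi_to_distance(rssi):.1f} m")
```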
A Hierarchical Architecture Using Biased Min-Consensus for USV Path Planning This paper proposes a hierarchical architecture using the biased min-consensus (BMC) method, to solve the path planning problem of unmanned surface vessel (USV). We take the fixed-point monitoring mission as an example, where a series of intermediate monitoring points should be visited once by USV. The whole framework incorporates the low-level layer planning the standard path between any two intermediate points, and the high-level fashion determining their visiting sequence. First, the optimal standard path in terms of voyage time and risk measure is planned by the BMC protocol, given that the corresponding graph is constructed with node state and edge weight. The USV will avoid obstacles or keep a certain distance safely, and arrive at the target point quickly. It is proven theoretically that the state of the graph will converge to be stable after finite iterations, i.e., the optimal solution can be found by BMC with low calculation complexity. Second, by incorporating the constraint of intermediate points, their visiting sequence is optimized by BMC again with the reconstruction of a new virtual graph based on the former planned results. The extensive simulation results in various scenarios also validate the feasibility and effectiveness of our method for autonomous navigation.
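The biased min-consensus update at the heart of this planner can be sketched in a few lines: non-target nodes repeatedly take the minimum of a neighbor's value plus the edge weight, while target nodes stay pinned at zero, so the node states converge to shortest-path costs after finitely many iterations. The graph and weights below are made-up toy data, not a scenario from the paper.

```python
import math

def biased_min_consensus(neighbors, weights, targets, n_nodes, max_iter=1000):
    """Iterate x_i <- min_j (x_j + w_ij) for non-target nodes; target nodes stay at 0.

    On convergence, x_i equals the shortest-path cost from node i to the nearest target,
    which is the property a BMC-based planner exploits.
    """
    x = [0.0 if i in targets else math.inf for i in range(n_nodes)]
    for _ in range(max_iter):
        x_new = list(x)
        for i in range(n_nodes):
            if i in targets:
                continue
            x_new[i] = min(x[j] + weights[(i, j)] for j in neighbors[i])
        if x_new == x:          # state has converged after finitely many iterations
            break
        x = x_new
    return x

# Tiny 4-node graph: 0--1--3 and 0--2--3, with node 3 as the target.
neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
weights = {(0, 1): 1.0, (1, 0): 1.0, (0, 2): 2.0, (2, 0): 2.0,
           (1, 3): 5.0, (3, 1): 5.0, (2, 3): 1.0, (3, 2): 1.0}
print(biased_min_consensus(neighbors, weights, targets={3}, n_nodes=4))
# -> [3.0, 4.0, 1.0, 0.0]: node 0 reaches the target via 0-2-3 at cost 3.
```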
1.2
0.2
0.2
0.2
0.2
0.2
0.028571
0
0
0
0
0
0
0
Efficient algorithms for Web services selection with end-to-end QoS constraints Service-Oriented Architecture (SOA) provides a flexible framework for service composition. Using standard-based protocols (such as SOAP and WSDL), composite services can be constructed by integrating atomic services developed independently. Algorithms are needed to select service components with various QoS levels according to some application-dependent performance requirements. We design a broker-based architecture to facilitate the selection of QoS-based services. The objective of service selection is to maximize an application-specific utility function under the end-to-end QoS constraints. The problem is modeled in two ways: the combinatorial model and the graph model. The combinatorial model defines the problem as a multidimension multichoice 0-1 knapsack problem (MMKP). The graph model defines the problem as a multiconstraint optimal path (MCOP) problem. Efficient heuristic algorithms for service processes of different composition structures are presented in this article and their performances are studied by simulations. We also compare the pros and cons between the two models.
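To give a feel for the combinatorial (MMKP-style) view of the problem, the sketch below uses a simple greedy heuristic: start from the highest-utility candidate for each task, then downgrade the choices with the smallest utility loss per unit of latency saved until the end-to-end latency budget holds. This is not one of the article's algorithms, just an illustration of the problem shape; the names and numbers are invented.

```python
def greedy_select(candidates_per_task, latency_budget):
    """candidates_per_task: list of lists of (name, utility, latency) tuples per abstract task.

    Greedy heuristic for utility maximization under an end-to-end latency constraint.
    Returns the chosen tuple per task, or None if no feasible downgrade exists.
    """
    chosen = [max(cands, key=lambda c: c[1]) for cands in candidates_per_task]

    def total_latency():
        return sum(c[2] for c in chosen)

    while total_latency() > latency_budget:
        best_swap = None
        for i, cands in enumerate(candidates_per_task):
            for cand in cands:
                saved = chosen[i][2] - cand[2]      # latency saved by downgrading task i
                lost = chosen[i][1] - cand[1]       # utility lost by downgrading task i
                if saved > 0:
                    ratio = lost / saved
                    if best_swap is None or ratio < best_swap[0]:
                        best_swap = (ratio, i, cand)
        if best_swap is None:
            return None                             # infeasible under the budget
        _, i, cand = best_swap
        chosen[i] = cand
    return chosen

tasks = [
    [("fast-A", 0.9, 120), ("cheap-A", 0.6, 40)],
    [("fast-B", 0.8, 100), ("cheap-B", 0.5, 30)],
]
print(greedy_select(tasks, latency_budget=150))
```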
A preliminary performance comparison of five machine learning algorithms for practical IP traffic flow classification The identification of network applications through observation of associated packet traffic flows is vital to the areas of network management and surveillance. Currently popular methods such as port number and payload-based identification exhibit a number of shortfalls. An alternative is to use machine learning (ML) techniques and identify network applications based on per-flow statistics, derived from payload-independent features such as packet length and inter-arrival time distributions. The performance impact of feature set reduction, using Consistency-based and Correlation-based feature selection, is demonstrated on Naïve Bayes, C4.5, Bayesian Network and Naïve Bayes Tree algorithms. We then show that it is useful to differentiate algorithms based on computational performance rather than classification accuracy alone, as although classification accuracy between the algorithms is similar, computational performance can differ significantly.
A Support Vector Machine Based Approach for Forecasting of Network Weather Services We present forecasting related results using a recently introduced technique called Support Vector Machines (SVM) for measurements of processing, memory, disk space, communication latency and bandwidth derived from Network Weather Services (NWS). We then compare the performance of support vector machines with the forecasting techniques existing in network weather services using a set of metrics like mean absolute error, mean square error among others. The models are used to make predictions for several future time steps as against the present network weather services method of just the immediate future time step. The number of future time steps for which the prediction is done is referred to as the depth of prediction set. The support vector machines forecasts are found to be more accurate and outperform the existing methods. The performance improvement using support vector machines becomes more pronounced as the depth of the prediction set increases. The data gathered is from a production environment (i.e., non-experimental).
End-to-end quality adaptation scheme based on QoE prediction for video streaming service in LTE networks How to measure the user's feeling about mobile video service and to improve the quality of experience (QoE) has become a concern of network operators and service providers. In this paper, we first investigate the QoE evaluation method for video streaming over Long-Term Evolution (LTE) networks, and propose an end-to-end video quality prediction model based on the gradient boosting machine. In the proposed QoE prediction model, cross-layer parameters extracted from the network layer, the application layer, video content and user equipment are taken into account. Validation results show that our proposed model outperforms the ITU-T G.1070 model with a smaller root mean squared error and a higher Pearson correlation coefficient. Second, a window-based bit rate adaptation scheme, which is implemented in the video streaming server, is proposed to improve the quality of video streaming service in LTE networks. In the proposed scheme, the encoding bit rate is adjusted according to two control parameters, the value of predicted QoE and the feedback congestion state of the network. Simulation results show that our proposed end-to-end quality adaptation scheme efficiently improves user-perceived quality compared to the scenarios with fixed bit rates.
Estimating Video Streaming QoE in the 5G Architecture Using Machine Learning Compared to earlier mobile network generations, the 5G system architecture has been significantly enhanced by the introduction of network analytics functionalities and ex- tended capabilities of interacting with third party Application Functions (AFs). Combining these capabilities, new features for Quality of Experience (QoE) estimation can be designed and introduced in next generation networks. It is, however, unclear how 5G networks can collect monitoring data and application metrics, how they correlate to each other, and which techniques can be used in 5G systems for QoE estimation. This paper studies the feasibility of Machine Learning (ML) techniques for QoE estimation and evaluates their performance for a mobile video streaming use-case. A simulator has been implemented with OMNeT++ for generating traces to (i) examine the relevance of features generated from 5G monitoring data and (ii) to study the QoE estimation accuracy (iii) for a variable number of used features.
A generation-based optimal restart strategy for surrogate-assisted social learning particle swarm optimization. Evolutionary algorithms provide a powerful tool for the solution of modern complex engineering optimization problems. In general, a great deal of evaluation effort often needs to be made in evolutionary optimization to locate a reasonable optimum. This poses a serious challenge to extending its application to computationally expensive problems. To alleviate this difficulty, surrogate-assisted evolutionary algorithms (SAEAs) have drawn great attention over the past decades. However, in order to ensure the performance of SAEAs, the use of appropriate model management is indispensable. This paper proposes a generation-based optimal restart strategy for a surrogate-assisted social learning particle swarm optimization (SL-PSO). In the proposed method, the SL-PSO restarts every few generations in the global radial-basis-function model landscape, and the best sample points archived in the database are employed to reinitialize the swarm at each restart. The promising individual with the best estimated fitness value is chosen for exact evaluation before each restart of the SL-PSO. The proposed method skillfully integrates the restart strategy, generation-based and individual-based model managements into a whole, whilst those three ingredients coordinate with each other, thus offering a powerful optimizer for the computationally expensive problems. To assess the performance of the proposed method, comprehensive experiments are conducted on a benchmark test suite of dimensions ranging from 10 to 100. Experimental results demonstrate that the proposed method shows superior performance in comparison with four state-of-the-art algorithms in a majority of benchmarks when only a limited computational budget is available.
Comprehensive Learning Particle Swarm Optimization Algorithm with Local Search for Multimodal Functions A comprehensive learning particle swarm optimizer (CLPSO) embedded with local search (LS) is proposed to pursue higher optimization performance by taking advantage of CLPSO's strong global search capability and LS's fast convergence ability. This work proposes an adaptive LS starting strategy by utilizing our proposed quasi-entropy index to address its key issue, i.e., when to start LS. The changes of the index as the optimization proceeds are analyzed in theory and via numerical tests. The proposed algorithm is tested on multimodal benchmark functions. Parameter sensitivity analysis is performed to demonstrate its robustness. The comparison results reveal an overall higher convergence rate and accuracy than those of CLPSO and state-of-the-art PSO variants.
Dendritic Neuron Model With Effective Learning Algorithms for Classification, Approximation, and Prediction. An artificial neural network (ANN) that mimics the information processing mechanisms and procedures of neurons in human brains has achieved a great success in many fields, e.g., classification, prediction, and control. However, traditional ANNs suffer from many problems, such as the hard understanding problem, the slow and difficult training problems, and the difficulty to scale them up. These problems motivate us to develop a new dendritic neuron model (DNM) by considering the nonlinearity of synapses, not only for a better understanding of a biological neuronal system, but also for providing a more useful method for solving practical problems. To achieve its better performance for solving problems, six learning algorithms including biogeography-based optimization, particle swarm optimization, genetic algorithm, ant colony optimization, evolutionary strategy, and population-based incremental learning are for the first time used to train it. The best combination of its user-defined parameters has been systemically investigated by using the Taguchi's experimental design method. The experiments on 14 different problems involving classification, approximation, and prediction are conducted by using a multilayer perceptron and the proposed DNM. The results suggest that the proposed learning algorithms are effective and promising for training DNM and thus make DNM more powerful in solving classification, approximation, and prediction problems.
A new approach for dynamic fuzzy logic parameter tuning in Ant Colony Optimization and its application in fuzzy control of a mobile robot Central idea is to avoid or slow down full convergence through the dynamic variation of parameters. Performance of different ACO variants was observed to choose one as the basis to the proposed approach. Convergence fuzzy controller with the objective of maintaining diversity to avoid premature convergence was created. Ant Colony Optimization is a population-based meta-heuristic that exploits a form of past performance memory that is inspired by the foraging behavior of real ants. The behavior of the Ant Colony Optimization algorithm is highly dependent on the values defined for its parameters. Adaptation and parameter control are recurring themes in the field of bio-inspired optimization algorithms. The present paper explores a new fuzzy approach for diversity control in Ant Colony Optimization. The main idea is to avoid or slow down full convergence through the dynamic variation of a particular parameter. The performance of different variants of the Ant Colony Optimization algorithm is analyzed to choose one as the basis to the proposed approach. A convergence fuzzy logic controller with the objective of maintaining diversity at some level to avoid premature convergence is created. Encouraging results on several traveling salesman problem instances and its application to the design of fuzzy controllers, in particular the optimization of membership functions for a unicycle mobile robot trajectory control are presented with the proposed method.
Cooperative Cleaners: A Study in Ant Robotics In the world of living creatures, simple-minded animals often cooperate to achieve common goals with amazing performance. One can consider this idea in the context of robotics, and suggest models for programming goal-oriented behavior into the members of a group of simple robots lacking global supervision. This can be done by controlling the local interactions between the robot agents, to have them jointly carry out a given mission. As a test case we analyze the problem of many simple robots cooperating to clean the dirty floor of a non-convex region in Z2, using the dirt on the floor as the main means of inter-robot communication.
Harmony search algorithm for solving Sudoku Harmony search (HS) algorithm was applied to solving Sudoku puzzle. The HS is an evolutionary algorithm which mimics musicians' behaviors such as random play, memory-based play, and pitch-adjusted play when they perform improvisation. Sudoku puzzles in this study were formulated as an optimization problem with number-uniqueness penalties. HS could successfully solve the optimization problem after 285 function evaluations, taking 9 seconds. Also, sensitivity analysis of HS parameters was performed to obtain a better idea of algorithm parameter values.
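A generic harmony search loop with the three improvisation moves mentioned above (memory-based play, pitch adjustment, random play) might look as follows; the parameter values and the sphere objective are typical placeholders rather than the Sudoku penalty formulation used in the study.

```python
import random

def harmony_search(objective, n_vars, bounds, hms=20, hmcr=0.9, par=0.3,
                   bandwidth=0.1, iterations=2000, seed=0):
    """Generic harmony search minimizing `objective` over continuous bounds."""
    rng = random.Random(seed)
    low, high = bounds
    memory = [[rng.uniform(low, high) for _ in range(n_vars)] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iterations):
        new = []
        for d in range(n_vars):
            if rng.random() < hmcr:                       # memory-based play
                value = memory[rng.randrange(hms)][d]
                if rng.random() < par:                    # pitch-adjusted play
                    value += rng.uniform(-bandwidth, bandwidth)
            else:                                         # random play
                value = rng.uniform(low, high)
            new.append(min(max(value, low), high))
        new_score = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if new_score < scores[worst]:                     # replace the worst harmony
            memory[worst], scores[worst] = new, new_score
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Minimize a simple sphere function as a stand-in for a number-uniqueness penalty.
solution, value = harmony_search(lambda x: sum(v * v for v in x), n_vars=5, bounds=(-5.0, 5.0))
print(value)
```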
Robust and Imperceptible Dual Watermarking for Telemedicine Applications In this paper, the effects of different error correction codes on the robustness and imperceptibility of a discrete wavelet transform and singular value decomposition based dual watermarking scheme are investigated. Text and image watermarks are embedded into the cover radiological image for their potential application in secure and compact medical data transmission. Four different error correcting codes such as Hamming, the Bose, Ray-Chaudhuri, Hocquenghem (BCH), the Reed-Solomon and hybrid error correcting (BCH and repetition code) codes are considered for encoding of the text watermark in order to achieve additional robustness for sensitive text data such as patient identification code. Performance of the proposed algorithm is evaluated against a number of signal processing attacks by varying the strength of watermarking and cover image modalities. The experimental results demonstrate that this algorithm provides better robustness without affecting the quality of the watermarked image. This algorithm combines the advantages and removes the disadvantages of the two transform techniques. Out of the three error correcting codes tested, it has been found that Reed-Solomon shows the best performance. Further, a hybrid model of two of the error correcting codes (BCH and repetition code) is concatenated and implemented. It is found that the hybrid code achieves better results in terms of robustness. This paper provides a detailed analysis of the obtained experimental results.
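As a small illustration of the repetition-code component of the hybrid scheme, the sketch below repeats each watermark bit and recovers it by majority vote; the repetition factor of 3 is an assumption chosen for the example, not necessarily the one used in the paper.

```python
def repetition_encode(bits, r=3):
    """Repeat each watermark bit r times before embedding."""
    return [b for b in bits for _ in range(r)]

def repetition_decode(received, r=3):
    """Majority-vote each group of r received bits to recover the watermark."""
    return [1 if sum(received[i:i + r]) > r // 2 else 0
            for i in range(0, len(received), r)]

watermark = [1, 0, 1, 1]
channel = repetition_encode(watermark)          # e.g. embedded in DWT-SVD coefficients
channel[4] ^= 1                                 # one bit flipped by an attack
print(repetition_decode(channel) == watermark)  # True: the single error is corrected
```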
Adversarial Example Generation with Syntactically Controlled Paraphrase Networks. We propose syntactically controlled paraphrase networks (SCPNs) and use them to generate adversarial examples. Given a sentence and a target syntactic form (e.g., a constituency parse), SCPNs are trained to produce a paraphrase of the sentence with the desired syntax. We show it is possible to create training data for this task by first doing backtranslation at a very large scale, and then using a parser to label the syntactic transformations that naturally occur during this process. Such data allows us to train a neural encoder-decoder model with extra inputs to specify the target syntax. A combination of automated and human evaluations show that SCPNs generate paraphrases that follow their target specifications without decreasing paraphrase quality when compared to baseline (uncontrolled) paraphrase systems. Furthermore, they are more capable of generating syntactically adversarial examples that both (1) fool pretrained models and (2) improve the robustness of these models to syntactic variation when used to augment their training data.
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal to develop interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which rise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in specific, in this paper, we enfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidate it based on our comprehensive empirical experience with research in kindergarten settings, both laboratory and real-world contexts, and (iii) provide a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
1.1055
0.1
0.1
0.1
0.05
0.005
0.003333
0.000333
0
0
0
0
0
0
When Intrusion Detection Meets Blockchain Technology: A Review. With the purpose of identifying cyber threats and possible incidents, intrusion detection systems (IDSs) are widely deployed in various computer networks. In order to enhance the detection capability of a single IDS, collaborative intrusion detection networks (or collaborative IDSs) have been developed, which allow IDS nodes to exchange data with each other. However, data and trust management still remain two challenges for current detection architectures, which may degrade the effectiveness of such detection systems. In recent years, blockchain technology has shown its adaptability in many fields, such as supply chain management, international payment, interbanking, and so on. As blockchain can protect the integrity of data storage and ensure process transparency, it has a potential to be applied to intrusion detection domain. Motivated by this, this paper provides a review regarding the intersection of IDSs and blockchains. In particular, we introduce the background of intrusion detection and blockchain, discuss the applicability of blockchain to intrusion detection, and identify open challenges in this direction.
Witness indistinguishable and witness hiding protocols
Proofs of Storage from Homomorphic Identification Protocols Proofs of storage (PoS) are interactive protocols allowing a client to verify that a server faithfully stores a file. Previous work has shown that proofs of storage can be constructed from any homomorphic linear authenticator (HLA). The latter, roughly speaking, are signature/message authentication schemes where `tags' on multiple messages can be homomorphically combined to yield a `tag' on any linear combination of these messages. We provide a framework for building public-key HLAs from any identification protocol satisfying certain homomorphic properties. We then show how to turn any public-key HLA into a publicly-verifiable PoS with communication complexity independent of the file length and supporting an unbounded number of verifications. We illustrate the use of our transformations by applying them to a variant of an identification protocol by Shoup, thus obtaining the first unbounded-use PoS based on factoring (in the random oracle model).
Publicly Verifiable Computation of Polynomials Over Outsourced Data With Multiple Sources. Among all types of computations, the polynomial function evaluation is a fundamental, yet an important one due to its wide usage in the engineering and scientific problems. In this paper, we investigate publicly verifiable outsourced computation for polynomial evaluation with the support of multiple data sources. Our proposed verification scheme is universally applicable to all types of polynomial...
Betrayal, Distrust, and Rationality: Smart Counter-Collusion Contracts for Verifiable Cloud Computing. Cloud computing has become an irreversible trend. Together comes the pressing need for verifiability, to assure the client the correctness of computation outsourced to the cloud. Existing verifiable computation techniques all have a high overhead, thus if being deployed in the clouds, would render cloud computing more expensive than the on-premises counterpart. To achieve verifiability at a reasonable cost, we leverage game theory and propose a smart contract based solution. In a nutshell, a client lets two clouds compute the same task, and uses smart contracts to stimulate tension, betrayal and distrust between the clouds, so that rational clouds will not collude and cheat. In the absence of collusion, verification of correctness can be done easily by crosschecking the results from the two clouds. We provide a formal analysis of the games induced by the contracts, and prove that the contracts will be effective under certain reasonable assumptions. By resorting to game theory and smart contracts, we are able to avoid heavy cryptographic protocols. The client only needs to pay two clouds to compute in the clear, and a small transaction fee to use the smart contracts. We also conducted a feasibility study that involves implementing the contracts in Solidity and running them on the official Ethereum network.
A Blockchain Based Truthful Incentive Mechanism for Distributed P2P Applications. In distributed peer-to-peer (P2P) applications, peers self-organize and cooperate to effectively complete certain tasks such as forwarding files, delivering messages, or uploading data. Nevertheless, users are selfish in nature and they may refuse to cooperate due to their concerns about energy and bandwidth consumption. Thus each user should receive a satisfying reward to compensate its resource consumption for cooperation. However, suitable incentive mechanisms that can meet the diverse requirements of users in dynamic and distributed P2P environments are still missing. On the other hand, we observe that Blockchain is a decentralized secure digital ledger of economic transactions that can be programmed to record more than just financial transactions, and that Blockchain-based cryptocurrencies are gaining more and more market capitalization. Therefore, in this paper, we propose a Blockchain based truthful incentive mechanism for distributed P2P applications that applies a cryptocurrency such as Bitcoin to incentivize users for cooperation. In this mechanism, users who help with a successful delivery get rewarded. As users and miners in the Blockchain P2P system may exhibit selfish actions or collude with each other, we propose a secure validation method and a pricing strategy, and integrate them into our incentive mechanism. Through a game theoretical analysis and evaluation study, we demonstrate the effectiveness and security strength of our proposed incentive mechanism.
A survey of intrusion detection systems based on ensemble and hybrid classifiers. Due to the frequency of malicious network activities and network policy violations, intrusion detection systems (IDSs) have emerged as a group of methods that combats the unauthorized use of a network's resources. Recent advances in information technology have produced a wide variety of machine learning methods, which can be integrated into an IDS. This study presents an overview of intrusion classification algorithms, based on popular methods in the field of machine learning. Specifically, various ensemble and hybrid techniques were examined, considering both homogeneous and heterogeneous types of ensemble methods. In addition, special attention was paid to those ensemble methods that are based on voting techniques, as those methods are the simplest to implement and generally produce favorable results. A survey of recent literature shows that hybrid methods, where feature selection or a feature reduction component is combined with a single-stage classifier, have become commonplace. Therefore, the scope of this study has been expanded to encompass hybrid classifiers.
A Survey on Big Data Market: Pricing, Trading and Protection. Big data is considered to be the key to unlocking the next great waves of growth in productivity. The amount of collected data in our world has been exploding due to a number of new applications and technologies that permeate our daily lives, including mobile and social networking applications, and Internet of Thing-based smart-world systems (smart grid, smart transportation, smart cities, and so on). With the exponential growth of data, how to efficiently utilize the data becomes a critical issue. This calls for the development of a big data market that enables efficient data trading. Via pushing data as a kind of commodity into a digital market, the data owners and consumers are able to connect with each other, sharing and further increasing the utility of data. Nonetheless, to enable such an effective market for data trading, several challenges need to be addressed, such as determining proper pricing for the data to be sold or purchased, designing a trading platform and schemes to enable the maximization of social welfare of trading participants with efficiency and privacy preservation, and protecting the traded data from being resold to maintain the value of the data. In this paper, we conduct a comprehensive survey on the lifecycle of data and data trading. To be specific, we first study a variety of data pricing models, categorize them into different groups, and conduct a comprehensive comparison of the pros and cons of these models. Then, we focus on the design of data trading platforms and schemes, supporting efficient, secure, and privacy-preserving data trading. Finally, we review digital copyright protection mechanisms, including digital copyright identifier, digital rights management, digital encryption, watermarking, and others, and outline challenges in data protection in the data trading lifecycle.
Model-based periodic event-triggered control for linear systems Periodic event-triggered control (PETC) is a control strategy that combines ideas from conventional periodic sampled-data control and event-triggered control. By communicating periodically sampled sensor and controller data only when needed to guarantee stability or performance properties, PETC is capable of reducing the number of transmissions significantly, while still retaining a satisfactory closed-loop behavior. In this paper, we will study observer-based controllers for linear systems and propose advanced event-triggering mechanisms (ETMs) that will reduce communication in both the sensor-to-controller channels and the controller-to-actuator channels. By exploiting model-based computations, the new classes of ETMs will outperform existing ETMs in the literature. To model and analyze the proposed classes of ETMs, we present two frameworks based on perturbed linear and piecewise linear systems, leading to conditions for global exponential stability and ℓ2-gain performance of the resulting closed-loop systems in terms of linear matrix inequalities. The proposed analysis frameworks can be used to make tradeoffs between the network utilization on the one hand and the performance in terms of ℓ2-gains on the other. In addition, we will show that the closed-loop performance realized by an observer-based controller, implemented in a conventional periodic time-triggered fashion, can be recovered arbitrarily closely by a PETC implementation. This provides a justification for emulation-based design. Next to centralized model-based ETMs, we will also provide a decentralized setup suitable for large-scale systems, where sensors and actuators are physically distributed over a wide area. The improvements realized by the proposed model-based ETMs will be demonstrated using numerical examples.
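The flavor of periodic event-triggering can be conveyed with a toy scalar example: the state is sampled every h seconds but transmitted only when a relative holding-error condition is violated. The plant numbers and the trigger threshold below are assumptions, and the sketch omits the observer-based design and the LMI analysis of the paper.

```python
import math

# Scalar plant x' = a*x + b*u with state feedback u = -k*x_hat, where x_hat is the value
# most recently transmitted to the controller (zero-order hold between transmissions).
a, b, k = 0.5, 1.0, 2.0
h, sigma = 0.05, 0.2          # sampling period and (assumed) relative trigger threshold
x, x_hat = 1.0, 1.0
transmissions = 0

for _ in range(400):
    u = -k * x_hat
    # Exact discretization of the plant over one sampling period with u held constant.
    x = math.exp(a * h) * x + (math.exp(a * h) - 1.0) / a * b * u
    # Periodic event-triggering: transmit only if the holding error has grown too large.
    if abs(x - x_hat) > sigma * abs(x):
        x_hat = x
        transmissions += 1

print(f"state after 20 s: {x:.6f}, transmissions: {transmissions} out of 400 samples")
```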
A feature-based robust digital image watermarking scheme A robust digital image watermarking scheme that combines image feature extraction and image normalization is proposed. The goal is to resist both geometric distortion and signal processing attacks. We adopt a feature extraction method called Mexican hat wavelet scale interaction. The extracted feature points can survive a variety of attacks and be used as reference points for both watermark embedding and detection. The normalized image of an image (object) is nearly invariant with respect to rotations. As a result, the watermark detection task can be much simplified when it is applied to the normalized image. However, because image normalization is sensitive to image local variation, we apply image normalization to nonoverlapped image disks separately. The disks are centered at the extracted feature points. Several copies of a 16-bit watermark sequence are embedded in the original image to improve the robustness of watermarks. Simulation results show that our scheme can survive low-quality JPEG compression, color reduction, sharpening, Gaussian filtering, median filtering, row or column removal, shearing, rotation, local warping, cropping, and linear geometric transformations.
Prediction, Detection, and Correction of Faraday Rotation in Full-Polarimetric L-Band SAR Data With the synthetic aperture radar (SAR) sensor PALSAR onboard the Advanced Land Observing Satellite, a new full-polarimetric spaceborne L-band SAR instrument has been launched into orbit. At L-band, Faraday rotation (FR) can reach significant values, degrading the quality of the received SAR data. One-way rotations exceeding 25 deg are likely to happen during the lifetime of PALSAR, which will significantly reduce the accuracy of geophysical parameter recovery if uncorrected. Therefore, the estimation and correction of FR effects is a prerequisite for data quality and continuity. In this paper, methods for estimating FR are presented and analyzed. The first unambiguous detection of FR in SAR data is presented. A set of real data examples indicates the quality and sensitivity of FR estimation from PALSAR data, allowing the measurement of FR with high precision in areas where such measurements were previously inaccessible. In examples, we present the detection of kilometer-scale ionospheric disturbances, a spatial scale that is not detectable by ground-based GPS measurements. An FR prediction method is presented and validated. Approaches to correct for the estimated FR effects are applied, and their effectiveness is tested on real data.
Haptic feedback for enhancing realism of walking simulations. In this paper, we describe several experiments whose goal is to evaluate the role of plantar vibrotactile feedback in enhancing the realism of walking experiences in multimodal virtual environments. To achieve this goal we built an interactive and a noninteractive multimodal feedback system. While during the use of the interactive system subjects physically walked, during the use of the noninteractive system the locomotion was simulated while subjects were sitting on a chair. In both the configurations subjects were exposed to auditory and audio-visual stimuli presented with and without the haptic feedback. Results of the experiments provide a clear preference toward the simulations enhanced with haptic feedback showing that the haptic channel can lead to more realistic experiences in both interactive and noninteractive configurations. The majority of subjects clearly appreciated the added feedback. However, some subjects found the added feedback unpleasant. This might be due, on one hand, to the limits of the haptic simulation and, on the other hand, to the different individual desire to be involved in the simulations. Our findings can be applied to the context of physical navigation in multimodal virtual environments as well as to enhance the user experience of watching a movie or playing a video game.
Deep Learning in Mobile and Wireless Networking: A Survey. The rapid uptake of mobile devices and the rising popularity of mobile applications and services pose unprecedented demands on mobile and wireless networking infrastructure. Upcoming 5G systems are evolving to support exploding mobile traffic volumes, real-time extraction of fine-grained analytics, and agile management of network resources, so as to maximize user experience. Fulfilling these tasks is challenging, as mobile environments are increasingly complex, heterogeneous, and evolving. One potential solution is to resort to advanced machine learning techniques, in order to help manage the rise in data volumes and algorithm-driven applications. The recent success of deep learning underpins new and powerful tools that tackle problems in this space. In this paper we bridge the gap between deep learning and mobile and wireless networking research, by presenting a comprehensive survey of the crossovers between the two areas. We first briefly introduce essential background and state-of-the-art in deep learning techniques with potential applications to networking. We then discuss several techniques and platforms that facilitate the efficient deployment of deep learning onto mobile systems. Subsequently, we provide an encyclopedic review of mobile and wireless networking research based on deep learning, which we categorize by different domains. Drawing from our experience, we discuss how to tailor deep learning to mobile environments. We complete this survey by pinpointing current challenges and open future directions for research.
Energy harvesting algorithm considering max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with poor energy always have a bad effect on the data rate or max flow. These nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we assume an energy harvesting WSN environment to investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy harvesting WSN environments, sensor nodes can obtain energy replenishment by using MCs or collecting energy from nature by themselves. In our research, we use MCs to improve the energy of the sensor nodes by performing multiple rounds of unified scheduling, and finally achieve the purpose of increasing the max flow at sinks. Firstly, we model this problem as a Linear Programming (LP) problem to search for the max flow in a round of charging scheduling and prove that the problem is NP-hard. In order to solve the problem, we propose a heuristic approach: deploying MCs in units of paths with the lowest energy node priority. To reduce the energy consumption of MCs and increase the charging efficiency, we also take the optimization of MCs’ moving distance into consideration. Finally, we extend the method to multiple rounds of scheduling called BottleNeck. Simulation results show that BottleNeck performs well at increasing max flow.
1.050136
0.050545
0.050545
0.050545
0.050545
0.05
0.025
0.000197
0
0
0
0
0
0
Stochastic Geometry Approach Towards Interference Management And Control In Cognitive Radio Network: A Survey Interference management and control in the cognitive radio network (CRN) is a necessity if the activities of primary users must be protected from excessive interference resulting from the activities of neighboring users. Hence, interference experienced in wireless communication networks has earlier been characterized using the traditional grid model. Such models, however, lead to non-tractable analyses, which often require unrealistic assumptions, leading to inaccurate results. These limitations of the traditional grid models mean that the adoption of stochastic geometry (SG) continues to receive a lot of attention owing to its ability to capture the distribution of users properly, while producing scalable and tractable analyses for various performance metrics of interest. Despite the importance of CRN to next-generation networks, no survey of the existing literature has been done when it comes to SG-based interference management and control in the domain of CRN. Such a survey is, however, necessary to provide the current state of the art as well as future directions. This paper hence presents a comprehensive survey related to the use of SG to effect interference management and control in CRN. We show that most of the existing approaches in CRN failed to capture the relationship between the spatial location of users and temporal traffic dynamics and are only restricted to interference modeling among non-mobile users with full buffers. This survey hence encourages further research in this area. Finally, this paper provides open problems and future directions to aid in finding more solutions to achieve efficient and effective usage of the scarce spectral resources for wireless communications.
Interference Alignment and Degrees of Freedom of the K-User Interference Channel For the fully connected K user wireless interference channel where the channel coefficients are time-varying and are drawn from a continuous distribution, the sum capacity is characterized as C(SNR)=K/2log(SNR)+o(log(SNR)) . Thus, the K user time-varying interference channel almost surely has K/2 degrees of freedom. Achievability is based on the idea of interference alignment. Examples are also pr...
Stochastic Power Adaptation with Multiagent Reinforcement Learning for Cognitive Wireless Mesh Networks As the scarce spectrum resource is becoming overcrowded, cognitive radio indicates great flexibility to improve the spectrum efficiency by opportunistically accessing the authorized frequency bands. One of the critical challenges for operating such radios in a network is how to efficiently allocate transmission powers and frequency resource among the secondary users (SUs) while satisfying the quality-of-service constraints of the primary users. In this paper, we focus on the noncooperative power allocation problem in cognitive wireless mesh networks formed by a number of clusters with the consideration of energy efficiency. Due to the SUs' dynamic and spontaneous properties, the problem is modeled as a stochastic learning process. We first extend the single-agent Q-learning to a multiuser context, and then propose a conjecture-based multiagent Q-learning algorithm to achieve the optimal transmission strategies with only private and incomplete information. An intelligent SU performs Q-function updates based on the conjecture over the other SUs' stochastic behaviors. This learning algorithm provably converges given certain restrictions that arise during the learning procedure. Simulation experiments are used to verify the performance of our algorithm and demonstrate its effectiveness of improving the energy efficiency.
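A heavily stripped-down sketch of the idea, reduced to independent stateless Q-learning over discrete power levels with an energy-efficiency-style reward, is given below. The environment, reward shape, and parameters are toy assumptions, and the conjecture-based coupling of the actual algorithm is omitted.

```python
import random

random.seed(1)
power_levels = [0.1, 0.5, 1.0]             # discrete transmit powers (W), assumed
n_agents, episodes = 3, 5000
alpha, epsilon, noise = 0.1, 0.1, 0.1      # learning rate, exploration rate, noise power

# One row of Q-values per agent, one entry per power level (stateless formulation).
Q = [[0.0 for _ in power_levels] for _ in range(n_agents)]

def reward(agent, actions):
    """Energy-efficiency flavored reward: a crude rate proxy divided by transmit power."""
    p = power_levels[actions[agent]]
    interference = sum(power_levels[actions[j]] for j in range(n_agents) if j != agent)
    rate_proxy = 1.0 + p / (noise + interference)
    return rate_proxy / p

for _ in range(episodes):
    # Epsilon-greedy action selection for every agent.
    actions = [random.randrange(len(power_levels)) if random.random() < epsilon
               else max(range(len(power_levels)), key=lambda a: Q[i][a])
               for i in range(n_agents)]
    # Stateless Q-update (discount factor zero): Q <- Q + alpha * (r - Q).
    for i in range(n_agents):
        Q[i][actions[i]] += alpha * (reward(i, actions) - Q[i][actions[i]])

print([max(range(len(power_levels)), key=lambda a: Q[i][a]) for i in range(n_agents)])
```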
Coordinated Beamforming for Multi-Cell MIMO-NOMA. In this letter, two novel coordinated beamforming techniques are developed to enhance the performance of non-orthogonal multiple access combined with multiple-input multiple-output communication in the presence of inter-cell interference. The proposed schemes successfully deal with inter-cell interference, and increase the cell-edge users' throughput, which in turn improves user fairness. In addition, they increase the number of served users, which makes them suitable for 5G networks where massive connectivity and higher spectral efficiency are required. Numerical results confirm the effectiveness of the proposed algorithms.
Distributed Resource Allocation for D2D Communications Underlaying Cellular Networks in Time-Varying Environment. In this letter, we address joint channel and power allocation in a device-to-device (D2D) network underlaying a cellular network in a time-varying environment. A fully distributed solution, which does not require information exchange, is proposed to allocate channel and power levels to D2D pairs while ensuring the quality of service (QoS) of the cellular user equipments (CUEs). The problem is modeled as a Stackelberg game with pricing. At the leader level, base station sets prices for the channels to ensure the QoS of the CUEs. At the follower level, D2D pairs use an uncoupled stochastic learning algorithm to learn the channel indices and power levels while minimizing the weighted aggregate interference and the price paid. The follower game is shown to be an ordinal potential game. We perform simulations to study the convergence of the algorithm.
Interference-Aware Online Distributed Channel Selection for Multicluster FANET: A Potential Game Approach The deployment of large-scale UAV clusters to exploit cluster advantages will lead to intense competition for and congestion of spectrum resources, which results in mutual interference. This paper investigates the interference-aware online spectrum access problem for a multicluster Flying Ad-Hoc Network (FANET) under different network topologies, i.e., with time-varying locations of the UAV clusters. First, the problem is formulated as a data-assisted multistage channel access game with the goal of mitigating the interference among all UAV clusters and decreasing the channel switching cost during each slot. We prove that the game is an exact potential game that guarantees at least one pure-strategy Nash equilibrium, and we propose an interference-aware online channel preserving based concurrent best response (IOCPCBR) algorithm, an online distributed algorithm, to achieve the desirable solution. Finally, the simulation results demonstrate the validity and effectiveness of the multistage channel access game as well as the IOCPCBR algorithm.
Optimal distributed interference avoidance: potential game and learning. This article studies the problem of distributed interference avoidance (IA) through channel selection for distributed wireless networks, where mutual interference only occurs among nearby users. First, an interference graph is used to characterise the limited range of interference, and then the distributed IA problem is formulated as a graph colouring problem. Because solving the graph colouring problem is non-deterministic polynomial hard even in a centralised manner, the task of obtaining the optimal channel selection profile distributively is challenging. We formulate this problem as a channel selection game, which is proved to be an exact potential game with the weighted aggregate interference serving as the potential function. On the basis of this, a distributed learning algorithm is proposed to achieve the optimal channel selection profile that constitutes an optimal Nash equilibrium point of the channel selection game. The proposed learning algorithm is fully distributed because it needs information about neither the network topology nor the actions and the experienced interference of others. Simulation results show that the proposed potential game theoretic IA algorithm outperforms the existing algorithm because it minimises the aggregate weighted interference and achieves higher network rate. Copyright (c) 2012 John Wiley & Sons, Ltd.
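To make the potential-game idea above concrete, here is a minimal sketch of asynchronous best-response channel selection on an interference graph, where each user greedily picks the channel that minimizes its weighted interference from neighbors. In an exact potential game, each such unilateral improvement lowers the potential, so the dynamics settle at a pure-strategy Nash equilibrium. The graph, weights, and update schedule below are illustrative assumptions; this is not the paper's specific learning algorithm, which requires no topology information.

```python
import numpy as np

# Sketch of best-response channel selection on an interference graph,
# using weighted aggregate interference as the cost each user minimizes.
def best_response_channel_selection(adjacency, weights, n_channels, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    n = adjacency.shape[0]
    choice = rng.integers(n_channels, size=n)      # initial random channel for each user

    def local_cost(i, c):
        # weighted interference user i experiences from neighbors on channel c
        return sum(weights[i, j] for j in range(n)
                   if adjacency[i, j] and choice[j] == c)

    for _ in range(n_iters):
        i = rng.integers(n)                        # one user updates at a time (asynchronous)
        costs = [local_cost(i, c) for c in range(n_channels)]
        choice[i] = int(np.argmin(costs))          # best response
    return choice
```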
Constrained Kalman filtering for indoor localization of transport vehicles using floor-installed HF RFID transponders Localization of transport vehicles is an important issue for many intralogistics applications. The paper presents an inexpensive solution for indoor localization of vehicles. Global localization is realized by detection of RFID transponders, which are integrated in the floor. The paper presents a novel algorithm for fusing RFID readings with odometry using Constraint Kalman filtering. The paper presents experimental results with a Mecanum based omnidirectional vehicle on a NaviFloor® installation, which includes passive HF RFID transponders. The experiments show that the proposed Constraint Kalman filter provides a similar localization accuracy compared to a Particle filter but with much lower computational expense.
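A minimal sketch of the estimate-projection form of a constrained Kalman filter is given below, assuming a linear model: a standard predict/update pair plus a projection of the estimate onto a linear equality constraint D x = d. The matrices are placeholders; the paper's specific vehicle model and RFID measurement handling are not reproduced here.

```python
import numpy as np

# Linear Kalman filter steps with an equality-constraint projection, in the
# spirit of fusing odometry predictions with absolute fixes from floor tags.
def kalman_predict(x, P, F, Q):
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kalman_update(x, P, z, H, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def project_onto_constraint(x, P, D, d):
    # project the estimate onto the linear constraint D x = d
    # (standard estimate-projection form of the constrained Kalman filter)
    W = D @ P @ D.T
    correction = P @ D.T @ np.linalg.solve(W, D @ x - d)
    return x - correction
```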
Constrained Multiobjective Optimization for IoT-Enabled Computation Offloading in Collaborative Edge and Cloud Computing Internet-of-Things (IoT) applications are becoming more resource-hungry and latency-sensitive, which are severely constrained by limited resources of current mobile hardware. Mobile cloud computing (MCC) can provide abundant computation resources, while mobile-edge computing (MEC) aims to reduce the transmission latency by offloading complex tasks from IoT devices to nearby edge servers. It is sti...
Computer intrusion detection through EWMA for autocorrelated and uncorrelated data Reliability and quality of service from information systems has been threatened by cyber intrusions. To protect information systems from intrusions and thus assure reliability and quality of service, it is highly desirable to develop techniques that detect intrusions. Many intrusions manifest in anomalous changes in intensity of events occurring in information systems. In this study, we apply, tes...
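As a concrete illustration of the EWMA idea above, the sketch below smooths an event-intensity series and raises an alarm whenever the EWMA statistic leaves its steady-state control limits. The smoothing constant, the limit width, and the use of the in-sample mean and standard deviation are illustrative assumptions; in practice the baseline would be estimated from clean training data.

```python
import numpy as np

# Minimal EWMA control-chart sketch for flagging anomalous shifts in an
# event-intensity series; lam and L are illustrative tuning values.
def ewma_alarms(x, lam=0.2, L=3.0):
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)           # placeholder baseline; use clean data in practice
    z = mu
    alarms = []
    for t, xt in enumerate(x):
        z = lam * xt + (1.0 - lam) * z            # EWMA recursion
        # steady-state control-limit width for the EWMA statistic
        width = L * sigma * np.sqrt(lam / (2.0 - lam))
        if abs(z - mu) > width:
            alarms.append(t)
    return alarms
```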
Decentralized Plug-in Electric Vehicle Charging Selection Algorithm in Power Systems This paper uses a charging selection concept for plug-in electric vehicles (PEVs) to maximize user convenience levels while meeting predefined circuit-level demand limits. The optimal PEV-charging selection problem requires an exhaustive search over all possible combinations of PEVs in a power system, which cannot be solved for a practical number of PEVs. Inspired by the efficiency of the convex relaxation optimization tool in finding close-to-optimal results in huge search spaces, this paper proposes the application of the convex relaxation optimization method to solve the PEV-charging selection problem. Compared with the results of the uncontrolled case, the simulated results indicate that the proposed PEV-charging selection algorithm only slightly reduces user convenience levels, but significantly mitigates the impact of the PEV charging on the power system. We also develop a distributed optimization algorithm to solve the PEV-charging selection problem in a decentralized manner, i.e., the binary charging decisions (charged or not charged) are made locally by each vehicle. Using the proposed distributed optimization algorithm, each vehicle is only required to report its power demand rather than several pieces of its private user state information, mitigating the security problems inherent in such a setting. The proposed decentralized algorithm only requires low-speed communication capability, making it suitable for real-time implementation.
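The sketch below illustrates the general relax-and-round pattern suggested by the entry above, using cvxpy: the binary selection variables are relaxed to [0, 1], a linear "convenience" objective is maximized under a single aggregate demand limit, and a greedy rounding step recovers a feasible binary selection. The objective, the single constraint, and the rounding rule are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np
import cvxpy as cp

# Relax-and-round sketch for a binary charging-selection problem.
def select_vehicles(convenience, demand, demand_limit):
    convenience = np.asarray(convenience, dtype=float)
    demand = np.asarray(demand, dtype=float)
    n = len(convenience)
    x = cp.Variable(n)                                   # relaxed selection variables in [0, 1]
    objective = cp.Maximize(convenience @ x)
    constraints = [x >= 0, x <= 1, demand @ x <= demand_limit]
    cp.Problem(objective, constraints).solve()
    # greedy rounding: admit vehicles with the largest relaxed values while
    # the aggregate demand limit still holds (one of many possible recovery rules)
    chosen, used = [], 0.0
    for i in np.argsort(-x.value):
        if x.value[i] > 1e-6 and used + demand[i] <= demand_limit:
            chosen.append(int(i))
            used += demand[i]
    return chosen
```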
Collaborative Mobile Charging The limited battery capacity of sensor nodes has become one of the most critical impediments that stunt the deployment of wireless sensor networks (WSNs). Recent breakthroughs in wireless energy transfer and rechargeable lithium batteries provide a promising alternative to power WSNs: mobile vehicles/robots carrying high volume batteries serve as mobile chargers to periodically deliver energy to sensor nodes. In this paper, we consider how to schedule multiple mobile chargers to optimize energy usage effectiveness, such that every sensor will not run out of energy. We introduce a novel charging paradigm, collaborative mobile charging, where mobile chargers are allowed to intentionally transfer energy between themselves. To provide some intuitive insights into the problem structure, we first consider a scenario that satisfies three conditions, and propose a scheduling algorithm, PushWait, which is proven to be optimal and can cover a one-dimensional WSN of infinite length. Then, we remove the conditions one by one, investigating chargers' scheduling in a series of scenarios ranging from the most restricted one to a general 2D WSN. Through theoretical analysis and simulations, we demonstrate the advantages of the proposed algorithms in energy usage effectiveness and charging coverage.
Flymap: Interacting With Maps Projected From A Drone Interactive maps have become ubiquitous in our daily lives, helping us reach destinations and discover our surroundings. Yet, designing map interactions is not straightforward and depends on the device being used. As mobile devices evolve and become independent from users, such as with robots and drones, how will we interact with the maps they provide? We propose FlyMap as a novel user experience for drone-based interactive maps. We designed and developed three interaction techniques for FlyMap's usage scenarios. In a comprehensive indoor study (N = 16), we show the strengths and weaknesses of two techniques on users' cognition, task load, and satisfaction. FlyMap was then pilot tested with the third technique outdoors in real world conditions with four groups of participants (N = 13). We show that FlyMap's interactivity is exciting to users and opens the space for more direct interactions with drones.
Intention-detection strategies for upper limb exosuits: model-based myoelectric vs dynamic-based control The cognitive human-robot interaction between an exosuit and its wearer plays a key role in determining both the biomechanical effects of the device on movements and its perceived effectiveness. There is a lack of evidence, however, on the comparative performance of different control methods, implemented on the same device. Here, we compare two different control approaches on the same robotic suit: a model-based myoelectric control (myoprocessor), which estimates the joint torque from the activation of target muscles, and a dynamic-based control that provides support against gravity using an inverse dynamic model. Tested on a cohort of four healthy participants, assistance from the exosuit results in a marked reduction in the effort of muscles working against gravity with both control approaches (peak reduction of 68.6±18.8% for the dynamic arm model and 62.4±25.1% for the myoprocessor), when compared to an unpowered condition. Neither of the two controllers had an effect on the performance of their users in a joint-angle tracking task (peak errors of 15.4° and 16.4° for the dynamic arm model and myoprocessor, respectively, compared to 13.1° in the unpowered condition). However, our results highlight the remarkable ability of the myoprocessor to seamlessly adapt to changing external dynamics.
1.066667
0.066667
0.066667
0.066667
0.066667
0.066667
0.016667
0
0
0
0
0
0
0
RFID-based localization and tracking technologies. Radio frequency identification usually incorporates a tag into an object for the purpose of identification or localization using radio signals. It has gained much attention recently due to its advantages in terms of low cost and ease of deployment. This article presents an overview of RFID-based localization and tracking technologies, including tag-based (e.g., LANDMARC), reader-based (e.g., rever...
Device self-calibration in location systems using signal strength histograms Received signal strength (RSS) fingerprinting is an attractive solution for indoor positioning using Wireless Local Area Network (WLAN) due to the wide availability of WLAN access points and the ease of monitoring RSS measurements on WLAN-enabled mobile devices. Fingerprinting systems rely on a radiomap collected using a reference device inside the localisation area; however, a major limitation is that the quality of the location information can be degraded if the user carries a different device. This is because diverse devices tend to report the RSS values very differently for a variety of reasons. To ensure compatibility with the existing radiomap, we propose a self-calibration method that attains a good mapping between the reference and user devices using RSS histograms. We do so by relating the RSS histogram of the reference device, which is deduced from the radiomap, and the RSS histogram of the user device, which is updated concurrently with positioning. Unlike other approaches, our calibration method does not require any user intervention, e.g. collecting calibration data using the new device prior to positioning. Experimental results with five smartphones in a real indoor environment demonstrate the effectiveness of the proposed method and indicate that it is more robust to device diversity compared with other calibration methods in the literature.
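One simple way to realize a histogram-based mapping between devices, in the spirit of the entry above, is quantile (CDF) matching: align the empirical RSS quantiles of the user device with those of the reference device and interpolate between them. This is an illustrative sketch, not the paper's exact fitting procedure.

```python
import numpy as np

# Quantile-mapping sketch: learn a piecewise-linear map from the user device's
# RSS scale to the reference device's RSS scale from the two empirical CDFs.
def fit_rss_mapping(user_rss_samples, reference_rss_samples, n_quantiles=50):
    q = np.linspace(0.0, 1.0, n_quantiles)
    user_q = np.quantile(user_rss_samples, q)
    ref_q = np.quantile(reference_rss_samples, q)

    def map_rss(rss_values):
        # user-device RSS -> reference-device RSS scale
        return np.interp(rss_values, user_q, ref_q)

    return map_rss
```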
Advanced real-time indoor tracking based on the Viterbi algorithm and semantic data A real-time indoor tracking system based on the Viterbi algorithm is developed. This Viterbi principle is used in combination with semantic data to improve the accuracy, that is, the environment of the object that is being tracked and a motion model. The starting point is a fingerprinting technique for which an advanced network planner is used to automatically construct the radio map, avoiding a time consuming measurement campaign. The developed algorithm was verified with simulations and with experiments in a building-wide testbed for sensor experiments, where a median accuracy below 2 m was obtained. Compared to a reference algorithm without Viterbi or semantic data, the results indicated a significant improvement: the mean accuracy and standard deviation improved by, respectively, 26.1% and 65.3%. Thereafter a sensitivity analysis was conducted to estimate the influence of node density, grid size, memory usage, and semantic data on the performance.
Coverage prediction and optimization algorithms for indoor environments. A heuristic algorithm is developed for the prediction of indoor coverage. Measurements on one floor of an office building are performed to investigate propagation characteristics, and validations with very limited additional tuning are performed on another floor of the same building and in three other buildings. The prediction method relies on the free-space loss model for every environment, thereby intending to reduce the dependency of the model on the environment upon which it is based, as is the case with many other models. The applicability of the algorithm to a wireless testbed network with fixed WiFi 802.11b/g nodes is discussed based on a site survey. The prediction algorithm can easily be implemented in network planning algorithms, as will be illustrated with a network reduction and a network optimization algorithm. We aim to provide a physically intuitive, yet accurate prediction of the path loss for different building types.
A Robust Crowdsourcing-Based Indoor Localization System. WiFi fingerprinting-based indoor localization has been widely used due to its simplicity and can be implemented on the smartphones. The major drawback of WiFi fingerprinting is that the radio map construction is very labor-intensive and time-consuming. Another drawback of WiFi fingerprinting is the Received Signal Strength (RSS) variance problem, caused by environmental changes and device diversity. RSS variance severely degrades the localization accuracy. In this paper, we propose a robust crowdsourcing-based indoor localization system (RCILS). RCILS can automatically construct the radio map using crowdsourcing data collected by smartphones. RCILS abstracts the indoor map as the semantics graph in which the edges are the possible user paths and the vertexes are the location where users may take special activities. RCILS extracts the activity sequence contained in the trajectories by activity detection and pedestrian dead-reckoning. Based on the semantics graph and activity sequence, crowdsourcing trajectories can be located and a radio map is constructed based on the localization results. For the RSS variance problem, RCILS uses the trajectory fingerprint model for indoor localization. During online localization, RCILS obtains an RSS sequence and realizes localization by matching the RSS sequence with the radio map. To evaluate RCILS, we apply RCILS in an office building. Experiment results demonstrate the efficiency and robustness of RCILS.
Fast and Accurate Estimation of RFID Tags Radio frequency identification (RFID) systems have been widely deployed for various applications such as object tracking, 3-D positioning, supply chain management, inventory control, and access control. This paper concerns the fundamental problem of estimating RFID tag population size, which is needed in many applications such as tag identification, warehouse monitoring, and privacy-sensitive RFID systems. In this paper, we propose a new scheme for estimating tag population size called Average Run-based Tag estimation (ART). The technique is based on the average run length of ones in the bit string received using the standardized framed slotted Aloha protocol. ART is significantly faster than prior schemes. For example, given a required confidence interval of 0.1% and a required reliability of 99.9%, ART is consistently 7 times faster than the fastest existing schemes (UPE and EZB) for any tag population size. Furthermore, ART's estimation time is provably independent of the tag population sizes. ART works with multiple readers with overlapping regions and can estimate sizes of arbitrarily large tag populations. ART is easy to deploy because it neither requires modification to tags nor to the communication protocol between tags and readers. ART only needs to be implemented on readers as a software module.
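The core quantity in ART is the average run length of ones in the frame's slot-occupancy bit string. The sketch below computes that statistic and inverts a Monte Carlo estimate of its expectation over candidate population sizes; the simulation-based inversion is a generic stand-in for the paper's closed-form estimator, and the frame size, persistence probability, and candidate grid are assumptions.

```python
import numpy as np

def average_run_length_of_ones(bits):
    # mean length of maximal runs of 1s in the slot-occupancy bit string
    runs, current = [], 0
    for b in bits:
        if b:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return float(np.mean(runs)) if runs else 0.0

def estimate_population(observed_bits, frame_size, persistence, candidates, trials=200, seed=0):
    # Build a Monte Carlo lookup of expected average run length vs. population
    # size, then pick the candidate whose expectation is closest to the observation.
    rng = np.random.default_rng(seed)
    expected = []
    for t in candidates:
        vals = []
        for _ in range(trials):
            slots = rng.integers(frame_size, size=t)      # each replying tag picks a slot
            replies = rng.random(t) < persistence         # tag replies with prob. `persistence`
            bits = np.zeros(frame_size, dtype=bool)
            bits[slots[replies]] = True
            vals.append(average_run_length_of_ones(bits))
        expected.append(np.mean(vals))
    obs = average_run_length_of_ones(observed_bits)
    return candidates[int(np.argmin(np.abs(np.array(expected) - obs)))]
```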
Image Feature Extraction in Encrypted Domain With Privacy-Preserving SIFT Privacy has received considerable attention but is still largely ignored in the multimedia community. Consider a cloud computing scenario where the server is resource-abundant, and is capable of finishing the designated tasks. It is envisioned that secure media applications with privacy preservation will be treated seriously. In view of the fact that scale-invariant feature transform (SIFT) has been widely adopted in various fields, this paper is the first to target the importance of privacy-preserving SIFT (PPSIFT) and to address the problem of secure SIFT feature extraction and representation in the encrypted domain. As all of the operations in SIFT must be moved to the encrypted domain, we propose a privacy-preserving realization of the SIFT method based on homomorphic encryption. We show through the security analysis based on the discrete logarithm problem and RSA that PPSIFT is secure against ciphertext only attack and known plaintext attack. Experimental results obtained from different case studies demonstrate that the proposed homomorphic encryption-based privacy-preserving SIFT performs comparably to the original SIFT and that our method is useful in SIFT-based privacy-preserving applications.
Wireless sensor network survey A wireless sensor network (WSN) has important applications such as remote environmental monitoring and target tracking. This has been enabled by the availability, particularly in recent years, of sensors that are smaller, cheaper, and intelligent. These sensors are equipped with wireless interfaces with which they can communicate with one another to form a network. The design of a WSN depends significantly on the application, and it must consider factors such as the environment, the application's design objectives, cost, hardware, and system constraints. The goal of our survey is to present a comprehensive review of the recent literature since the publication of [I.F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, A survey on sensor networks, IEEE Communications Magazine, 2002]. Following a top-down approach, we give an overview of several new applications and then review the literature on various aspects of WSNs. We classify the problems into three different categories: (1) internal platform and underlying operating system, (2) communication protocol stack, and (3) network services, provisioning, and deployment. We review the major development in these three categories and outline new challenges.
Joint Task Offloading and Resource Allocation for Multi-Server Mobile-Edge Computing Networks Mobile-edge computing (MEC) is an emerging paradigm that provides a capillary distribution of cloud computing capabilities to the edge of the wireless access network, enabling rich services and applications in close proximity to the end users. In this paper, an MEC enabled multi-cell wireless network is considered where each base station (BS) is equipped with a MEC server that assists mobile users...
A Bayesian network approach to traffic flow forecasting A new approach based on Bayesian networks for traffic flow forecasting is proposed. In this paper, traffic flows among adjacent road links in a transportation network are modeled as a Bayesian network. The joint probability distribution between the cause nodes (data utilized for forecasting) and the effect node (data to be forecasted) in a constructed Bayesian network is described as a Gaussian mixture model (GMM) whose parameters are estimated via the competitive expectation maximization (CEM) algorithm. Finally, traffic flow forecasting is performed under the criterion of minimum mean square error (MMSE). The approach departs from many existing traffic flow forecasting models in that it explicitly includes information from adjacent road links to analyze the trends of the current link statistically. Furthermore, it also encompasses the issue of traffic flow forecasting when incomplete data exist. Comprehensive experiments on urban vehicular traffic flow data of Beijing and comparisons with several other methods show that the Bayesian network is a very promising and effective approach for traffic flow modeling and forecasting, both for complete data and incomplete data.
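For readers unfamiliar with MMSE forecasting under a Gaussian mixture, the sketch below computes the prediction as the responsibility-weighted sum of per-component conditional means of the effect variables given the cause variables. The GMM parameters are assumed to have been fitted already (e.g., by CEM as described above, which is not reproduced here).

```python
import numpy as np
from scipy.stats import multivariate_normal

# MMSE forecast under a Gaussian mixture over the stacked [cause ; effect] vector.
def gmm_mmse_forecast(x_cause, weights, means, covs, n_cause):
    preds, resp = [], []
    for w, mu, S in zip(weights, means, covs):
        mu_c, mu_e = mu[:n_cause], mu[n_cause:]
        S_cc = S[:n_cause, :n_cause]
        S_ec = S[n_cause:, :n_cause]
        # conditional mean of the effect variables given the observed cause variables
        cond_mean = mu_e + S_ec @ np.linalg.solve(S_cc, x_cause - mu_c)
        preds.append(cond_mean)
        resp.append(w * multivariate_normal(mu_c, S_cc).pdf(x_cause))
    resp = np.array(resp) / (np.sum(resp) + 1e-300)       # component responsibilities
    return sum(r * p for r, p in zip(resp, preds))
```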
State resetting for bumpless switching in supervisory control In this paper the realization and implementation of a multi-controller scheme made of a finite set of linear single-input-single-output controllers, possibly having different state dimensions, is studied. The supervisory control framework is considered, namely a minimal parameter dependent realization of the set of controllers such that all controllers share the same state space is used. A specific state resetting strategy based on the behavioral approach to system theory is developed in order to master the transient upon controller switching.
Switching Stabilization for a Class of Slowly Switched Systems In this technical note, the problem of switching stabilization for slowly switched linear systems is investigated. In particular, the considered systems can be composed of all unstable subsystems. Based on the invariant subspace theory, the switching signal with mode-dependent average dwell time (MDADT) property is designed to exponentially stabilize the underlying system. Furthermore, sufficient condition of stabilization for switched systems with all stable subsystems under MDADT switching is also given. The correctness and effectiveness of the proposed approaches are illustrated by a numerical example.
Collective feature selection to identify crucial epistatic variants. In this study, we show through simulation studies that selecting variables using a collective feature selection approach helps in selecting true-positive epistatic variables more frequently than applying any single feature selection method. We demonstrate the effectiveness of collective feature selection along with a comparison of many methods in our simulation analysis. We also applied our method to identify non-linear networks associated with obesity.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.078667
0.08
0.08
0.04
0.026667
0.006667
0.000444
0
0
0
0
0
0
0
Applications of Deep Reinforcement Learning in Communications and Networking: A Survey. This paper presents a comprehensive literature review on applications of deep reinforcement learning (DRL) in communications and networking. Modern networks, e.g., Internet of Things (IoT) and unmanned aerial vehicle (UAV) networks, become more decentralized and autonomous. In such networks, network entities need to make decisions locally to maximize the network performance under uncertainty of network environment. Reinforcement learning has been efficiently used to enable the network entities to obtain the optimal policy including, e.g., decisions or actions, given their states when the state and action spaces are small. However, in complex and large-scale networks, the state and action spaces are usually large, and the reinforcement learning may not be able to find the optimal policy in reasonable time. Therefore, DRL, a combination of reinforcement learning with deep learning, has been developed to overcome the shortcomings. In this survey, we first give a tutorial of DRL from fundamental concepts to advanced models. Then, we review DRL approaches proposed to address emerging issues in communications and networking. The issues include dynamic network access, data rate control, wireless caching, data offloading, network security, and connectivity preservation which are all important to next generation networks, such as 5G and beyond. Furthermore, we present applications of DRL for traffic routing, resource sharing, and data collection. Finally, we highlight important challenges, open issues, and future research directions of applying DRL.
Network Slicing in 5G: Survey and Challenges. 5G is envisioned to be a multi-service network supporting a wide range of verticals with a diverse set of performance and service requirements. Slicing a single physical network into multiple isolated logical networks has emerged as a key to realizing this vision. This article is meant to act as a survey, the first to the authors' knowledge, on this topic of prime interest. We begin by reviewing the state of the art in 5G network slicing and present a framework for bringing together and discussing existing work in a holistic manner. Using this framework, we evaluate the maturity of current proposals and identify a number of open research questions.
Deep Reinforcement Learning for Mobile Edge Caching: Review, New Features, and Open Issues. Mobile edge caching is a promising technique to reduce network traffic and improve the quality of experience of mobile users. However, mobile edge caching is a challenging decision making problem with unknown future content popularity and complex network characteristics. In this article, we advocate the use of DRL to solve mobile edge caching problems by presenting an overview of recent works on m...
On Service Resilience in Cloud-Native 5G Mobile Systems. To cope with the tremendous growth in mobile data traffic on one hand, and the modest average revenue per user on the other hand, mobile operators have been exploring network virtualization and cloud computing technologies to build cost-efficient and elastic mobile networks and to have them offered as a cloud service. In such cloud-based mobile networks, ensuring service resilience is an important challenge to tackle. Indeed, high availability and service reliability are important requirements of carrier grade, but not necessarily intrinsic features of cloud computing. Building a system that requires the five nines reliability on a platform that may not always grant it is, therefore, a hurdle. Effectively, in carrier cloud, service resilience can be heavily impacted by a failure of any network function (NF) running on a virtual machine (VM). In this paper, we introduce a framework, along with efficient and proactive restoration mechanisms, to ensure service resilience in carrier cloud. As restoration of a NF failure impacts a potential number of users, adequate network overload control mechanisms are also proposed. A mathematical model is developed to evaluate the performance of the proposed mechanisms. The obtained results are encouraging and demonstrate that the proposed mechanisms efficiently achieve their design goals.
A Tutorial on Ultrareliable and Low-Latency Communications in 6G: Integrating Domain Knowledge Into Deep Learning As one of the key communication scenarios in the fifth-generation and also the sixth-generation (6G) mobile communication networks, ultrareliable and low-latency communications (URLLCs) will be central for the development of various emerging mission-critical applications. State-of-the-art mobile communication systems do not fulfill the end-to-end delay and overall reliability requirements of URLLCs. In particular, a holistic framework that takes into account latency, reliability, availability, scalability, and decision-making under uncertainty is lacking. Driven by recent breakthroughs in deep neural networks, deep learning algorithms have been considered as promising ways of developing enabling technologies for URLLCs in future 6G networks. This tutorial illustrates how domain knowledge (models, analytical tools, and optimization frameworks) of communications and networking can be integrated into different kinds of deep learning algorithms for URLLCs. We first provide some background of URLLCs and review promising network architectures and deep learning frameworks for 6G. To better illustrate how to improve learning algorithms with domain knowledge, we revisit model-based analytical tools and cross-layer optimization frameworks for URLLCs. Following this, we examine the potential of applying supervised/unsupervised deep learning and deep reinforcement learning in URLLCs and summarize related open problems. Finally, we provide simulation and experimental results to validate the effectiveness of different learning algorithms and discuss future directions.
Resource Slicing in Virtual Wireless Networks: A Survey. New architectural and design approaches for radio access networks have appeared with the introduction of network virtualization in the wireless domain. One of these approaches splits the wireless network infrastructure into isolated virtual slices under their own management, requirements, and characteristics. Despite the advances in wireless virtualization, there are still many open issues regarding the resource allocation and isolation of wireless slices. Because of the dynamics and shared nature of the wireless medium, guaranteeing that the traffic on one slice will not affect the traffic on the others has proven to be difficult. In this paper, we focus on the detailed definition of the problem, discussing its challenges. We also provide a review of existing works that deal with the problem, analyzing how new trends such as software defined networking and network function virtualization can assist in the slicing. We will finally describe some research challenges on this topic.
Deep Learning in Mobile and Wireless Networking: A Survey. The rapid uptake of mobile devices and the rising popularity of mobile applications and services pose unprecedented demands on mobile and wireless networking infrastructure. Upcoming 5G systems are evolving to support exploding mobile traffic volumes, real-time extraction of fine-grained analytics, and agile management of network resources, so as to maximize user experience. Fulfilling these tasks is challenging, as mobile environments are increasingly complex, heterogeneous, and evolving. One potential solution is to resort to advanced machine learning techniques, in order to help manage the rise in data volumes and algorithm-driven applications. The recent success of deep learning underpins new and powerful tools that tackle problems in this space. In this paper we bridge the gap between deep learning and mobile and wireless networking research, by presenting a comprehensive survey of the crossovers between the two areas. We first briefly introduce essential background and state-of-the-art in deep learning techniques with potential applications to networking. We then discuss several techniques and platforms that facilitate the efficient deployment of deep learning onto mobile systems. Subsequently, we provide an encyclopedic review of mobile and wireless networking research based on deep learning, which we categorize by different domains. Drawing from our experience, we discuss how to tailor deep learning to mobile environments. We complete this survey by pinpointing current challenges and open future directions for research.
Joint Mobile Data Collection and Wireless Energy Transfer in Wireless Rechargeable Sensor Networks. In wireless rechargeable sensor networks (WRSNs), mobile vehicles can be used to charge nodes and collect data. A rational pattern is to use two types of vehicles: one for energy charging and the other for data collection. These two types of vehicles, data collection vehicles (DCVs) and wireless charging vehicles (WCVs), are employed to achieve high efficiency in both data gathering and energy consumption. To handle the complex scheduling problem of multiple vehicles in large-scale networks, a twice-partition algorithm based on center points is proposed to divide the network into several parts. In addition, an anchor selection algorithm based on the tradeoff between neighbor amount and residual energy, named AS-NAE, is proposed to collect the zonal data. It can reduce the data transmission delay and the energy consumption of DCVs' movement within each zone. Besides, we design an optimization function to achieve maximum data throughput by adjusting the data rate and link rate of each node. Finally, the effectiveness of the proposed algorithm is validated by numerical simulation results in WRSNs.
Completely derandomized self-adaptation in evolution strategies. This paper puts forward two useful methods for self-adaptation of the mutation distribution - the concepts of derandomization and cumulation. Principal shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary, normal mutation distributions is equivalent to applying a general, linear problem encoding. The underlying objective of mutative strategy parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigorously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation - utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions, a speed-up factor of several orders of magnitude is usually observed. On moderately mis-scaled functions a speed-up factor of three to ten can be expected.
Argos: practical many-antenna base stations Multi-user multiple-input multiple-output theory predicts manyfold capacity gains by leveraging many antennas on wireless base stations to serve multiple clients simultaneously through multi-user beamforming (MUBF). However, realizing a base station with a large number of antennas is non-trivial, and has yet to be achieved in the real world. We present the design, realization, and evaluation of Argos, the first reported base station architecture that is capable of serving many terminals simultaneously through MUBF with a large number of antennas (M >> 10). Designed for extreme flexibility and scalability, Argos exploits hierarchical and modular design principles, properly partitions baseband processing, and holistically considers real-time requirements of MUBF. Argos employs a novel, completely distributed, beamforming technique, as well as an internal calibration procedure to enable implicit beamforming with channel estimation cost independent of the number of base station antennas. We report an Argos prototype with 64 antennas and capable of serving 15 clients simultaneously. We experimentally demonstrate that by scaling from 1 to 64 antennas the prototype can achieve up to 6.7 fold capacity gains while using a mere 1/64th of the transmission power.
Mathematical Evaluation of Environmental Monitoring Estimation Error through Energy-Efficient Wireless Sensor Networks In this paper, the estimation of a scalar field over a bidimensional scenario (e.g., the atmospheric pressure in a wide area) through a self-organizing wireless sensor network (WSN) with energy constraints is investigated. The sensor devices (denoted as nodes) are randomly distributed; they transmit samples to a supervisor by using a clustered network. This paper provides a mathematical framework to analyze the interdependent aspects of WSN communication protocol and signal processing design. Channel modelling and connectivity issues, multiple access control and routing, and the role of distributed digital signal processing (DDSP) techniques are accounted for. The possibility that nodes perform DDSP is studied through a distributed compression technique based on signal resampling. The DDSP impact on network energy efficiency is compared through a novel mathematical approach to the case where the processing is performed entirely by the supervisor. The trade-off between energy conservation (i.e., network lifetime) and estimation error is discussed and a design criterion is proposed as well. Comparison to simulation outcomes validates the model. As an example result, the required node density is found as a trade-off between estimation quality and network lifetime for different system parameters and scalar field characteristics. It is shown that both the DDSP technique and the MAC protocol choice have a relevant impact on the performance of a WSN.
Spatial augmented reality as a method for a mobile robot to communicate intended movement. •Communication strategies allow robots to convey upcoming movements to humans.•Arrows for conveying direction of movement are understood by humans.•Simple maps depicting a sequence of upcoming movements are useful to humans.•Robots projecting arrows and a map can effectively communicate upcoming movement.
Orientation-aware RFID tracking with centimeter-level accuracy. RFID tracking has attracted a lot of research effort in recent years. Most of the existing approaches, however, adopt an orientation-oblivious model. When tracking a target whose orientation changes, those approaches suffer from serious accuracy degradation. In order to achieve target tracking with pervasive applicability in various scenarios, we in this paper propose OmniTrack, an orientation-aware RFID tracking approach. Our study discovers the linear relationship between the tag orientation and the phase change of the backscattered signals. Based on this finding, we propose an orientation-aware phase model to explicitly quantify the respective impact of the read-tag distance and the tag's orientation. OmniTrack addresses practical challenges in tracking the location and orientation of a mobile tag. Our experimental results demonstrate that OmniTrack achieves centimeter-level location accuracy and has significant advantages in tracking targets with varying orientations, compared to the state-of-the-art approaches.
Learning Feature Recovery Transformer for Occluded Person Re-Identification One major issue that challenges person re-identification (Re-ID) is the ubiquitous occlusion over the captured persons. There are two main challenges for the occluded person Re-ID problem, i.e., the interference of noise during feature matching and the loss of pedestrian information brought by the occlusions. In this paper, we propose a new approach called Feature Recovery Transformer (FRT) to address the two challenges simultaneously, which mainly consists of visibility graph matching and feature recovery transformer. To reduce the interference of the noise during feature matching, we mainly focus on visible regions that appear in both images and develop a visibility graph to calculate the similarity. In terms of the second challenge, based on the developed graph similarity, for each query image, we propose a recovery transformer that exploits the feature sets of its k-nearest neighbors in the gallery to recover the complete features. Extensive experiments across different person Re-ID datasets, including occluded, partial and holistic datasets, demonstrate the effectiveness of FRT. Specifically, FRT significantly outperforms state-of-the-art results by at least 6.2% Rank-1 accuracy and 7.2% mAP scores on the challenging Occluded-Duke dataset.
1.051875
0.051
0.05
0.05
0.05
0.0275
0.01
0.0005
0
0
0
0
0
0
Multiperiod Asset Allocation Considering Dynamic Loss Aversion Behavior of Investors In order to study the effect of loss aversion behavior on multiperiod investment decisions, we first introduce some psychological characteristics of dynamic loss aversion and then construct a multiperiod portfolio model by considering a conditional value-at-risk (CVaR) constraint. We then design a variable neighborhood search-based hybrid genetic algorithm to solve the model. We finally study the optimal asset allocation and investment performance of the proposed multiperiod model. Some important metrics, such as the initial loss aversion coefficient and reference point, are used to test the robustness of the model. The result shows that investors with loss aversion tend to centralize most of their wealth and have a better performance than rational investors. The effects of CVaR on investment performance are given. When a market is falling, investors with a higher degree of risk aversion can avoid a large loss and can obtain higher gains.
People detection and tracking from aerial thermal views Detection and tracking of people in visible-light images has been subject to extensive research in the past decades with applications ranging from surveillance to search-and-rescue. Following the growing availability of thermal cameras and the distinctive thermal signature of humans, research effort has been focusing on developing people detection and tracking methodologies applicable to this sensing modality. However, a plethora of challenges arise on the transition from visible-light to thermal images, especially with the recent trend of employing thermal cameras onboard aerial platforms (e.g. in search-and-rescue research) capturing oblique views of the scenery. This paper presents a new, publicly available dataset of annotated thermal image sequences, posing a multitude of challenges for people detection and tracking. Moreover, we propose a new particle filter based framework for tracking people in aerial thermal images. Finally, we evaluate the performance of this pipeline on our dataset, incorporating a selection of relevant, state-of-the-art methods and present a comprehensive discussion of the merits spawning from our study.
Pareto-Optimization for Scheduling of Crude Oil Operations in Refinery via Genetic Algorithm. With the interaction of discrete-event and continuous processes, it is challenging to schedule crude oil operations in a refinery. This paper studies the optimization problem of finding a detailed schedule to realize a given refining schedule. This is a multiobjective optimization problem with a combinatorial nature. Since the original problem cannot be directly solved by using heuristics and meta-heuristics, the problem is transformed into an assignment problem of charging tanks and distillers. Based on such a transformation, by analyzing the properties of the problem, this paper develops a chromosome that can describe a feasible schedule such that meta-heuristics can be applied. Then, it innovatively adopts an improved nondominated sorting genetic algorithm to solve the problem for the first time. An industrial case study is used to test the proposed solution method. The results show that the method makes a significant performance improvement and is applicable to real-life refinery scheduling problems.
The unmanned aerial vehicle routing and trajectory optimisation problem, a taxonomic review. •We introduce the UAV Routing and Trajectory Optimisation Problem.•We provide a taxonomy for UAV routing, TO and other variants.•We apply the proposed taxonomy to 70 scientific articles.•A lack of research about integrating UAV routing and TO is identified.
A vehicle routing problem arising in unmanned aerial monitoring. •We consider a routing problem arising in unmanned aerial monitoring.•The problem possesses several military and civil applications.•The problem is to construct vehicle routes and to select monitoring heights.•We model the problem as an integer linear program and solve it by tabu search.•Extensive tests confirm the efficiency of the heuristic.
Cooperative Aerial-Ground Vehicle Route Planning With Fuel Constraints for Coverage Applications. Low-cost unmanned aerial vehicles (UAVs) need multiple refuels to accomplish large area coverage. We propose the use of a mobile ground vehicle (GV), constrained to travel on a given road network, as a refueling station for the UAV. Determining optimal routes for a UAV and GV, and selecting rendezvous locations for refueling to minimize coverage time is NP-hard. We develop a two-stage strategy for...
Hamming Embedding and Weak Geometric Consistency for Large Scale Image Search This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We first analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows us to further improve the accuracy.
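A rough sketch of the Hamming-embedding idea follows: descriptors falling in the same visual word are projected onto fixed random directions, thresholded at per-dimension medians learned offline to produce binary signatures, and a visual-word match is kept only if the signatures are within a Hamming-distance threshold. The random projection, the 64-bit length, and the threshold value are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

# Hamming-embedding style binary signatures: fixed random projections plus
# per-dimension median thresholds learned from training descriptors.
def learn_he_parameters(train_descriptors, n_bits=64, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((n_bits, train_descriptors.shape[1]))
    projected = train_descriptors @ P.T
    thresholds = np.median(projected, axis=0)     # per-bit thresholds
    return P, thresholds

def he_signature(descriptor, P, thresholds):
    return (P @ descriptor > thresholds).astype(np.uint8)

def he_match(sig_query, sig_db, max_hamming=24):
    # keep a visual-word match only if the binary signatures are close enough
    return int(np.count_nonzero(sig_query != sig_db)) <= max_hamming
```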
Microsoft COCO: Common Objects in Context We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4-year-old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
The Whale Optimization Algorithm. The Whale Optimization Algorithm inspired by humpback whales is proposed. The WOA algorithm is benchmarked on 29 well-known test functions. The results on the unimodal functions show the superior exploitation of WOA. The exploration ability of WOA is confirmed by the results on multimodal functions. The results on structural design problems confirm the performance of WOA in practice. This paper proposes a novel nature-inspired meta-heuristic optimization algorithm, called Whale Optimization Algorithm (WOA), which mimics the social behavior of humpback whales. The algorithm is inspired by the bubble-net hunting strategy. WOA is tested with 29 mathematical optimization problems and 6 structural design problems. Optimization results prove that the WOA algorithm is very competitive compared to the state-of-the-art meta-heuristic algorithms as well as conventional methods. The source codes of the WOA algorithm are publicly available at http://www.alimirjalili.com/WOA.html
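A compact sketch of the WOA position update (encircling the current best, random exploration, and the logarithmic-spiral bubble-net move) is given below. Per-dimension random coefficients, the clipping to box bounds, and all parameter values are illustrative choices rather than a faithful reimplementation of the published algorithm.

```python
import numpy as np

# Compact WOA-style optimizer sketch: encircling prey, random search, and the
# spiral bubble-net move, with per-dimension random coefficients.
def whale_optimization(f, dim, bounds, n_whales=30, n_iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_whales, dim))
    best = min(X, key=f).copy()
    for t in range(n_iters):
        a = 2.0 - 2.0 * t / n_iters                     # a decreases linearly from 2 to 0
        for i in range(n_whales):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):               # exploit: encircle the best whale
                    D = np.abs(C * best - X[i])
                    X[i] = best - A * D
                else:                                   # explore: move relative to a random whale
                    X_rand = X[rng.integers(n_whales)]
                    D = np.abs(C * X_rand - X[i])
                    X[i] = X_rand - A * D
            else:                                       # spiral bubble-net move around the best
                l = rng.uniform(-1, 1)
                D = np.abs(best - X[i])
                X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best, f(best)
```

For instance, whale_optimization(lambda x: float(np.sum(x**2)), dim=5, bounds=(-10.0, 10.0)) should drive the best objective value toward zero on the sphere function.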
Collaborative privacy management The landscape of the World Wide Web with all its versatile services heavily relies on the disclosure of private user information. Unfortunately, the growing amount of personal data collected by service providers poses a significant privacy threat for Internet users. Targeting growing privacy concerns of users, privacy-enhancing technologies emerged. One goal of these technologies is the provision of tools that facilitate a more informative decision about personal data disclosures. A famous PET representative is the PRIME project that aims for a holistic privacy-enhancing identity management system. However, approaches like the PRIME privacy architecture require service providers to change their server infrastructure and add specific privacy-enhancing components. In the near future, service providers are not expected to alter internal processes. Addressing the dependency on service providers, this paper introduces a user-centric privacy architecture that enables the provider-independent protection of personal data. A central component of the proposed privacy infrastructure is an online privacy community, which facilitates the open exchange of privacy-related information about service providers. We characterize the benefits and the potentials of our proposed solution and evaluate a prototypical implementation.
Cognitive Cars: A New Frontier for ADAS Research This paper provides a survey of recent works on cognitive cars with a focus on driver-oriented intelligent vehicle motion control. The main objective here is to clarify the goals and guidelines for future development in the area of advanced driver-assistance systems (ADASs). Two major research directions are investigated and discussed in detail: 1) stimuli–decisions–actions, which focuses on the driver side, and 2) perception enhancement–action-suggestion–function-delegation, which emphasizes the ADAS side. This paper addresses the important achievements and major difficulties of each direction and discusses how to combine the two directions into a single integrated system to obtain safety and comfort while driving. Other related topics, including driver training and infrastructure design, are also studied.
Completely Pinpointing the Missing RFID Tags in a Time-Efficient Way Radio Frequency Identification (RFID) technology has been widely used in inventory management in many scenarios, e.g., warehouses, retail stores, hospitals, etc. This paper investigates a challenging problem of complete identification of missing tags in large-scale RFID systems. Although this problem has attracted extensive attention from academy and industry, the existing work can hardly satisfy the stringent real-time requirements. In this paper, a Slot Filter-based Missing Tag Identification (SFMTI) protocol is proposed to reconcile some expected collision slots into singleton slots and filter out the expected empty slots as well as the unreconcilable collision slots, thereby achieving the improved time-efficiency. The theoretical analysis is conducted to minimize the execution time of the proposed SFMTI. We then propose a cost-effective method to extend SFMTI to the multi-reader scenarios. The extensive simulation experiments and performance results demonstrate that the proposed SFMTI protocol outperforms the most promising Iterative ID-free Protocol (IIP) by reducing nearly 45% of the required execution time, and is just within a factor of 1.18 from the lower bound of the minimum execution time.
A blind medical image watermarking: DWT-SVD based robust and secure approach for telemedicine applications. In this paper, a blind image watermarking scheme based on discrete wavelet transform (DWT) and singular value decomposition (SVD) is proposed. In this scheme, DWT is applied on the ROI (region of interest) of the medical image to get different frequency subbands of its wavelet decomposition. On the low-frequency subband LL of the ROI, block-SVD is applied to get different singular matrices. A pair of elements with similar values is identified from the left singular value matrix of these selected blocks. The values of these pairs are modified using a certain threshold to embed a bit of watermark content. An appropriate threshold is chosen to achieve the imperceptibility and robustness of the medical image and watermark contents, respectively. For authentication and identification of the original medical image, one watermark image (logo) and another text watermark have been used. The watermark image provides authentication, whereas the text data represents the electronic patient record (EPR) for identification. At the receiving end, blind recovery of both watermark contents is performed by a comparison scheme similar to the one used during the embedding process. The proposed algorithm is applied on various groups of medical images like X-ray, CT scan and mammography. This scheme offers better visibility of the watermarked image and recovery of the watermark content due to the DWT-SVD combination. Moreover, the use of a Hamming error correcting code (ECC) on the EPR text bits reduces the BER and thus provides better recovery of the EPR. The performance of the proposed algorithm with EPR data coding by the Hamming code is compared with the BCH error correcting code, and it is found that the latter performs better. A result analysis shows that the imperceptibility of the watermarked image is better, as the PSNR is above 43 dB and the WPSNR is above 52 dB for all sets of images. In addition, the robustness of the scheme is better than that of an existing scheme for a similar set of medical images in terms of normalized correlation coefficient (NCC) and bit-error-rate (BER). An analysis is also carried out to verify the performance of the proposed scheme for different sizes of watermark contents (image and EPR data). It is observed from this analysis that the proposed scheme is also appropriate for watermarking of color images. Using the proposed scheme, watermark contents are extracted successfully under various attacks like JPEG compression, filtering, Gaussian noise, salt-and-pepper noise, cropping and rotation. A performance comparison of the proposed scheme with existing schemes shows that the proposed scheme has better robustness against different types of attacks. Moreover, the proposed scheme is also robust under the set of benchmark attacks known as checkmark attacks.
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies have focused on the structural design and assistance force optimization of soft LLEs, little work has been conducted on the hardware circuit design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and to introduce its hardware circuit design. A soft LLE for hip flexion assistance and a scalable hardware circuit system are proposed. To assess the efficacy of the soft LLE, experimental tests evaluating sensor data acquisition, force tracking performance, lower limb muscle activity and metabolic cost were conducted. The timing error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
1.2
0.2
0.2
0.2
0.2
0.066667
0
0
0
0
0
0
0
0
A Hierarchical Fused Fuzzy Deep Neural Network for Data Classification. Deep learning (DL) is an emerging and powerful paradigm that allows large-scale task-driven feature learning from big data. However, typical DL is a fully deterministic model that sheds no light on data uncertainty reductions. In this paper, we show how to introduce the concepts of fuzzy learning into DL to overcome the shortcomings of fixed representation. The bulk of the proposed fuzzy system is...
Fast learning neural networks using Cartesian genetic programming A fast learning neuroevolutionary algorithm for both feedforward and recurrent networks is proposed. The method is inspired by the well-known and highly effective Cartesian genetic programming (CGP) technique. The proposed method is called the CGP-based Artificial Neural Network (CGPANN). The basic idea is to replace each computational node in CGP with an artificial neuron, thus producing an artificial neural network. The capabilities of CGPANN are tested in two diverse problem domains. Firstly, it has been tested on a standard benchmark control problem: single- and double-pole balancing for both Markovian and non-Markovian cases. Results demonstrate that the method can generate effective neural architectures in substantially fewer evaluations in comparison to previously published neuroevolutionary techniques. In addition, the evolved networks show improved generalization and robustness in comparison with other techniques. Secondly, we have explored the capabilities of CGPANNs for the diagnosis of breast cancer from FNA (fine needle aspiration) data samples. The results demonstrate that the proposed algorithm gives 99.5% accurate results, thus making it an excellent choice for pattern recognition in medical diagnosis, owing to its properties of fast learning and accuracy. The power of a CGP-based ANN is its representation, which leads to an efficient evolutionary search of suitable topologies. This opens new avenues for applying the proposed technique to other linear/non-linear and Markovian/non-Markovian control and pattern recognition problems.
Designing adaptive humanoid robots through the FARSA open-source framework We introduce FARSA, an open-source Framework for Autonomous Robotics Simulation and Analysis, which allows us to easily set up and carry out adaptive experiments involving complex robot/environment models. Moreover, we show how a simulated iCub robot can be trained, through an evolutionary algorithm, to display reaching and integrated reaching and grasping behaviours. The results demonstrate how the use of an implicit selection criterion, estimating the extent to which the robot is able to produce the expected outcome without specifying the manner in which the action should be realized, is sufficient to develop the required capabilities despite the complexity of the robot and of the task.
Improving reporting delay and lifetime of a WSN using controlled mobile sinks Wireless sensor networks (WSNs) are characterized by a many-to-one traffic pattern, where a large number of nodes communicate their sensed data to the sink node. Due to heavy data traffic near the sink node, the nodes closer to the sink tend to exhaust their energy faster than those situated farther away from it. This may lead to fragmentation of the network due to the early demise of sensor nodes situated closer to the sink. To mitigate this problem, mobile sinks have been proposed for WSNs. Mobile sinks are capable of providing uniform energy consumption, load distribution, low reporting delay and quick data delivery paths. However, the position of the mobile sink needs to be updated regularly, and such position update messages may reduce the network lifetime. In this paper, we propose a novel Location Aware Routing for Controlled Mobile Sinks (LARCMS), which helps in minimizing reporting delay, enhancing network lifetime, handling sink position updates and providing uniform energy consumption. The proposed technique uses two mobile sinks moving on a predefined trajectory for data collection and provides better results compared to existing techniques. The performance of LARCMS is evaluated by comparing it with similar mobile sink routing protocols through extensive simulations in MATLAB.
Multiobjective Evolution of Fuzzy Rough Neural Network via Distributed Parallelism for Stock Prediction Fuzzy rough theory can describe real-world situations in a mathematically effective and interpretable way, while evolutionary neural networks can be utilized to solve complex problems. Combining these complementary capabilities may lead to an evolutionary fuzzy rough neural network with both interpretability and prediction capability. In this article, we propose modifications to the existing models of the fuzzy rough neural network and then develop a powerful evolutionary framework for fuzzy rough neural networks by inheriting the merits of both the aforementioned systems. We first introduce rough neurons and enhance the consequence nodes, and further integrate the interval type-2 fuzzy set into the existing fuzzy rough neural network model. Thus, several modified fuzzy rough neural network models are proposed. While simultaneously considering the objectives of prediction precision and network simplicity, each model is transformed into a multiobjective optimization problem by encoding the structure, membership functions, and parameters of the network. To solve these optimization problems, distributed parallel multiobjective evolutionary algorithms are proposed. We enhance the optimization processes with several measures, including optimizer replacement and parameter adaptation. In the distributed parallel environment, the tedious and time-consuming neural network optimization can be alleviated by numerous computational resources, significantly reducing the computational time. Through experimental verification on complex stock time series prediction tasks, the proposed optimization algorithms and the modified fuzzy rough neural network models exhibit significant improvements over the existing fuzzy rough neural network and the long short-term memory network.
Weighted Rendezvous Planning on Q-Learning Based Adaptive Zone Partition with PSO Based Optimal Path Selection Nowadays, wireless sensor networks (WSNs) have emerged as one of the most actively developed research areas. Various approaches have been demonstrated for reducing sensor nodes' energy consumption with mobile sinks in WSNs. However, such approaches depend on the path selected by the mobile sink, since all sensed data should be gathered within the given time constraint. Therefore, in this article, the issue of optimal path selection is addressed when multiple mobile sinks are considered in a WSN. In the initial stage, a Q-learning based Adaptive Zone Partition method is applied to split the network into smaller zones. In each zone, the location and residual energy of nodes are transmitted to the mobile sinks through a Mobile Anchor. Moreover, Weighted Rendezvous Planning is proposed to assign a weight to every node according to its hop distance. The collected data packets are transmitted to the mobile sink node within the given delay bound by means of a designated set of rendezvous points (RP). Then, an optimal path from the RPs to the mobile sink is selected utilizing the particle swarm optimization algorithm, which is applied during the routing process. Experimental results demonstrated the effectiveness of the proposed approach, where the network lifetime is increased by reducing energy consumption in multihop transmission.
Distinctive Image Features from Scale-Invariant Keypoints This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
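For readers who want to try the matching pipeline sketched in this abstract, the snippet below runs OpenCV's SIFT implementation with Lowe's ratio test; the image file names are placeholders.

```python
# Illustrative SIFT matching between two views with OpenCV (>= 4.4, where SIFT
# is in the main package). 'view1.png' / 'view2.png' are placeholder file names.
import cv2

img1 = cv2.imread('view1.png', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('view2.png', cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # keypoints + 128-D descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Nearest-neighbour matching with Lowe's ratio test to keep distinctive matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
print(len(good), 'putative matches')
```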
An introduction to ROC analysis Receiver operating characteristics (ROC) graphs are useful for organizing classifiers and visualizing their performance. ROC graphs are commonly used in medical decision making, and in recent years have been used increasingly in machine learning and data mining research. Although ROC graphs are apparently simple, there are some common misconceptions and pitfalls when using them in practice. The purpose of this article is to serve as an introduction to ROC graphs and as a guide for using them in research.
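A minimal scikit-learn example of the kind of ROC analysis this article introduces, using synthetic data and logistic regression as a stand-in classifier:

```python
# ROC curve and AUC for a simple classifier on synthetic data with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]           # ranking scores, not hard labels

fpr, tpr, thresholds = roc_curve(y_te, scores)   # one (FPR, TPR) point per threshold
print('AUC =', round(roc_auc_score(y_te, scores), 3))
```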
A Comprehensive Survey on Internet of Things (IoT) Toward 5G Wireless Systems Recently, wireless technologies have been growing actively all around the world. In the context of wireless technology, fifth-generation (5G) technology has become one of the most challenging and interesting topics in wireless research. This article provides an overview of the Internet of Things (IoT) in 5G wireless systems. IoT in the 5G system will be a game changer for the future generation, opening the door to new wireless architectures and smart services. The current cellular network, LTE (4G), will not be sufficient or efficient to meet the demands of multiple-device connectivity, high data rates, more bandwidth, low-latency quality of service (QoS), and low interference. To address these challenges, we consider 5G as the most promising technology. We provide a detailed overview of the challenges and the vision of various communication industries in 5G IoT systems. The different layers in 5G IoT systems are discussed in detail. This article provides a comprehensive review of emerging and enabling technologies related to the 5G system that enables IoT. We consider the technology drivers for 5G wireless technology, such as 5G new radio (NR), multiple-input multiple-output (MIMO) antennas with beamforming technology, mm-wave communication technology, heterogeneous networks (HetNets), and the role of augmented reality (AR) in IoT, which are discussed in detail. We also provide a review of low-power wide-area networks (LPWANs), security challenges, and their control measures in the 5G IoT scenario. This article introduces the role of AR in the 5G IoT scenario and also discusses the research gaps and future directions. The focus is also on application areas of IoT in 5G systems. We, therefore, outline some of the important research directions in 5G IoT.
Space-time super-resolution. We propose a method for constructing a video sequence of high space-time resolution by combining information from multiple low-resolution video sequences of the same dynamic scene. Super-resolution is performed simultaneously in time and in space. By "temporal super-resolution," we mean recovering rapid dynamic events that occur faster than regular frame-rate. Such dynamic events are not visible (or else are observed incorrectly) in any of the input sequences, even if these are played in "slow-motion." The spatial and temporal dimensions are very different in nature, yet are interrelated. This leads to interesting visual trade-offs in time and space and to new video applications. These include: 1) treatment of spatial artifacts (e.g., motion-blur) by increasing the temporal resolution and 2) combination of input sequences of different space-time resolutions (e.g., NTSC, PAL, and even high quality still images) to generate a high quality video sequence. We further analyze and compare characteristics of temporal super-resolution to those of spatial super-resolution. These include: How many video cameras are needed to obtain increased resolution? What is the upper bound on resolution improvement via super-resolution? What is the temporal analogue to the spatial "ringing" effect?
Data-Driven Intelligent Transportation Systems: A Survey For the last two decades, intelligent transportation systems (ITS) have emerged as an efficient way of improving the performance of transportation systems, enhancing travel security, and providing more choices to travelers. A significant change in ITS in recent years is that much more data are collected from a variety of sources and can be processed into various forms for different stakeholders. The availability of a large amount of data can potentially lead to a revolution in ITS development, changing an ITS from a conventional technology-driven system into a more powerful multifunctional data-driven intelligent transportation system (D2ITS): a system that is vision, multisource, and learning-algorithm driven to optimize its performance. Furthermore, D2ITS is trending to become a privacy-aware, people-centric, more intelligent system. In this paper, we provide a survey on the development of D2ITS, discussing the functionality of its key components and some deployment issues associated with D2ITS. Future research directions for the development of D2ITS are also presented.
Online Prediction of Driver Distraction Based on Brain Activity Patterns This paper presents a new computational framework for early detection of driver distractions (map viewing) using brain activity measured by electroencephalographic (EEG) signals. Compared with most studies in the literature, which are mainly focused on the classification of distracted and nondistracted periods, this study proposes a new framework to prospectively predict the start and end of a distraction period, defined by map viewing. The proposed prediction algorithm was tested on a data set of continuous EEG signals recorded from 24 subjects. During the EEG recordings, the subjects were asked to drive from an initial position to a destination using a city map in a simulated driving environment. The overall accuracy values for the prediction of the start and the end of map viewing were 81% and 70%, respectively. The experimental results demonstrated that the proposed algorithm can predict the start and end of map viewing with relatively high accuracy and can be generalized to individual subjects. The outcome of this study has a high potential to improve the design of future intelligent navigation systems. Prediction of the start of map viewing can be used to provide route information based on a driver's needs and consequently avoid map-viewing activities. Prediction of the end of map viewing can be used to provide warnings for potential long map-viewing durations. Further development of the proposed framework and its applications in driver-distraction predictions are also discussed.
An indoor localization solution using Bluetooth RSSI and multiple sensors on a smartphone. In this paper, we propose an indoor positioning system using a Bluetooth receiver, an accelerometer, a magnetic field sensor, and a barometer on a smartphone. The Bluetooth receiver is used to estimate distances from beacons. The accelerometer and magnetic field sensor are used to trace the movement of moving people in the given space. The horizontal location of the person is determined by received signal strength indications (RSSIs) and the traced movement. The barometer is used to measure the vertical position where a person is located. By combining RSSIs, the traced movement, and the vertical position, the proposed system estimates the indoor position of moving people. In experiments, the proposed approach showed excellent performance in localization with an overall error of 4.8%.
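The abstract combines several sensors; the sketch below illustrates only the RSSI-ranging part, converting RSSI readings to distances with a log-distance path-loss model and solving a linearised least-squares position fix. The path-loss constants and beacon layout are assumed values, not those calibrated in the paper.

```python
# RSSI-to-distance conversion plus a least-squares trilateration fix (NumPy only).
import numpy as np

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model: d = 10 ** ((tx_power - rssi) / (10 n))."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(beacons, distances):
    """Linearised least-squares position from >= 3 beacon positions and ranges."""
    (x0, y0), d0 = beacons[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(beacons[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return pos

beacons = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]   # assumed beacon coordinates (metres)
rssis = [-65, -72, -70]                          # example readings
print(trilaterate(beacons, [rssi_to_distance(r) for r in rssis]))
```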
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
SUPER: A Novel Lane Detection System AI-based lane detection algorithms have been actively studied over the last few years. Many have demonstrated superior performance compared with traditional feature-based methods. However, most methods remain riddled with assumptions and limitations, and are still not good enough for safe and reliable driving in the real world. In this paper, we propose a novel lane detection system, called Scene Understanding...
Generative Adversarial Networks for Parallel Transportation Systems. Generative Adversarial Networks (GANs) have emerged as a promising and effective mechanism for machine learning due to their recent successful applications. GANs share the same idea of producing, testing, acquiring, and utilizing data as well as knowledge based on artificial systems, computational experiments, and parallel execution of actual and virtual scenarios, as outlined in the theory of parall...
Deep Multi-Modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges Recent advancements in perception for autonomous driving are driven by deep learning. In order to achieve robust and accurate scene understanding, autonomous vehicles are usually equipped with different sensors (e.g. cameras, LiDARs, Radars), and multiple sensing modalities can be fused to exploit their complementary properties. In this context, many methods have been proposed for deep multi-modal perception problems. However, there is no general guideline for network architecture design, and questions of “what to fuse”, “when to fuse”, and “how to fuse” remain open. This review paper attempts to systematically summarize methodologies and discuss challenges for deep multi-modal object detection and semantic segmentation in autonomous driving. To this end, we first provide an overview of on-board sensors on test vehicles, open datasets, and background information for object detection and semantic segmentation in autonomous driving research. We then summarize the fusion methodologies and discuss challenges and open questions. In the appendix, we provide tables that summarize topics and methods. We also provide an interactive online platform to navigate each reference: https://boschresearch.github.io/multimodalperception/.
Enhanced Object Detection With Deep Convolutional Neural Networks for Advanced Driving Assistance Object detection is a critical problem for advanced driving assistance systems (ADAS). Recently, convolutional neural networks (CNN) achieved large successes on object detection, with performance improvement over traditional approaches, which use hand-engineered features. However, due to the challenging driving environment (e.g., large object scale variation, object occlusion, and bad light conditions), popular CNN detectors do not achieve very good object detection accuracy over the KITTI autonomous driving benchmark dataset. In this paper, we propose three enhancements for CNN-based visual object detection for ADAS. To address the large object scale variation challenge, deconvolution and fusion of CNN feature maps are proposed to add context and deeper features for better object detection at low feature map scales. In addition, soft non-maximal suppression (NMS) is applied across object proposals at different feature scales to address the object occlusion challenge. As the cars and pedestrians have distinct aspect ratio features, we measure their aspect ratio statistics and exploit them to set anchor boxes properly for better object matching and localization. The proposed CNN enhancements are evaluated with various image input sizes by experiments over KITTI dataset. The experimental results demonstrate the effectiveness of the proposed enhancements with good detection performance over KITTI test set.
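One of the three enhancements, soft non-maximal suppression, can be sketched in a few lines of NumPy. This is the generic linear-decay variant, not necessarily the exact multi-scale formulation used in the paper; the thresholds are illustrative.

```python
# Linear soft-NMS: instead of discarding overlapping proposals outright, decay
# their scores by the overlap with the currently selected box.
import numpy as np

def iou(box, boxes):
    """IoU of one (x1, y1, x2, y2) box against an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def soft_nms(boxes, scores, iou_thresh=0.3, score_thresh=0.001):
    boxes, scores = boxes.astype(float), scores.astype(float)
    keep = []
    while len(boxes) > 0:
        best = int(np.argmax(scores))
        keep.append(boxes[best])
        overlaps = iou(boxes[best], boxes)
        scores = scores * np.where(overlaps > iou_thresh, 1.0 - overlaps, 1.0)
        mask = np.ones(len(boxes), bool); mask[best] = False      # drop the selected box
        alive = mask & (scores > score_thresh)                    # and heavily decayed ones
        boxes, scores = boxes[alive], scores[alive]
    return np.array(keep)

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]])
scores = np.array([0.9, 0.8, 0.7])
print(soft_nms(boxes, scores))
```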
Traffic Flow Imputation Using Parallel Data and Generative Adversarial Networks Traffic data imputation is critical for both research and applications of intelligent transportation systems. To develop traffic data imputation models with high accuracy, traffic data must be large and diverse, which is costly. An alternative is to use synthetic traffic data, which are cheap and easy to access. In this paper, we propose a novel approach using parallel data and generative adversarial networks (GANs) to enhance traffic data imputation. Parallel data is a recently proposed method of using synthetic and real data for data mining and data-driven processes, in which we apply GANs to generate synthetic traffic data. As it is difficult for the standard GAN algorithm to generate time-dependent traffic flow data, we made twofold modifications: 1) using the real data or the corrupted ones instead of random vectors as latent codes to the generator within GANs and 2) introducing a representation loss to measure the discrepancy between the synthetic data and the real data. The experimental results on a real traffic dataset demonstrate that our method can significantly improve the performance of traffic data imputation.
ParaUDA: Invariant Feature Learning With Auxiliary Synthetic Samples for Unsupervised Domain Adaptation Recognizing and locating objects by algorithms are essential and challenging issues for Intelligent Transportation Systems. However, the increasing demand for large amounts of labeled data hinders the further application of deep learning-based object detection. One of the optimal solutions is to train the target model with an existing dataset and then adapt it to new scenes, namely Unsupervised Domain Adaptation (UDA). However, most existing methods at the pixel level mainly focus on adapting the model from the source domain to the target domain and ignore the essence of UDA, which is domain-invariant feature learning. Meanwhile, almost all methods at the feature level neglect to match conditional distributions for UDA while conducting feature alignment between the source and target domains. Considering these problems, this paper proposes ParaUDA, a novel framework for learning invariant representations for UDA at two levels: the pixel level and the feature level. At the pixel level, we adopt CycleGAN to conduct domain transfer and convert the original unsupervised domain adaptation problem into a supervised one. At the feature level, we adopt an adversarial adaptation model to learn domain-invariant representations by aligning the distributions of domains between different image pairs with the same mixture distributions. We evaluate our proposed framework in different scenes: from synthetic scenes to real scenes, from normal weather to challenging weather, and across cameras. The results of all the above experiments show that ParaUDA is effective and robust for adapting object detection models from source scenes to target scenes.
China's 12-Year Quest of Autonomous Vehicular Intelligence: The Intelligent Vehicles Future Challenge Program In this article, we introduce the Intelligent Vehicles Future Challenge of China (IVFC), which has lasted 12 years. Some key features of the tests and a few interesting findings of IVFC are selected and presented. Through the IVFCs held between 2009 and 2020, we gradually established a set of theories, methods, and tools to collect tests' data and efficiently evaluate the performance of autonomous vehicles, so that we could learn how to improve both the autonomous vehicles and the testing system itself.
Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
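The original R-CNN predates today's model zoos, but its descendant Faster R-CNN, which packages the same region-proposals-plus-CNN idea behind one call, ships with torchvision and can be run in a few lines. The image path is a placeholder, and torchvision 0.13+ is assumed for the `weights` argument.

```python
# Running a pre-trained Faster R-CNN (a descendant of R-CNN) from torchvision.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

img = convert_image_dtype(read_image("street.jpg"), torch.float)   # CHW tensor in [0, 1]
with torch.no_grad():
    pred = model([img])[0]                # dict with boxes, labels, scores for one image

for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if score > 0.8:
        print(int(label), [round(v, 1) for v in box.tolist()], float(score))
```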
A comparative study of texture measures with classification based on featured distributions This paper evaluates the performance both of some texture measures which have been successfully used in various applications and of some new promising approaches proposed recently. For classification a method based on Kullback discrimination of sample and prototype distributions is used. The classification results for single features with one-dimensional feature value distributions and for pairs of complementary features with two-dimensional distributions are presented
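A small sketch of the classification rule described here, Kullback discrimination between a sample feature distribution and class prototype distributions, using LBP histograms from scikit-image as the texture feature. The LBP parameters and bin count are illustrative, not the exact settings of the study.

```python
# Texture classification by Kullback discrimination of sample and prototype
# LBP-code distributions (lower statistic = better match).
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_hist(image, P=8, R=1.0, bins=256):
    codes = local_binary_pattern(image, P, R)            # default LBP codes in [0, 2**P)
    h, _ = np.histogram(codes, bins=bins, range=(0, bins), density=True)
    return h + 1e-9                                      # avoid log(0)

def kullback_score(sample, prototype):
    """Log-likelihood (Kullback) statistic between two distributions."""
    return -np.sum(sample * np.log(prototype))

def classify(sample_img, prototypes):
    s = lbp_hist(sample_img)
    return min(prototypes, key=lambda name: kullback_score(s, prototypes[name]))

# Usage idea (class names and training images are hypothetical):
# prototypes = {'canvas': lbp_hist(train_canvas), 'wool': lbp_hist(train_wool)}
# print(classify(test_patch, prototypes))
```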
Social Perception and Steering for Online Avatars This paper presents work on a new platform for producing realistic group conversation dynamics in shared virtual environments. An avatar, representing users, should perceive the surrounding social environment just as humans would, and use the perceptual information for driving low level reactive behaviors. Unconscious reactions serve as evidence of life, and can also signal social availability and spatial awareness to others. These behaviors get lost when avatar locomotion requires explicit user control. For automating such behaviors we propose a steering layer in the avatars that manages a set of prioritized behaviors executed at different frequencies, which can be activated or deactivated and combined together. This approach gives us enough flexibility to model the group dynamics of social interactions as a set of social norms that activate relevant steering behaviors. A basic set of behaviors is described for conversations, some of which generate a social force field that makes the formation of conversation groups fluidly adapt to external and internal noise, through avatar repositioning and reorientations. The resulting social group behavior appears relatively robust, but perhaps more importantly, it starts to bring a new sense of relevance and continuity to the virtual bodies that often get separated from the ongoing conversation in the chat window.
Node Reclamation and Replacement for Long-Lived Sensor Networks When deployed for long-term tasks, the energy required to support sensor nodes' activities is far more than the energy that can be preloaded in their batteries. No matter how the battery energy is conserved, once the energy is used up, the network life terminates. Therefore, guaranteeing long-term energy supply has persisted as a big challenge. To address this problem, we propose a node reclamation and replacement (NRR) strategy, with which a mobile robot or human labor called mobile repairman (MR) periodically traverses the sensor network, reclaims nodes with low or no power supply, replaces them with fully charged ones, and brings the reclaimed nodes back to an energy station for recharging. To effectively and efficiently realize the strategy, we develop an adaptive rendezvous-based two-tier scheduling scheme (ARTS) to schedule the replacement/reclamation activities of the MR and the duty cycles of nodes. Extensive simulations have been conducted to verify the effectiveness and efficiency of the ARTS scheme.
Haptic feedback for enhancing realism of walking simulations. In this paper, we describe several experiments whose goal is to evaluate the role of plantar vibrotactile feedback in enhancing the realism of walking experiences in multimodal virtual environments. To achieve this goal we built an interactive and a noninteractive multimodal feedback system. While during the use of the interactive system subjects physically walked, during the use of the noninteractive system the locomotion was simulated while subjects were sitting on a chair. In both the configurations subjects were exposed to auditory and audio-visual stimuli presented with and without the haptic feedback. Results of the experiments provide a clear preference toward the simulations enhanced with haptic feedback showing that the haptic channel can lead to more realistic experiences in both interactive and noninteractive configurations. The majority of subjects clearly appreciated the added feedback. However, some subjects found the added feedback unpleasant. This might be due, on one hand, to the limits of the haptic simulation and, on the other hand, to the different individual desire to be involved in the simulations. Our findings can be applied to the context of physical navigation in multimodal virtual environments as well as to enhance the user experience of watching a movie or playing a video game.
Vehicular Sensing Networks in a Smart City: Principles, Technologies and Applications. Given the escalating population across the globe, it has become paramount to construct smart cities, aiming for improving the management of urban flows relying on efficient information and communication technologies (ICT). Vehicular sensing networks (VSNs) play a critical role in maintaining the efficient operation of smart cities. Naturally, there are numerous challenges to be solved before the w...
Dual-objective mixed integer linear program and memetic algorithm for an industrial group scheduling problem Group scheduling problems have attracted much attention owing to their many practical applications. This work proposes a new bi-objective serial-batch group scheduling problem considering the constraints of sequence-dependent setup time, release time, and due time. It originates from an important industrial process, i.e., the wire rod and bar rolling process in steel production systems. Two objecti...
1.2
0.2
0.2
0.2
0.2
0.2
0.1
0.006897
0
0
0
0
0
0
Quaternion polar harmonic Fourier moments for color images. • Quaternion polar harmonic Fourier moments (QPHFM) are proposed. • Complex Chebyshev-Fourier moments (CHFM) are extended to quaternion QCHFM. • Comparison experiments between QPHFM and QZM, QPZM, QOFMM, QCHFM and QRHFM are conducted. • QPHFM performs superbly in image reconstruction and invariant object recognition. • The importance of the phase information of QPHFM in image reconstruction is discussed.
Combined invariants to similarity transformation and to blur using orthogonal Zernike moments. The derivation of moment invariants has been extensively investigated in the past decades. In this paper, we construct a set of invariants derived from Zernike moments which is simultaneously invariant to similarity transformation and to convolution with a circularly symmetric point spread function (PSF). Two main contributions are provided: the theoretical framework for deriving the Zernike moments of a blurred image and the way to construct the combined geometric-blur invariants. The performance of the proposed descriptors is evaluated with various PSFs and similarity transformations. The comparison of the proposed method with existing ones is also provided in terms of pattern recognition accuracy, template matching and robustness to noise. Experimental results show that the proposed descriptors perform better overall.
Fast computation of Jacobi-Fourier moments for invariant image recognition The Jacobi-Fourier moments (JFMs) provide a wide class of orthogonal rotation invariant moments (ORIMs) which are useful for many image processing, pattern recognition and computer vision applications. They, however, suffer from high time complexity and numerical instability at high orders of moment. In this paper, a fast method based on the recursive computation of the radial kernel function of JFMs is proposed, which not only reduces time complexity but also improves their numerical stability. Highlights: a fast recursive method for the computation of Jacobi-Fourier moments is proposed. The proposed method not only reduces time complexity but also improves the numerical stability of the moments. Better image reconstruction is achieved with lower reconstruction error. The proposed method is useful for many image processing, pattern recognition and computer vision applications.
Radial shifted Legendre moments for image analysis and invariant image recognition. The rotation, scaling and translation invariant property of image moments has high significance in image recognition. Legendre moments, as classical orthogonal moments, have been widely used in image analysis and recognition. Since Legendre moments are defined in Cartesian coordinates, rotation invariance is difficult to achieve. In this paper, we first derive two types of transformed Legendre polynomials: substituted and weighted radial shifted Legendre polynomials. Based on these two types of polynomials, two radial orthogonal moments, named substituted radial shifted Legendre moments (SRSLMs) and weighted radial shifted Legendre moments (WRSLMs), are proposed. The proposed moments are orthogonal in the polar coordinate domain and can be thought of as generalized and orthogonalized complex moments. They have better image reconstruction performance, lower information redundancy and higher noise robustness than the existing radial orthogonal moments. Finally, a mathematical framework for obtaining the rotation, scaling and translation invariants of these two types of radial shifted Legendre moments is provided. Theoretical and experimental results show the superiority of the proposed methods in terms of image reconstruction capability and invariant recognition accuracy under both noisy and noise-free conditions.
Robust circularly orthogonal moment based on Chebyshev rational function. The circularly orthogonal moments have been widely used in many computer vision applications. Unfortunately, they suffer from two errors namely numerical integration error and geometric error, which heavily degrade their reconstruction accuracy and pattern recognition performance. This paper describes a new kind of circularly orthogonal moments based on Chebyshev rational function. Unlike the conventional circularly orthogonal moments which have been defined in a unit disk, the proposed moment is defined in whole polar coordinates domain. In addition, given an order n, its radial projection function is smoother and oscillates at lower frequency compared with the existing circularly orthogonal moments, and so it is free of the geometric error and highly robust to the numerical integration error. Experimental results indicate that the proposed moments perform better in image reconstruction and pattern classification, and yield higher tolerance to image noise and smooth distortion in comparison with the existing circularly orthogonal moments.
The modified generic polar harmonic transforms for image representation This paper introduces four classes of orthogonal transforms by modifying the generic polar harmonic transforms. Then, the rotation invariant feature of the proposed transforms is investigated. Compared with the traditional generic polar harmonic transforms, the proposed transforms have the ability to describe the central region of the image with a parameter controlling the area of the region. Experimental results verified the image representation capability of the proposed transforms and showed better performance of the proposed transform in terms of rotation invariant pattern recognition.
Information hiding in medical images: a robust medical image watermarking system for E-healthcare Electronic transmission of medical images is one of the primary requirements in a typical Electronic-Healthcare (E-Healthcare) system. However, this transmission could be exposed to hackers, who may modify the whole medical image or only a part of it during transit. To guarantee the integrity of a medical image, digital watermarking is used. This paper presents two different watermarking algorithms for medical images in the transform domain. In the first technique, a digital watermark and an Electronic Patient Record (EPR) have been embedded in both regions: the Region of Interest (ROI) and the Region of Non-Interest (RONI). In the second technique, the Region of Interest (ROI) is kept untouched for tele-diagnosis purposes and the Region of Non-Interest (RONI) is used to hide the digital watermark and the EPR. In either algorithm, an 8 × 8 block-based Discrete Cosine Transform (DCT) has been used. In each 8 × 8 block, two DCT coefficients are selected and their magnitudes are compared for embedding the watermark/EPR. The selected coefficients are modified by using a threshold to embed a bit '0' or bit '1' of the watermark/EPR. The proposed techniques have been found robust not only to single attacks but also to hybrid attacks. Comparison results vis-a-vis payload and robustness show that the proposed techniques perform better than some existing state-of-the-art techniques. As such, the proposed algorithms could be useful for e-healthcare systems.
Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
Adaptive Learning-Based Task Offloading for Vehicular Edge Computing Systems. The vehicular edge computing system integrates the computing resources of vehicles, and provides computing services for other vehicles and pedestrians with task offloading. However, the vehicular task offloading environment is dynamic and uncertain, with fast varying network topologies, wireless channel states, and computing workloads. These uncertainties bring extra challenges to task offloading. In this paper, we consider the task offloading among vehicles, and propose a solution that enables vehicles to learn the offloading delay performance of their neighboring vehicles while offloading computation tasks. We design an adaptive learning based task offloading (ALTO) algorithm based on the multi-armed bandit theory, in order to minimize the average offloading delay. ALTO works in a distributed manner without requiring frequent state exchange, and is augmented with input-awareness and occurrence-awareness to adapt to the dynamic environment. The proposed algorithm is proved to have a sublinear learning regret. Extensive simulations are carried out under both synthetic scenario and realistic highway scenario, and results illustrate that the proposed algorithm achieves low delay performance, and decreases the average delay up to 30% compared with the existing upper confidence bound based learning algorithm.
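ALTO itself adds input- and occurrence-awareness, which are not reproduced here; the snippet below only sketches the classical UCB-style bandit loop it builds on, adapted to delay minimisation. The offload_delay function is a hypothetical stand-in for measuring one offloading round.

```python
# UCB1-style bandit for picking the neighbouring vehicle with the lowest expected delay.
import math
import random

def offload_delay(k):
    """Hypothetical environment response: noisy delay of offloading to neighbour k."""
    return random.gauss([0.8, 0.5, 1.1][k], 0.1)

K, T = 3, 500
counts, mean_delay = [0] * K, [0.0] * K

for t in range(1, T + 1):
    if t <= K:
        k = t - 1                          # play every neighbour once to initialise
    else:                                  # minimise: mean delay minus an exploration bonus
        k = min(range(K), key=lambda a: mean_delay[a] - math.sqrt(2 * math.log(t) / counts[a]))
    d = offload_delay(k)
    counts[k] += 1
    mean_delay[k] += (d - mean_delay[k]) / counts[k]   # running average update

print('selection counts per neighbour:', counts)       # the low-delay neighbour dominates
```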
NLTK: the natural language toolkit The Natural Language Toolkit is a suite of program modules, data sets, tutorials and exercises, covering symbolic and statistical natural language processing. NLTK is written in Python and distributed under the GPL open source license. Over the past three years, NLTK has become popular in teaching and research. We describe the toolkit and report on its current state of development.
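A few lines showing typical NLTK usage (tokenisation, part-of-speech tagging, simple frequency statistics); the resource names passed to nltk.download can vary slightly across NLTK versions.

```python
# Basic symbolic + statistical processing with NLTK.
import nltk

nltk.download('punkt', quiet=True)                        # tokenizer models
nltk.download('averaged_perceptron_tagger', quiet=True)   # POS tagger model

text = "NLTK is written in Python and distributed under an open source license."
tokens = nltk.word_tokenize(text)          # tokenisation
tags = nltk.pos_tag(tokens)                # part-of-speech tagging
print(tags[:5])

fd = nltk.FreqDist(w.lower() for w in tokens)   # word frequency distribution
print(fd.most_common(3))
```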
Design and simulation of a joint-coupled orthosis for regulating FES-aided gait A hybrid functional electrical stimulation (FES)/orthosis system is being developed which combines two channels of (surface-electrode-based) electrical stimulation with a computer-controlled orthosis for the purpose of restoring gait to spinal cord injured (SCI) individuals (albeit with a stability aid, such as a walker). The orthosis is an energetically passive, controllable device which 1) unidirectionally couples hip to knee flexion; 2) aids hip and knee flexion with a spring assist; and 3) incorporates sensors and modulated friction brakes, which are used in conjunction with electrical stimulation for the feedback control of joint (and therefore limb) trajectories. This paper describes the hybrid FES approach and the design of the joint coupled orthosis. A dynamic simulation of an SCI individual using the hybrid approach is described, and results from the simulation are presented that indicate the promise of the JCO approach.
Adaptive dynamic programming and optimal control of nonlinear nonaffine systems. In this paper, a novel optimal control design scheme is proposed for continuous-time nonaffine nonlinear dynamic systems with unknown dynamics by adaptive dynamic programming (ADP). The proposed methodology iteratively updates the control policy online by using the state and input information without identifying the system dynamics. An ADP algorithm is developed, and can be applied to a general class of nonlinear control design problems. The convergence analysis for the designed control scheme is presented, along with rigorous stability analysis for the closed-loop system. The effectiveness of this new algorithm is illustrated by two simulation examples.
Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network. In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (including data from 63 patients with 34,281 events) and testing (including data from 19 patients with 8571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F-score of 0.99 were attained with the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
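A compact PyTorch sketch of a 1-D CNN of the kind described (convolution, ReLU and max-pooling blocks with dropout, then a binary apnea/no-apnea head). The layer sizes and the 6000-sample segment length are assumptions, not the paper's exact six-layer design.

```python
# Illustrative 1-D CNN for classifying single-lead ECG segments.
import torch
import torch.nn as nn

class ApneaCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=11, padding=5), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=11, padding=5), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=11, padding=5), nn.ReLU(), nn.MaxPool1d(4),
            nn.Dropout(0.5),
        )
        self.classifier = nn.Linear(64, 2)         # apnea / no apnea

    def forward(self, x):                          # x: (batch, 1, samples)
        h = self.features(x).mean(dim=-1)          # global average pooling over time
        return self.classifier(h)

model = ApneaCNN()
dummy = torch.randn(8, 1, 6000)                    # 8 fake ECG segments
print(model(dummy).shape)                          # torch.Size([8, 2])
```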
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies have focused on the structural design and assistance force optimization of soft LLEs, little work has been conducted on the hardware circuit design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and to introduce its hardware circuit design. A soft LLE for hip flexion assistance and a scalable hardware circuit system are proposed. To assess the efficacy of the soft LLE, experimental tests evaluating sensor data acquisition, force tracking performance, lower limb muscle activity and metabolic cost were conducted. The timing error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
1.11
0.1
0.1
0.1
0.1
0.1
0.0275
0
0
0
0
0
0
0
Mask2LFP: Mask-constrained Adversarial Latent Fingerprint Synthesis Latent fingerprints are one of the most valuable and unique biometric attributes and are extensively used in forensic and law enforcement applications. Compared to rolled/plain fingerprints, latent fingerprints are of poor quality in terms of friction ridge patterns and hence more challenging for automatic fingerprint recognition systems. Considering the difficulties of dusting, lifting, and recovering latent fingerprints, this type of fingerprint remains expensive to develop and collect. In this paper, we present a novel approach for synthetic latent fingerprint generation using a Generative Adversarial Network (GAN). Our proposed framework, named mask to latent fingerprint (Mask2LFP), uses binary masks of distorted fingerprint-like shapes as input and outputs realistic latent fingerprints. This work focuses on the generation of synthetic latent fingerprints. The aim is to alleviate the scarcity of latent fingerprint data and serve the increasing need for developing, evaluating, and enhancing fingerprint-based identification systems, especially in forensic applications.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
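NLTK ships an implementation of the metric; a sentence-level example with smoothing (needed because short sentences often lack higher-order n-gram matches) looks like this:

```python
# Sentence-level BLEU with NLTK; weights are the standard uniform 4-gram weights.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [['the', 'cat', 'is', 'on', 'the', 'mat']]   # list of reference token lists
candidate = ['the', 'cat', 'sat', 'on', 'the', 'mat']    # system output tokens

score = sentence_bleu(reference, candidate,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)
print(round(score, 3))
```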
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers, all of them capable of stabilizing a specific LTI process, in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
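In PyTorch, the bidirectional structure amounts to one flag on the recurrent layer; the forward and backward hidden states are concatenated at every frame, so no future-frame cutoff is imposed on the input. The dimensions below are illustrative.

```python
# A bidirectional recurrent layer producing per-frame class scores.
import torch
import torch.nn as nn

T, B, F, H, C = 50, 4, 13, 64, 10            # frames, batch, features, hidden size, classes
rnn = nn.LSTM(input_size=F, hidden_size=H, bidirectional=True)
head = nn.Linear(2 * H, C)                   # 2*H: forward + backward states concatenated

x = torch.randn(T, B, F)                     # e.g. MFCC-like feature frames
outputs, _ = rnn(x)                          # (T, B, 2*H), each frame sees past and future
frame_logits = head(outputs)                 # per-frame class scores, (T, B, C)
print(frame_logits.shape)
```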
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore, a fair non-repudiation protocol has to generate non-repudiation of origin evidence intended for Bob, and non-repudiation of receipt evidence destined for Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with a trusted third party (TTP). For the latter ones, we discuss the evolution of the TTP's involvement and, among others, describe the most recent protocol using a transparent TTP. We also discuss some ad hoc problems related to the management of non-repudiation evidence.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb for designing the GA operators and selecting the GA parameters; instead, trial and error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally rather than probabilistically. Because there are no crossover and mutation rates to be selected, the proposed improved GA can be more easily applied to a problem than conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional one and other methods.
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Cooperative Traffic Signal Control with Traffic Flow Prediction in Multi-Intersection. As traffic congestion in cities becomes serious, intelligent traffic signal control has been actively studied. Deep Q-Network (DQN), a representative deep reinforcement learning algorithm, has been applied to various domains, from fully observable game environments to traffic signal control. Owing to DQN's effectiveness, deep reinforcement learning has advanced rapidly and various DQN extensions have been introduced. However, most traffic signal control research has been performed at a single intersection, and because virtual simulators are used, variables that affect actual traffic conditions are often not taken into account. In this paper, we propose cooperative traffic signal control with traffic flow prediction (TFP-CTSC) for multiple intersections. A traffic flow prediction model predicts the future traffic state and accounts for the variables that affect actual traffic conditions. In addition, for cooperative traffic signal control across intersections, each intersection is modeled as an agent, and each agent is trained to take the best action based on the traffic states it receives from the road environment. To handle multiple intersections efficiently, agents share their traffic information with adjacent intersections. In the experiment, TFP-CTSC is compared with existing traffic signal control algorithms in a 4 x 4 intersection environment, and we verify both the traffic flow prediction and the cooperative method.
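The toy below illustrates the reinforcement-learning loop behind such controllers using tabular Q-learning on a two-approach queue model; it is a simplified stand-in for the DQN agent, omits the traffic flow prediction and multi-agent sharing, and all arrival rates, rewards, and hyperparameters are assumptions.

```python
import random

# Tiny single-intersection toy: two approaches with queues; action = which approach gets green.
ACTIONS = (0, 1)          # 0: green for approach A, 1: green for approach B
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def step(state, action, rng):
    qa, qb = state
    qa = max(0, qa - (3 if action == 0 else 0)) + rng.choice((0, 1, 1, 2))   # arrivals on A
    qb = max(0, qb - (3 if action == 1 else 0)) + rng.choice((0, 1, 2))      # arrivals on B
    qa, qb = min(qa, 10), min(qb, 10)             # cap queues to keep the state space tiny
    return (qa, qb), -(qa + qb)                   # reward: negative total queue length

Q = {}
rng = random.Random(0)
state = (0, 0)
for _ in range(50000):
    # Epsilon-greedy action selection over the tabular Q-values.
    if rng.random() < EPS:
        a = rng.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda u: Q.get((state, u), 0.0))
    nxt, r = step(state, a, rng)
    best_next = max(Q.get((nxt, u), 0.0) for u in ACTIONS)
    Q[(state, a)] = Q.get((state, a), 0.0) + ALPHA * (r + GAMMA * best_next - Q.get((state, a), 0.0))
    state = nxt

policy = {s: max(ACTIONS, key=lambda u: Q.get((s, u), 0.0)) for s, _ in Q}
print("greedy phase choice when A is congested and B is empty:", policy.get((8, 0), "unvisited"))
```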
Estimation of prediction error by using K-fold cross-validation Estimation of prediction accuracy is important when our aim is prediction. The training error is an easy estimate of prediction error, but it has a downward bias. On the other hand, K-fold cross-validation has an upward bias. The upward bias may be negligible in leave-one-out cross-validation, but it sometimes cannot be neglected in 5-fold or 10-fold cross-validation, which are favored from a computational standpoint. Since the training error has a downward bias and K-fold cross-validation has an upward bias, there will be an appropriate estimate in a family that connects the two estimates. In this paper, we investigate two families that connect the training error and K-fold cross-validation.
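A minimal sketch of the two biases discussed above, comparing the training error with 5-fold and 10-fold cross-validation on a synthetic ridge-regression task; the data-generating process and model are arbitrary assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=200)     # noise variance 1, so the ideal MSE is 1.0

model = Ridge(alpha=1.0)

# Training error: evaluated on the same data the model was fit to (downward-biased).
train_mse = mean_squared_error(y, model.fit(X, y).predict(X))

# K-fold CV: each fold is trained on fewer samples than the final model (upward-biased).
cv5_mse = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error").mean()
cv10_mse = -cross_val_score(model, X, y, cv=10, scoring="neg_mean_squared_error").mean()

print(f"training MSE  : {train_mse:.3f}")
print(f"5-fold CV MSE : {cv5_mse:.3f}")
print(f"10-fold CV MSE: {cv10_mse:.3f}")
```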
Traffic Flow Forecasting for Urban Work Zones None of the numerous existing traffic flow forecasting models focuses on work zones. Work zone events create conditions that are different from both normal operating conditions and incident conditions. In this paper, four models were developed for forecasting traffic flow for planned work zone events. The four models are random forest, regression tree, multilayer feedforward neural network, and nonparametric regression. Both long-term and short-term traffic flow forecasting applications were investigated. Long-term forecasting involves predicting 24 h in advance using historical traffic data, and short-term forecasting involves predicting 1 h and 45, 30, and 15 min in advance using real-time temporal and spatial traffic data. Models were evaluated using data from work zone events on two types of roadways, a freeway, i.e., I-270, and a signalized arterial, i.e., MO-141, in St. Louis, MO, USA. The results showed that the random forest model yielded the most accurate long-term and short-term work zone traffic flow forecasts. For freeway data, the most influential variables were the latest interval's look-back traffic flows at the upstream, downstream, and current locations. For arterial data, the most influential variables were the traffic flows from the three look-back intervals at the current location only.
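A small sketch of the short-term setup described above: a random forest predicts the next 15-min interval from the three most recent intervals at the same location. The synthetic traffic series and hyperparameters are assumptions, not the paper's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic 15-min traffic counts with a daily cycle plus noise (illustrative data only).
rng = np.random.default_rng(1)
t = np.arange(96 * 60)                               # 60 days of 15-min intervals
flow = 400 + 300 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 40, t.size)

# Short-term setup: predict the next interval from the three most recent intervals
# at the same location (look-back features, as in the arterial case above).
LOOKBACK = 3
X = np.column_stack([flow[i:len(flow) - LOOKBACK + i] for i in range(LOOKBACK)])
y = flow[LOOKBACK:]

split = len(y) - 96                                  # hold out the final day
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))
print(f"15-min-ahead MAE on the held-out day: {mae:.1f} vehicles/interval")
```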
Daily long-term traffic flow forecasting based on a deep neural network. •A new deep learning algorithm to predict daily long-term traffic flow data using contextual factors. •Deep neural network to mine the relationship between traffic flow data and contextual factors. •Advanced batch training can effectively improve convergence of the training process.
A 3D CNN-LSTM-Based Image-to-Image Foreground Segmentation The video-based separation of foreground (FG) and background (BG) has been widely studied due to its vital role in many applications, including intelligent transportation and video surveillance. Most of the existing algorithms are based on traditional computer vision techniques that perform pixel-level processing assuming that FG and BG possess distinct visual characteristics. Recently, state-of-the-art solutions exploit deep learning models targeted originally for image classification. Major drawbacks of such a strategy are the lacking delineation of FG regions due to missing temporal information as they segment the FG based on a single frame object detection strategy. To grapple with this issue, we excogitate a 3D convolutional neural network (3D CNN) with long short-term memory (LSTM) pipelines that harness seminal ideas, viz., fully convolutional networking, 3D transpose convolution, and residual feature flows. Thence, an FG-BG segmenter is implemented in an encoder-decoder fashion and trained on representative FG-BG segments. The model devises a strategy called double encoding and slow decoding, which fuses the learned spatio-temporal cues with appropriate feature maps both in the down-sampling and up-sampling paths for achieving well generalized FG object representation. Finally, from the Sigmoid confidence map generated by the 3D CNN-LSTM model, the FG is identified automatically by using Nobuyuki Otsu’s method and an empirical global threshold. The analysis of experimental results via standard quantitative metrics on 16 benchmark datasets including both indoor and outdoor scenes validates that the proposed 3D CNN-LSTM achieves competitive performance in terms of figure of merit evaluated against prior and state-of-the-art methods. Besides, a failure analysis is conducted on 20 video sequences from the DAVIS 2016 dataset.
Improving Traffic Flow Prediction With Weather Information in Connected Cars: A Deep Learning Approach. Transportation systems might be heavily affected by factors such as accidents and weather. Specifically, inclement weather conditions may have a drastic impact on travel time and traffic flow. This study has two objectives: first, to investigate a correlation between weather parameters and traffic flow and, second, to improve traffic flow prediction by proposing a novel holistic architecture. It i...
Accurate Self-Localization in RFID Tag Information Grids Using FIR Filtering Grid navigation spaces nested with the radio-frequency identification (RFID) tags are promising for industrial and other needs, because each tag can deliver information about a local two-dimensional or three-dimensional surrounding. The approach, however, requires high accuracy in vehicle self-localization. Otherwise, errors may lead to collisions; possibly even fatal. We propose a new extended finite impulse response (EFIR) filtering algorithm and show that it meets this need. The EFIR filter requires an optimal averaging interval, but does not involve the noise statistics which are often not well known to the engineer. It is more accurate than the extended Kalman filter (EKF) under real operation conditions and its iterative algorithm has the Kalman form. Better performance of the proposed EFIR filter is demonstrated based on extensive simulations in a comparison to EKF, which is widely used in RFID tag grids. We also show that errors in noise covariances may provoke divergence in EKF, whereas the EFIR filter remains stable and is thus more robust.
Evolutionary computation: comments on the history and current state Evolutionary computation has started to receive significant attention during the last decade, although the origins can be traced back to the late 1950's. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) (with links to genetic programming (GP) and classifier systems (CS)), evolution strategies (ES), and evolutionary programming (EP) by analysis and comparison of their most important constituents (i.e. representations, variation operators, reproduction, and selection mechanism). Finally, we give a brief overview on the manifold of application domains, although this necessarily must remain incomplete
Supporting social navigation on the World Wide Web This paper discusses a navigation behavior on Internet information services, in particular the World Wide Web, which is characterized by pointing out of information using various communication tools. We call this behavior social navigation as it is based on communication and interaction with other users, be that through email, or any other means of communication. Social navigation phenomena are quite common although most current tools (like Web browsers or email clients) offer very little support for it. We describe why social navigation is useful and how it can be better supported in future systems. We further describe two prototype systems that, although originally not designed explicitly as tools for social navigation, provide features that are typical for social navigation systems. One of these systems, the Juggler system, is a combination of a textual virtual environment and a Web client. The other system is a prototype of a Web- hotlist organizer, called Vortex. We use both systems to describe fundamental principles of social navigation systems.
Proofs of Storage from Homomorphic Identification Protocols Proofs of storage (PoS) are interactive protocols allowing a client to verify that a server faithfully stores a file. Previous work has shown that proofs of storage can be constructed from any homomorphic linear authenticator (HLA). The latter, roughly speaking, are signature/message authentication schemes where `tags' on multiple messages can be homomorphically combined to yield a `tag' on any linear combination of these messages. We provide a framework for building public-key HLAs from any identification protocol satisfying certain homomorphic properties. We then show how to turn any public-key HLA into a publicly-verifiable PoS with communication complexity independent of the file length and supporting an unbounded number of verifications. We illustrate the use of our transformations by applying them to a variant of an identification protocol by Shoup, thus obtaining the first unbounded-use PoS based on factoring (in the random oracle model).
Well-Solvable Special Cases of the Traveling Salesman Problem: A Survey. The traveling salesman problem (TSP) belongs to the most basic, most important, and most investigated problems in combinatorial optimization. Although it is an ${\cal NP}$-hard problem, many of its special cases can be solved efficiently in polynomial time. We survey these special cases with emphasis on the results that have been obtained during the decade 1985--1995. This survey complements an earlier survey from 1985 compiled by Gilmore, Lawler, and Shmoys [The Traveling Salesman Problem---A Guided Tour of Combinatorial Optimization, Wiley, Chichester, pp. 87--143].
Reinforcement Q-learning for optimal tracking control of linear discrete-time systems with unknown dynamics. In this paper, a novel approach based on the Q-learning algorithm is proposed to solve the infinite-horizon linear quadratic tracker (LQT) for unknown discrete-time systems in a causal manner. It is assumed that the reference trajectory is generated by a linear command generator system. An augmented system composed of the original system and the command generator is constructed and it is shown that the value function for the LQT is quadratic in terms of the state of the augmented system. Using the quadratic structure of the value function, a Bellman equation and an augmented algebraic Riccati equation (ARE) for solving the LQT are derived. In contrast to the standard solution of the LQT, which requires the solution of an ARE and a noncausal difference equation simultaneously, in the proposed method the optimal control input is obtained by only solving an augmented ARE. A Q-learning algorithm is developed to solve online the augmented ARE without any knowledge about the system dynamics or the command generator. Convergence to the optimal solution is shown. A simulation example is used to verify the effectiveness of the proposed control scheme.
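As a model-based reference for what the Q-learning scheme learns without a model, the sketch below builds the augmented system from a plant and a command generator and solves a discounted augmented ARE by fixed-point Riccati iteration to obtain a tracking gain; the plant matrices, reference oscillator, weights, and discount factor are made-up assumptions, and the model-free Q-learning step itself is omitted.

```python
import numpy as np

# Illustrative plant x_{k+1} = A x_k + B u_k with output y_k = C x_k, and a reference
# r_{k+1} = F r_k generated by a slow oscillator; all numbers below are assumptions.
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
w = 0.1
F = np.array([[np.cos(w), np.sin(w)], [-np.sin(w), np.cos(w)]])
Qe, R, gamma = np.array([[10.0]]), np.array([[1.0]]), 0.95

# Augmented state z = [x; r]: z_{k+1} = T z_k + B1 u_k, tracking error e_k = C1 z_k.
T = np.block([[A, np.zeros((2, 2))], [np.zeros((2, 2)), F]])
B1 = np.vstack([B, np.zeros((2, 1))])
C1 = np.hstack([C, np.array([[-1.0, 0.0]])])       # e = C x - (first component of r)
Q = C1.T @ Qe @ C1

# Solve the discounted augmented ARE by fixed-point (value) iteration.
P = np.zeros((4, 4))
for _ in range(5000):
    K = gamma * np.linalg.solve(R + gamma * B1.T @ P @ B1, B1.T @ P @ T)
    P_next = Q + gamma * T.T @ P @ (T - B1 @ K)
    if np.max(np.abs(P_next - P)) < 1e-9:
        P = P_next
        break
    P = P_next

print("tracking gain K acting on [x; r] (u = -K z):\n", np.round(K, 4))
```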
An indoor localization solution using Bluetooth RSSI and multiple sensors on a smartphone. In this paper, we propose an indoor positioning system using a Bluetooth receiver, an accelerometer, a magnetic field sensor, and a barometer on a smartphone. The Bluetooth receiver is used to estimate distances from beacons. The accelerometer and magnetic field sensor are used to trace the movement of moving people in the given space. The horizontal location of the person is determined by received signal strength indications (RSSIs) and the traced movement. The barometer is used to measure the vertical position where a person is located. By combining RSSIs, the traced movement, and the vertical position, the proposed system estimates the indoor position of moving people. In experiments, the proposed approach showed excellent performance in localization with an overall error of 4.8%.
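A minimal sketch of the RSSI-to-distance step that such systems typically rely on, using the standard log-distance path-loss model plus a crude weighted-centroid fix; the beacon coordinates, TX power, and path-loss exponent are illustrative assumptions, and the accelerometer, magnetometer, and barometer fusion is omitted.

```python
# Known beacon positions (x, y) in metres and a current RSSI reading in dBm for each.
# All values, including TX power and the path-loss exponent, are illustrative assumptions.
BEACONS = {"b1": ((0.0, 0.0), -68.0), "b2": ((8.0, 0.0), -74.0), "b3": ((0.0, 6.0), -60.0)}
TX_POWER = -59.0        # expected RSSI at 1 m from the beacon
PATH_LOSS_EXP = 2.2     # environment-dependent path-loss exponent

def rssi_to_distance(rssi):
    # Log-distance path-loss model: RSSI = TX_POWER - 10 * n * log10(d)
    return 10 ** ((TX_POWER - rssi) / (10 * PATH_LOSS_EXP))

def weighted_centroid():
    # Crude horizontal fix: weight each beacon by the inverse of its estimated distance.
    weights = {name: 1.0 / rssi_to_distance(rssi) for name, (_, rssi) in BEACONS.items()}
    total = sum(weights.values())
    x = sum(BEACONS[name][0][0] * w for name, w in weights.items()) / total
    y = sum(BEACONS[name][0][1] * w for name, w in weights.items()) / total
    return x, y

for name, (_, rssi) in BEACONS.items():
    print(f"{name}: estimated distance {rssi_to_distance(rssi):.2f} m")
print("weighted-centroid position estimate (m):", tuple(round(v, 2) for v in weighted_centroid()))
```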
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
1.2
0.2
0.2
0.2
0.2
0.05
0
0
0
0
0
0
0
0
Requirements-driven Test Generation for Autonomous Vehicles with Machine Learning Components Autonomous vehicles are complex systems that are challenging to test and debug. A requirements-driven approach to the development process can decrease the resources required to design and test these systems, while simultaneously increasing the reliability. We present a testing framework that uses signal temporal logic (STL), which is a precise and unambiguous requirements language. Our framework e...
Adaptive generation of challenging scenarios for testing and evaluation of autonomous vehicles. •A novel framework for generating test cases for autonomous vehicles is proposed.•Adaptive sampling significantly reduces the number of simulations required.•Adjacency clustering identifies performance boundaries of the system.•Approach successfully applied to complex unmanned underwater vehicle missions.
Automatic Virtual Test Technology for Intelligent Driving Systems Considering Both Coverage and Efficiency Testing intelligent driving systems faces efficiency challenges because real traffic scenarios are infinite, uncontrollable, and difficult to define precisely. Based on a scenario complexity index designed to measure the test effect indirectly, a new combinatorial testing algorithm for test case generation is proposed to balance multiple objectives, including test coverage, the number of test cases, and the test effect. A joint simulation platform based on Matlab, PreScan, and Carsim is then built to construct the 3D test environment, execute test scenarios, and evaluate test results automatically and seamlessly. The proposed strategy is validated by applying it to a traffic jam pilot system. The results show that it can effectively improve the overall complexity of the designed test scenarios, which helps detect system faults faster and more easily, and the time required to conduct tests is reduced considerably through automation.
Ontology-based methods for enhancing autonomous vehicle path planning We report the results of a first implementation demonstrating the use of an ontology to support reasoning about obstacles to improve the capabilities and performance of on-board route planning for autonomous vehicles. This is part of an overall effort to evaluate the performance of ontologies in different components of an autonomous vehicle within the 4D/RCS system architecture developed at NIST. Our initial focus has been on simple roadway driving scenarios where the controlled vehicle encounters potential obstacles in its path. As reported elsewhere [C. Schlenoff, S. Balakirsky, M. Uschold, R. Provine, S. Smith, Using ontologies to aid navigation planning in autonomous vehicles, Knowledge Engineering Review 18 (3) (2004) 243–255], our approach is to develop an ontology of objects in the environment, in conjunction with rules for estimating the damage that would be incurred by collisions with different objects in different situations. Automated reasoning is used to estimate collision damage; this information is fed to the route planner to help it decide whether to plan to avoid the object. We describe the results of the first implementation that integrates the ontology, the reasoner and the planner. We describe our insights and lessons learned and discuss resulting changes to our approach.
Integrated Simulation and Formal Verification of a Simple Autonomous Vehicle. This paper presents a proof-of-concept application of an approach to system development based on the integration of formal verification and co-simulation. A simple autonomous vehicle has the task of reaching an assigned straight path and then follow it, and it can be controlled by varying its turning speed. The correctness of the proposed control law has been formalized and verified by interactive theorem proving with the Prototype Verification System. Concurrently, the system has been co-simulated using the Prototype Verification System and the MathWorks Simulink tool: The vehicle kinematics have been simulated in Simulink, whereas the controller has been modeled in the logic language of the Prototype Verification System and simulated with the interpreter for the same language available in the theorem proving environment. With this approach, co-simulation and formal verification corroborate each other, thus strengthening developers' confidence in their analysis.
A Retargetable Fault Injection Framework for Safety Validation of Autonomous Vehicles Autonomous vehicles use Electronic Control Units running complex software to improve passenger comfort and safety. To test safety of in-vehicle electronics, the ISO 26262 standard on functional safety recommends using fault injection during component and system-level design. A Fault Injection Framework (FIF) induces hard-to-trigger hardware and software faults at runtime, enabling analysis of fault propagation effects. The growing number and complexity of diverse interacting components in vehicles demands a versatile FIF at the vehicle level. In this paper, we present a novel retargetable FIF based on debugger interfaces available on many target systems. We validated our FIF in three Hardware-In-the-Loop setups for autonomous driving based on the NXP BlueBox prototyping platform. To trigger a fault injection process, we developed an interactive user interface based on Robot Operating System, which also visualized vehicle system health. Our retargetable debugger-based fault injection mechanism confirmed safety properties and identified safety shortcomings of various automotive systems.
Test Scenario Generation and Optimization Technology for Intelligent Driving Systems In this paper, we propose a new scenario generation algorithm for intelligent driving systems, called Combinatorial Testing Based on Complexity (CTBC), which builds on both the combinatorial testing (CT) method and the Test Matrix (TM) technique. To guide the generation procedure and evaluate the validity of the generated scenarios, we further propose a notion of test scenario complexity. CTBC...
Reduction of Uncertainties for Safety Assessment of Automated Driving Under Parallel Simulations Much progress has been made in the field of automated driving. However, automated driving still faces the challenge of safety validation, and conventional methods are no longer suitable for such a highly complex automation system. This motivates the method named Virtual Assessment of Automation in Field Operation (VAAFO). In this approach, the automated driving system has no access to the actuators but instead runs in parallel with the human driver. Consequently, the approach is divided into two modules: online trajectory comparison and offline safety assessment. This paper focuses on the second module, in which uncertainties in the world model are reduced and the safety of the Automated Vehicle (AV) is then evaluated. Retrospective post-processing combined with a Joint Integrated Probabilistic Data Association (JIPDA) tracker is put forward to reduce existence uncertainties, and state uncertainties are reduced by an Unscented Rauch-Tung-Striebel smoother (URTSS). Furthermore, inverse TTC and the remaining lateral distance are used to assess the safety of the AV in the corrected world model. The results demonstrate that retrospective post-processing combined with JIPDA greatly reduces existence uncertainties and that the URTSS is very useful for reducing state uncertainties. The studied case illustrates that the safety of the AV can be assessed by parallel running, and critical scenarios are found accordingly.
Accurate Self-Localization in RFID Tag Information Grids Using FIR Filtering Grid navigation spaces nested with the radio-frequency identification (RFID) tags are promising for industrial and other needs, because each tag can deliver information about a local two-dimensional or three-dimensional surrounding. The approach, however, requires high accuracy in vehicle self-localization. Otherwise, errors may lead to collisions; possibly even fatal. We propose a new extended finite impulse response (EFIR) filtering algorithm and show that it meets this need. The EFIR filter requires an optimal averaging interval, but does not involve the noise statistics which are often not well known to the engineer. It is more accurate than the extended Kalman filter (EKF) under real operation conditions and its iterative algorithm has the Kalman form. Better performance of the proposed EFIR filter is demonstrated based on extensive simulations in a comparison to EKF, which is widely used in RFID tag grids. We also show that errors in noise covariances may provoke divergence in EKF, whereas the EFIR filter remains stable and is thus more robust.
A Survey on Transfer Learning A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.
The set cover with pairs problem We consider a generalization of the set cover problem, in which elements are covered by pairs of objects, and we are required to find a minimum cost subset of objects that induces a collection of pairs covering all elements. Formally, let U be a ground set of elements and let ${\cal S}$ be a set of objects, where each object i has a non-negative cost wi. For every $\{ i, j \} \subseteq {\cal S}$, let ${\cal C}(i,j)$ be the collection of elements in U covered by the pair { i, j }. The set cover with pairs problem asks to find a subset $A \subseteq {\cal S}$ such that $\bigcup_{ \{ i, j \} \subseteq A } {\cal C}(i,j) = U$ and such that ∑i∈Awi is minimized. In addition to studying this general problem, we are also concerned with developing polynomial time approximation algorithms for interesting special cases. The problems we consider in this framework arise in the context of domination in metric spaces and separation of point sets.
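A simple greedy sketch for the problem just described: repeatedly buy the pair with the lowest extra cost per newly covered element, paying only for objects not yet bought. The instance and the greedy rule are illustrative and carry no approximation guarantee from the paper.

```python
# Toy instance: ground set U, object costs W, and the elements covered by each pair {i, j}.
# All data below are illustrative assumptions.
U = set(range(6))
W = {"a": 2.0, "b": 1.0, "c": 3.0, "d": 1.5}
COVER = {frozenset("ab"): {0, 1}, frozenset("ac"): {1, 2, 3},
         frozenset("bd"): {3, 4}, frozenset("cd"): {4, 5}, frozenset("ad"): {0, 5}}

def greedy_set_cover_with_pairs():
    bought, covered = set(), set()
    while covered != U:
        best_pair, best_ratio = None, float("inf")
        for pair, elems in COVER.items():
            new_elems = elems - covered
            if not new_elems:
                continue
            # Pay only for objects in the pair that have not been bought yet.
            extra_cost = sum(W[i] for i in pair if i not in bought)
            ratio = extra_cost / len(new_elems)
            if ratio < best_ratio:
                best_pair, best_ratio = pair, ratio
        if best_pair is None:
            raise ValueError("instance is infeasible: some element is never covered")
        bought |= best_pair
        covered |= COVER[best_pair]
    return bought, sum(W[i] for i in bought)

objects, cost = greedy_set_cover_with_pairs()
print("objects bought:", sorted(objects), "total cost:", cost)
```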
Telecommunications Power Plant Damage Assessment for Hurricane Katrina – Site Survey and Follow-Up Results This paper extends knowledge of disaster impact on the telecommunications power infrastructure by discussing the effects of Hurricane Katrina based on an on-site survey conducted in October 2005 and on public sources. It includes observations about power infrastructure damage in wire-line and wireless networks. In general, the impact on centralized network elements was more severe than on the distributed portion of the grids. The main cause of outage was lack of power due to fuel supply disruptions, flooding and security issues. This work also describes the means used to restore telecommunications services and proposes ways to improve logistics, such as coordinating portable generator set deployment among different network operators and reducing genset fuel consumption by installing permanent photovoltaic systems at sites where long electric outages are likely. One long-term solution is the use of distributed generation. It also discusses the consequences for telecom power technology and practices since the storm.
When Vehicles See Pedestrians With Phones: A Multicue Framework for Recognizing Phone-Based Activities of Pedestrians The intelligent vehicle community has devoted considerable efforts to model driver behavior, and in particular, to detect and overcome driver distraction in an effort to reduce accidents caused by driver negligence. However, as the domain increasingly shifts toward autonomous and semiautonomous solutions, the driver is no longer integral to the decision-making process, indicating a need to refocus...
Pricing-Based Channel Selection for D2D Content Sharing in Dynamic Environment In order to make device-to-device (D2D) content sharing give full play to its advantage of improving local area services, one of the important issues is to decide the channels that D2D pairs occupy. Most existing works study this issue in static environment, and ignore the guidance for D2D pairs to select the channel adaptively. In this paper, we investigate this issue in dynamic environment where...
1.11
0.11
0.11
0.1
0.1
0.1
0.1
0.05
0
0
0
0
0
0
Approximation algorithms for distance constrained vehicle routing problems We study the distance constrained vehicle routing problem (DVRP) (Laporte et al., Networks 14 (1984), 47–61, Li et al., Oper Res 40 (1992), 790–799): given a set of vertices in a metric space, a specified depot, and a distance bound D, find a minimum cardinality set of tours originating at the depot that covers all vertices, such that each tour has length at most D. This problem is NP-complete, even when the underlying metric is induced by a weighted star. Our main result is a 2-approximation algorithm for DVRP on tree metrics; we also show that no approximation factor better than 1.5 is possible unless P = NP. For the problem on general metrics, we present a $(O(\log {1 \over \varepsilon }), 1 + \varepsilon )$-bicriteria approximation algorithm: i.e., for any ε > 0, it obtains a solution violating the length bound by a 1 + ε factor while using at most $O(\log {1 \over \varepsilon })$ times the optimal number of vehicles.
Touring a sequence of polygons Given a sequence of k polygons in the plane, a start point s, and a target point, t, we seek a shortest path that starts at s, visits in order each of the polygons, and ends at t. If the polygons are disjoint and convex, we give an algorithm running in time O(kn log (n/k)), where n is the total number of vertices specifying the polygons. We also extend our results to a case in which the convex polygons are arbitrarily intersecting and the subpath between any two consecutive polygons is constrained to lie within a simply connected region; the algorithm uses O(nk2 log n) time. Our methods are simple and allow shortest path queries from s to a query point t to be answered in time O(k log n + m), where m is the combinatorial path length. We show that for nonconvex polygons this "touring polygons" problem is NP-hard.The touring polygons problem is a strict generalization of some classic problems in computational geometry, including the safari problem, the zoo-keeper problem, and the watchman route problem in a simple polygon. Our new results give an order of magnitude improvement in the running times of the safari problem and the watchman route problem: We solve the safari problem in O(n2 log n) time and the watchman route problem (through a fixed point s) in time O(n3 log n), compared with the previous time bounds of O(n3) and O(n4), respectively.
Numerical Comparison of Some Penalty-Based Constraint Handling Techniques in Genetic Algorithms We study five penalty function-based constraint handling techniques to be used with genetic algorithms in global optimization. Three of them, the method of superiority of feasible points, the method of parameter free penalties and the method of adaptive penalties have already been considered in the literature. In addition, we introduce two new modifications of these methods. We compare all the five methods numerically in 33 test problems and report and analyze the results obtained in terms of accuracy, efficiency and reliability. The method of adaptive penalties turned out to be most efficient while the method of parameter free penalties was the most reliable.
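A minimal sketch of static penalty-based constraint handling: the constraint violation is squared, scaled by a penalty coefficient, and added to the objective so that an unconstrained search can be applied directly. A mutation-only (1+1) loop stands in for the full GA, and the test problem and coefficient are assumptions.

```python
import random

# Minimize f(x, y) subject to g(x, y) <= 0 using a static penalty term.
# The problem, penalty coefficient, and (1+1)-style mutation loop are illustrative.
def f(x, y):
    return (x - 2) ** 2 + (y - 1) ** 2

def g(x, y):
    return x + y - 2.0            # feasible region: x + y <= 2

def penalized_fitness(x, y, coeff=1000.0):
    violation = max(0.0, g(x, y))
    return f(x, y) + coeff * violation ** 2

rng = random.Random(0)
best = (rng.uniform(-5, 5), rng.uniform(-5, 5))
for _ in range(20000):
    cand = (best[0] + rng.gauss(0, 0.1), best[1] + rng.gauss(0, 0.1))
    if penalized_fitness(*cand) < penalized_fitness(*best):
        best = cand

print("best point:", tuple(round(v, 3) for v in best), "constraint value:", round(g(*best), 4))
```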
Well-Solvable Special Cases of the Traveling Salesman Problem: A Survey. The traveling salesman problem (TSP) belongs to the most basic, most important, and most investigated problems in combinatorial optimization. Although it is an ${\cal NP}$-hard problem, many of its special cases can be solved efficiently in polynomial time. We survey these special cases with emphasis on the results that have been obtained during the decade 1985--1995. This survey complements an earlier survey from 1985 compiled by Gilmore, Lawler, and Shmoys [The Traveling Salesman Problem---A Guided Tour of Combinatorial Optimization, Wiley, Chichester, pp. 87--143].
Rich Vehicle Routing Problem: Survey The Vehicle Routing Problem (VRP) is a well-known research line in the optimization research community. Its different basic variants have been widely explored in the literature. Even though it has been studied for years, the research around it is still very active. The new tendency is mainly focused on applying this study case to real-life problems. Due to this trend, the Rich VRP arises: combining multiple constraints for tackling realistic problems. Nowadays, some studies have considered specific combinations of real-life constraints to define the emerging Rich VRP scopes. This work surveys the state of the art in the field, summarizing problem combinations, constraints defined, and approaches found.
Efficient Boustrophedon Multi-Robot Coverage: an algorithmic approach This paper presents algorithmic solutions for the complete coverage path planning problem using a team of mobile robots. Multiple robots decrease the time to complete the coverage, but maximal efficiency is only achieved if the number of regions covered multiple times is minimized. A set of multi-robot coverage algorithms is presented that minimize repeat coverage. The algorithms use the same planar cell-based decomposition as the Boustrophedon single robot coverage algorithm, but provide extensions to handle how robots cover a single cell, and how robots are allocated among cells. Specifically, for the coverage task our choice of multi-robot policy strongly depends on the type of communication that exists between the robots. When the robots operate under the line-of-sight communication restriction, keeping them as a team helps to minimize repeat coverage. When communication between the robots is available without any restrictions, the robots are initially distributed through space, and each one is allocated a virtually-bounded area to cover. A greedy auction mechanism is used for task/cell allocation among the robots. Experimental results from different simulated and real environments that illustrate our approach for different communication conditions are presented.
Reliable Path Planning for Drone Delivery Using a Stochastic Time-Dependent Public Transportation Network Drones have been regarded as a promising means for future delivery industry by many logistics companies. Several drone-based delivery systems have been proposed but they generally have a drawback in delivering customers locating far from warehouses. This paper proposes an alternative system based on a public transportation network. This system has the merit of enlarging the delivery range. As the public transportation network is actually a stochastic time-dependent network, we focus on the reliable drone path planning problem (RDPP). We present a stochastic model to characterize the path traversal time and develop a label setting algorithm to construct the reliable drone path. Furthermore, we consider the limited battery lifetime of the drone to determine whether a path is feasible, and we account this as a constraint in the optimization model. To accommodate the feasibility, the developed label setting algorithm is extended by adding a simple operation. The complexity of the developed algorithm is analyzed and how it works is demonstrated via a case study.
An efficient partial charging scheme using multiple mobile chargers in wireless rechargeable sensor networks The advent of mobile charging with wireless energy transfer technology has perpetuated the omnipresent wireless rechargeable sensor networks. The existing literature finds that on-demand recharging of the sensor nodes (SNs) can significantly improve the charging performance while using multiple charging vehicles (MCVs) and multi-node charging model. Most of the existing schemes ignore the heterogeneous rate of SNs’ energy consumption, partial charging of the SNs, and joint optimization of multiple network attributes and thus these schemes are deprived from the benefits of prolonging network lifetime to its maximum extent. To this end, this paper addresses all the aforementioned issues and proposes an on-demand multi-node charging scheme for the SNs following a partial charging model. The working of the proposed scheme is twofold. First, charging schedules of the MCVs are generated through optimal halting points by integrating non-dominated sorting genetic algorithm (NSGA-II) and multi-attribute decision making (MADM) approach. Then the charging time at each halting point is decided for the SNs with the help of a partial charging timer. We carry out extensive simulations on the proposed scheme and the results are compared with some existing schemes using various performance metrics. The results confirm the superiority of the proposed scheme over the existing ones.
Mobile Data Gathering with Load Balanced Clustering and Dual Data Uploading in Wireless Sensor Networks In this paper, a three-layer framework is proposed for mobile data collection in wireless sensor networks, which includes the sensor layer, cluster head layer, and mobile collector (called SenCar) layer. The framework employs distributed load balanced clustering and dual data uploading, which is referred to as LBC-DDU. The objective is to achieve good scalability, long network lifetime and low data collection latency. At the sensor layer, a distributed load balanced clustering (LBC) algorithm is proposed for sensors to self-organize themselves into clusters. In contrast to existing clustering methods, our scheme generates multiple cluster heads in each cluster to balance the work load and facilitate dual data uploading. At the cluster head layer, the inter-cluster transmission range is carefully chosen to guarantee the connectivity among the clusters. Multiple cluster heads within a cluster cooperate with each other to perform energy-saving inter-cluster communications. Through inter-cluster transmissions, cluster head information is forwarded to SenCar for its moving trajectory planning. At the mobile collector layer, SenCar is equipped with two antennas, which enables two cluster heads to simultaneously upload data to SenCar in each time by utilizing multi-user multiple-input and multiple-output (MU-MIMO) technique. The trajectory planning for SenCar is optimized to fully utilize dual data uploading capability by properly selecting polling points in each cluster. By visiting each selected polling point, SenCar can efficiently gather data from cluster heads and transport the data to the static data sink. Extensive simulations are conducted to evaluate the effectiveness of the proposed LBC-DDU scheme. The results show that when each cluster has at most two cluster heads, LBC-DDU achieves over 50 percent energy saving per node and 60 percent energy saving on cluster heads comparing with data collection through multi-hop relay to the static data sink, and 20 percent shorter data collection time compared to traditional mobile data gathering.
A survey of socially interactive robots This paper reviews “socially interactive robots”: robots for which social human–robot interaction is important. We begin by discussing the context for socially interactive robots, emphasizing the relationship to other research fields and the different forms of “social robots”. We then present a taxonomy of design methods and system components used to build socially interactive robots. Finally, we describe the impact of these robots on humans and discuss open issues. An expanded version of this paper, which contains a survey and taxonomy of current applications, is available as a technical report [T. Fong, I. Nourbakhsh, K. Dautenhahn, A survey of socially interactive robots: concepts, design and applications, Technical Report No. CMU-RI-TR-02-29, Robotics Institute, Carnegie Mellon University, 2002].
Artificial fish swarm algorithm: a survey of the state-of-the-art, hybridization, combinatorial and indicative applications The artificial fish-swarm algorithm (AFSA) is one of the best optimization methods among the swarm intelligence algorithms. The algorithm is inspired by the collective movement of fish and their various social behaviors. Based on a series of instinctive behaviors, the fish always try to maintain their colonies and accordingly demonstrate intelligent behaviors. Searching for food, migration and dealing with dangers all happen in a social form, and interactions between all fish in a group result in intelligent social behavior. The algorithm has many advantages, including high convergence speed, flexibility, fault tolerance and high accuracy. This paper is a review of the AFSA and describes the evolution of the algorithm along with all improvements, its combination with various methods, as well as its applications. There are many optimization methods that have an affinity with this method, and combining them can improve its performance. Its disadvantages include high time complexity, lack of balance between global and local search, and a failure to benefit from the experience of group members for subsequent movements.
Finite State Control of FES-Assisted Walking with Spring Brake Orthosis This paper presents finite state control (FSC) of paraplegic walking with a wheel walker using functional electrical stimulation (FES) with a spring brake orthosis (SBO). The work is a first effort towards restoring a natural-like swing phase in paraplegic gait through a new hybrid orthosis, referred to as the spring brake orthosis (SBO). This mechanism simplifies the control task and results in smooth motion and a more natural-like trajectory produced by the flexion reflex for gait in spinal-cord-injured subjects. The study is carried out with a humanoid model with a wheel walker using the Visual Nastran (Vn4D) dynamic simulation software. A stimulated muscle model of the quadriceps is developed for knee extension. Fuzzy logic control (FLC) is developed in Matlab/Simulink to regulate the muscle stimulation pulse-width required to drive the FES-assisted walking gait; the computed motion is visualised in graphic animation from Vn4D, and finite state control is used to govern the transition between all walking states. FSC is also used to control the switching of the brakes, FES and spring during the walking cycle.
Deep Multi-View Spatial-Temporal Network for Taxi Demand Prediction. Taxi demand prediction is an important building block to enabling intelligent transportation systems in a smart city. An accurate prediction model can help the city pre-allocate resources to meet travel demand and to reduce empty taxis on streets which waste energy and worsen the traffic congestion. With the increasing popularity of taxi requesting services such as Uber and Didi Chuxing (in China), we are able to collect large-scale taxi demand data continuously. How to utilize such big data to improve the demand prediction is an interesting and critical real-world problem. Traditional demand prediction methods mostly rely on time series forecasting techniques, which fail to model the complex non-linear spatial and temporal relations. Recent advances in deep learning have shown superior performance on traditionally challenging tasks such as image classification by learning the complex features and correlations from large-scale data. This breakthrough has inspired researchers to explore deep learning techniques on traffic prediction problems. However, existing methods on traffic prediction have only considered spatial relation (e.g., using CNN) or temporal relation (e.g., using LSTM) independently. We propose a Deep Multi-View Spatial-Temporal Network (DMVST-Net) framework to model both spatial and temporal relations. Specifically, our proposed model consists of three views: temporal view (modeling correlations between future demand values with near time points via LSTM), spatial view (modeling local spatial correlation via local CNN), and semantic view (modeling correlations among regions sharing similar temporal patterns). Experiments on large-scale real taxi demand data demonstrate effectiveness of our approach over state-of-the-art methods.
STNReID: Deep Convolutional Networks With Pairwise Spatial Transformer Networks for Partial Person Re-Identification Partial person re-identification (ReID) is a challenging task because only partial information of person images is available for matching target persons. Few studies, especially on deep learning, have focused on matching partial person images with holistic person images. This study presents a novel deep partial ReID framework based on pairwise spatial transformer networks (STNReID), which can be trained on existing holistic person datasets. STNReID includes a spatial transformer network (STN) module and a ReID module. The STN module samples an affined image (a semantically corresponding patch) from the holistic image to match the partial image. The ReID module extracts the features of the holistic, partial, and affined images. Competition (or confrontation) is observed between the STN module and the ReID module, and two-stage training is applied to acquire a strong STNReID for partial ReID. Experimental results show that our STNReID obtains 66.7% and 54.6% rank-1 accuracies on Partial-ReID and Partial-iLIDS datasets, respectively. These values are at par with those obtained with state-of-the-art methods.
1.05221
0.05
0.05
0.05
0.05
0.025
0.008333
0.002828
0.001293
0
0
0
0
0
An improved method for sink node deployment in wireless sensor network to big data Wireless sensor network (WSN) technology and Internet technology penetrate and extend each other. WSNs provide an effective means of sensing physical changes in objects, recognizing their state and collecting data, and have become an important source of network data for big data. Compared with traditional wireless networks, WSNs integrate sensing, processing and transmission, and are characterized by limited hardware resources, limited power supply capacity, lack of a center, self-organization, multi-hop routing, dynamic topology, a large number of nodes, and dense distribution. To improve the energy utilization of individual nodes, reduce the energy consumption of the entire WSN, and extend its life cycle, high-efficiency networking is essential in WSN applications. Networking is one of the foundations of large-scale WSNs, and the network model and node location deployment are key technologies for it. Based on the network characteristics of large-scale WSNs and the transmission capacity required for big data, a new network model is presented that combines the advantages of the Star and Mesh models. More importantly, the deployment environment of sensor nodes is spatial, and the data collected and transmitted by large-scale WSNs are very large; deploying sensor nodes in space helps ensure that the big data collected and transmitted are accurate and usable. This research proposes the space density first (SDF) algorithm, which improves the neighbor-density-first algorithm through spatial node deployment and density optimization. The SDF algorithm saves network energy and extends the life of the network. Experimental results show that large-scale WSNs built with the new networking model and the SDF algorithm can collect and transmit big data stably and reliably, saving network energy and improving the accuracy of the big data.
Research on enterprise knowledge service based on semantic reasoning and data fusion In the era of big data, the field of enterprise risk faces considerable challenges brought by massive multisource heterogeneous information sources. In view of the proliferation of multisource, heterogeneous enterprise risk information, insufficient knowledge fusion capabilities, and the low level of intelligence in risk management, this article explores the application process of enterprise knowledge service models for rapid response to risk incidents from the perspective of semantic reasoning and data fusion, and clarifies the elements of the knowledge service model in the field of risk management. Taking risk data as the basis, risk decision making as the standard, risk events as the driving force, and knowledge graph analysis methods as the engine, the risk domain knowledge service process is decomposed into three stages: pre-warning, in-event response, and post-event summary. These stages are combined with empirical knowledge of risk event handling to construct a three-level knowledge service model of risk domain knowledge acquisition, organization and application. The model introduces semantic reasoning and data fusion methods to express, organize and integrate the knowledge needs of different stages of risk events, provide enterprise managers with risk management knowledge service solutions, and provide new growth points for the innovation of interdisciplinary knowledge service theory.
Efficiency evaluation research of a regional water system based on a game cross-efficiency model To solve the problem of regional water system evaluation, this paper proposes a system efficiency evaluation method based on the game cross-efficiency model and conducts an empirical analysis. First, autopoiesis is introduced as the theoretical basis: the characteristics of the autopoietic system are combined with a regional water system, and the connotation and characteristics of the regional water system are defined. Second, based on the competitive relationship between regional water systems, the existing game cross-efficiency model is improved, and the improved cross-efficiency model is proposed to evaluate the efficiency of regional water systems. Then, the Pearl River Delta urban agglomeration is selected as the research object, and the effects of four DEA-based system evaluation methods are compared horizontally to find the optimal system efficiency evaluation method. Finally, the characteristics of the regional water system in the Pearl River Delta are systematically analysed through the evaluation results, and the present situation of the regional water system is fully explained.
Diagnosis and classification prediction model of pituitary tumor based on machine learning In order to improve the diagnosis and classification of pituitary tumors, this paper combines common machine learning and classification prediction methods to improve traditional machine learning algorithms. Moreover, it analyzes feature algorithms based on the feature extraction requirements of pituitary tumor images and compares a variety of commonly used algorithms to select a classification algorithm suitable for the proposed model. In addition, the paper implements the prediction algorithm and verifies it against real cases. Finally, based on a neural network algorithm, the paper designs and constructs the functional modules, builds the model around the practical needs of pituitary tumor diagnosis, and verifies the model's performance. The research results show that the model constructed in this paper meets the expected requirements.
Research on the improvement effect of machine learning and neural network algorithms on the prediction of learning achievement In order to improve the prediction of college students' performance, this paper builds on machine learning and neural network algorithms, improves traditional data processing algorithms, and proposes a course similarity calculation method based on cosine similarity. It also proposes an improved hybrid multi-weight algorithm to address the cold-start problem that traditional algorithms cannot solve. In addition, the paper constructs a model framework based on a neural network structure, sets up functional modules according to actual needs, and analyzes and predicts students' individual performance through student portraits. Finally, experiments are designed to analyze the effectiveness of the proposed model. The experimental data show that the model basically meets the expected requirements.
Edge computing clone node recognition system based on machine learning Edge computing is an important cornerstone for the construction of 5G networks, but with the development of Internet technology, computing nodes are extremely vulnerable to attacks, especially clone attacks, which can cause serious losses. The principle of a clone node attack is that the attacker captures legitimate nodes in the network, obtains all of their legitimate information, copies several nodes with the same ID and key information, and places these clone nodes at different locations in the network to attack edge computing devices, resulting in network paralysis. How to quickly and efficiently identify clone nodes and isolate them is therefore key to preventing clone node attacks and improving the security of edge computing. To improve the protection of edge computing and identify clone nodes more quickly and accurately, this paper applies machine-learning-based edge computing, uses the case analysis method, the literature analysis method, and other methods to collect data from the database, and uses a parallel algorithm to build a clone node recognition model. The results show that edge computing based on machine learning can greatly improve the efficiency of clone node recognition: the recognition speed is more than 30% faster than that of traditional edge computing, and the recognition accuracy reaches 0.852, about 50% higher than traditional recognition. The results show that the machine-learning-based edge computing clone node method can improve the detection success rate of clone nodes and reduce the energy consumption and transmission overhead of nodes, which is of great significance for the detection of clone nodes.
Accurate Self-Localization in RFID Tag Information Grids Using FIR Filtering Grid navigation spaces nested with the radio-frequency identification (RFID) tags are promising for industrial and other needs, because each tag can deliver information about a local two-dimensional or three-dimensional surrounding. The approach, however, requires high accuracy in vehicle self-localization. Otherwise, errors may lead to collisions; possibly even fatal. We propose a new extended finite impulse response (EFIR) filtering algorithm and show that it meets this need. The EFIR filter requires an optimal averaging interval, but does not involve the noise statistics which are often not well known to the engineer. It is more accurate than the extended Kalman filter (EKF) under real operation conditions and its iterative algorithm has the Kalman form. Better performance of the proposed EFIR filter is demonstrated based on extensive simulations in a comparison to EKF, which is widely used in RFID tag grids. We also show that errors in noise covariances may provoke divergence in EKF, whereas the EFIR filter remains stable and is thus more robust.
Evolutionary computation: comments on the history and current state Evolutionary computation has started to receive significant attention during the last decade, although the origins can be traced back to the late 1950's. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) (with links to genetic programming (GP) and classifier systems (CS)), evolution strategies (ES), and evolutionary programming (EP) by analysis and comparison of their most important constituents (i.e. representations, variation operators, reproduction, and selection mechanism). Finally, we give a brief overview on the manifold of application domains, although this necessarily must remain incomplete
Supporting social navigation on the World Wide Web This paper discusses a navigation behavior on Internet information services, in particular the World Wide Web, which is characterized by pointing out information using various communication tools. We call this behavior social navigation as it is based on communication and interaction with other users, whether through email or any other means of communication. Social navigation phenomena are quite common, although most current tools (like Web browsers or email clients) offer very little support for them. We describe why social navigation is useful and how it can be better supported in future systems. We further describe two prototype systems that, although originally not designed explicitly as tools for social navigation, provide features that are typical for social navigation systems. One of these systems, the Juggler system, is a combination of a textual virtual environment and a Web client. The other system is a prototype of a Web hotlist organizer, called Vortex. We use both systems to describe fundamental principles of social navigation systems.
Proofs of Storage from Homomorphic Identification Protocols Proofs of storage (PoS) are interactive protocols allowing a client to verify that a server faithfully stores a file. Previous work has shown that proofs of storage can be constructed from any homomorphic linear authenticator (HLA). The latter, roughly speaking, are signature/message authentication schemes where `tags' on multiple messages can be homomorphically combined to yield a `tag' on any linear combination of these messages. We provide a framework for building public-key HLAs from any identification protocol satisfying certain homomorphic properties. We then show how to turn any public-key HLA into a publicly-verifiable PoS with communication complexity independent of the file length and supporting an unbounded number of verifications. We illustrate the use of our transformations by applying them to a variant of an identification protocol by Shoup, thus obtaining the first unbounded-use PoS based on factoring (in the random oracle model).
Well-Solvable Special Cases of the Traveling Salesman Problem: A Survey. The traveling salesman problem (TSP) belongs to the most basic, most important, and most investigated problems in combinatorial optimization. Although it is an ${\cal NP}$-hard problem, many of its special cases can be solved efficiently in polynomial time. We survey these special cases with emphasis on the results that have been obtained during the decade 1985--1995. This survey complements an earlier survey from 1985 compiled by Gilmore, Lawler, and Shmoys [The Traveling Salesman Problem---A Guided Tour of Combinatorial Optimization, Wiley, Chichester, pp. 87--143].
A competitive swarm optimizer for large scale optimization. In this paper, a novel competitive swarm optimizer (CSO) for large scale optimization is proposed. The algorithm is fundamentally inspired by the particle swarm optimization but is conceptually very different. In the proposed CSO, neither the personal best position of each particle nor the global best position (or neighborhood best positions) is involved in updating the particles. Instead, a pairwise competition mechanism is introduced, where the particle that loses the competition will update its position by learning from the winner. To understand the search behavior of the proposed CSO, a theoretical proof of convergence is provided, together with empirical analysis of its exploration and exploitation abilities showing that the proposed CSO achieves a good balance between exploration and exploitation. Despite its algorithmic simplicity, our empirical results demonstrate that the proposed CSO exhibits a better overall performance than five state-of-the-art metaheuristic algorithms on a set of widely used large scale optimization problems and is able to effectively solve problems of dimensionality up to 5000.
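The pairwise competition update summarized in the abstract above is easy to prototype. The Python sketch below is only an illustration of the loser-learns-from-winner rule under assumed settings (for instance, the social factor phi and the swarm size are arbitrary choices, not the authors' recommended values).

```python
import numpy as np

def cso_step(positions, velocities, fitness, phi=0.1, rng=None):
    """One competitive-swarm-style iteration (illustrative sketch).

    Particles are paired at random; in each pair the loser (worse fitness)
    updates its velocity by learning from the winner and from the swarm
    mean position, while the winner is carried over unchanged.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = positions.shape
    mean_pos = positions.mean(axis=0)
    order = rng.permutation(n)
    new_pos, new_vel = positions.copy(), velocities.copy()
    for i in range(0, n - 1, 2):
        a, b = order[i], order[i + 1]
        winner, loser = (a, b) if fitness[a] <= fitness[b] else (b, a)
        r1, r2, r3 = rng.random((3, d))
        new_vel[loser] = (r1 * velocities[loser]
                          + r2 * (positions[winner] - positions[loser])
                          + phi * r3 * (mean_pos - positions[loser]))
        new_pos[loser] = positions[loser] + new_vel[loser]
    return new_pos, new_vel

# Usage: minimize the sphere function in 50 dimensions.
rng = np.random.default_rng(0)
X = rng.uniform(-5.0, 5.0, size=(100, 50))
V = np.zeros_like(X)
for _ in range(200):
    f = (X ** 2).sum(axis=1)
    X, V = cso_step(X, V, f, rng=rng)
print("best sphere value:", (X ** 2).sum(axis=1).min())
```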
Adaptive Fuzzy Control With Prescribed Performance for Block-Triangular-Structured Nonlinear Systems. In this paper, an adaptive fuzzy control method with prescribed performance is proposed for multi-input and multi-output block-triangular-structured nonlinear systems with immeasurable states. Fuzzy logic systems are adopted to identify the unknown nonlinear system functions. Adaptive fuzzy state observers are designed to solve the problem of unmeasured states, and a new observer-based output-feedb...
Communication-Efficient Federated Learning Over MIMO Multiple Access Channels Communication efficiency is of importance for wireless federated learning systems. In this paper, we propose a communication-efficient strategy for federated learning over multiple-input multiple-output (MIMO) multiple access channels (MACs). The proposed strategy comprises two components. When sending a locally computed gradient, each device compresses a high dimensional local gradient to multiple lower-dimensional gradient vectors using block sparsification. When receiving a superposition of the compressed local gradients via a MIMO-MAC, a parameter server (PS) performs a joint MIMO detection and the sparse local-gradient recovery. Inspired by the turbo decoding principle, our joint detection-and-recovery algorithm accurately recovers the high-dimensional local gradients by iteratively exchanging their beliefs for MIMO detection and sparse local gradient recovery outputs. We then analyze the reconstruction error of the proposed algorithm and its impact on the convergence rate of federated learning. From simulations, our gradient compression and joint detection-and-recovery methods diminish the communication cost significantly while achieving identical classification accuracy for the case without any compression.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0, 0, 0, 0, 0, 0, 0, 0
Incorporating Human Domain Knowledge in 3D LiDAR-based Semantic Segmentation. This article studies semantic segmentation using 3D LiDAR data. Popular deep learning methods applied for this task require a large number of manual annotations to train the parameters. We propose a new method that makes full use of the advantages of traditional methods and deep learning methods via incorporating human domain knowledge into the neural network model to reduce the demand for large n...
Visual Human–Computer Interactions for Intelligent Vehicles and Intelligent Transportation Systems: The State of the Art and Future Directions Research on intelligent vehicles has been popular in the past decade. To fill the gap between automatic approaches and man-machine control systems, it is indispensable to integrate visual human-computer interactions (VHCIs) into intelligent vehicle systems. In this article, we review existing studies on VHCI in intelligent vehicles from three aspects: 1) visual intelligence; 2) decision making; and 3) macro deployment. We discuss how VHCI evolves in intelligent vehicles and how it enhances the capability of intelligent vehicles. We present several simulated scenarios and cases for future intelligent transportation systems.
Automated Vehicles Sharing the Road: Surveying Detection and Localization of Pedalcyclists Automated Vehicles (AVs) must comply with traffic laws, including those requiring motorists to maintain safe distances when passing pedalcyclists. We review relevant U.S. legislation, statistics of traffic accidents in the U.S. resulting in pedalcyclists fatalities, and present “what if” scenarios for AV and pedalcyclist interactions to illustrate safety-preserving algorithms’ necessary ethical co...
Real-time Localization in Outdoor Environments using Stereo Vision and Inexpensive GPS We describe a real-time, low-cost system to localize a mobile robot in outdoor environments. Our system relies on stereo vision to robustly estimate frame-to-frame motion in real time (also known as visual odometry). The motion estimation problem is formulated efficiently in the disparity space and results in accurate and robust estimates of the motion even for a small-baseline configuration. Our system uses inertial measurements to fill in motion estimates when visual odometry fails. This incremental motion is then fused with a low-cost GPS sensor using a Kalman Filter to prevent long-term drifts. Experimental results are presented for outdoor localization in moderately sized environments (≥ 100 meters).
Environment-aware Development of Robust Vision-based Cooperative Perception Systems Autonomous vehicles need a complete and robust perception of their environment to correctly understand the surrounding traffic scene and make the right decisions. Making use of vehicle-to-vehicle (V2V) communication can improve the perception capabilities of autonomous vehicles by extending the range of their own local sensors. For the development of robust cooperative perception systems it is necessary to include varying environmental conditions in the scenarios used for validation. In this paper we present a new approach to investigate a cooperative perception pipeline within simulation under varying rain conditions. We demonstrate our approach on the example of a complete vision-based cooperative perception pipeline. Scenarios with a varying number of cooperative vehicles under different synthetically generated rain variations are used to show the influence of rain on local and cooperative perception.
MFR-CNN: Incorporating Multi-Scale Features and Global Information for Traffic Object Detection. Object detection plays an important role in intelligent transportation systems and intelligent vehicles. Although the topic of object detection has been studied for decades, it is still challenging to accurately detect objects under complex scenarios. The contributing factors for challenges include diversified object and background appearance, motion blur, adverse weather conditions, and complex i...
Efficient Ladder-Style DenseNets for Semantic Segmentation of Large Images Recent progress of deep image classification models provides great potential for improving related computer vision tasks. However, the transition to semantic segmentation is hampered by strict memory limitations of contemporary GPUs. The extent of feature map caching required by convolutional backprop poses significant challenges even for moderately sized Pascal images, while requiring careful architectural considerations when input resolution is in the megapixel range. To address these concerns, we propose a novel ladder-style DenseNet-based architecture which features high modelling power, efficient upsampling, and inherent spatial efficiency which we unlock with checkpointing. The resulting models deliver high performance and allow training at megapixel resolution on commodity hardware. The presented experimental results outperform the state-of-the-art in terms of prediction accuracy and execution speed on Cityscapes, VOC 2012, CamVid and ROB 2018 datasets. Source code at https://github.com/ivankreso/LDN.
A Survey of Convolutional Neural Networks: Analysis, Applications, and Prospects A convolutional neural network (CNN) is one of the most significant networks in the deep learning field. Since CNN made impressive achievements in many areas, including but not limited to computer vision and natural language processing, it attracted much attention from both industry and academia in the past few years. The existing reviews mainly focus on CNN’s applications in different scenarios without considering CNN from a general perspective, and some novel ideas proposed recently are not covered. In this review, we aim to provide some novel ideas and prospects in this fast-growing field. Besides, not only 2-D convolution but also 1-D and multidimensional ones are involved. First, this review introduces the history of CNN. Second, we provide an overview of various convolutions. Third, some classic and advanced CNN models are introduced; especially those key points making them reach state-of-the-art results. Fourth, through experimental analysis, we draw some conclusions and provide several rules of thumb for functions and hyperparameter selection. Fifth, the applications of 1-D, 2-D, and multidimensional convolution are covered. Finally, some open issues and promising directions for CNN are discussed as guidelines for future work.
Robust Indoor Positioning Provided by Real-Time RSSI Values in Unmodified WLAN Networks The positioning methods based on received signal strength (RSS) measurements link the RSS values to the position of the mobile station (MS) to be located. Their accuracy depends on the suitability of the propagation models used for the actual propagation conditions. In indoor wireless networks, these propagation conditions are very difficult to predict due to the unwieldy and dynamic nature of the RSS. In this paper, we present a novel method which dynamically estimates the propagation models that best fit the propagation environments, by using only RSS measurements obtained in real time. This method is based on maximizing the compatibility of the MS-to-access-point (AP) distance estimates. Once the propagation models are estimated in real time, it is possible to accurately determine the distance between the MS and each AP. By means of these distance estimates, the location of the MS can be obtained by trilateration. The proposed method, coupled with simulations and measurements in a real indoor environment, demonstrates its feasibility and suitability, since it outperforms conventional RSS-based indoor location methods without using any radio map information or a calibration stage.
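To make the distance-estimation and trilateration steps concrete, here is a generic textbook-style Python sketch: RSS is inverted through a log-distance path-loss model and the position is recovered by linear least squares. It is not the adaptive, real-time model estimation proposed in the paper, and all path-loss constants and coordinates are assumed values.

```python
import numpy as np

def rss_to_distance(rss_dbm, rss_at_1m=-40.0, path_loss_exp=3.0):
    """Invert a log-distance path-loss model: RSS(d) = RSS(1m) - 10*n*log10(d)."""
    return 10 ** ((rss_at_1m - rss_dbm) / (10 * path_loss_exp))

def trilaterate(ap_xy, dists):
    """Least-squares position from >= 3 AP coordinates and distance estimates.

    Subtracting the first circle equation from the others linearizes the
    system into A @ p = b, which is solved by least squares.
    """
    ap_xy = np.asarray(ap_xy, float)
    d = np.asarray(dists, float)
    x0, y0 = ap_xy[0]
    A = 2 * (ap_xy[1:] - ap_xy[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + (ap_xy[1:] ** 2).sum(axis=1) - x0 ** 2 - y0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Usage with three access points and example RSS readings (assumed values).
aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
rss = [-55.0, -63.0, -60.0]
print(trilaterate(aps, [rss_to_distance(r) for r in rss]))
```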
Galerkin approximations of the generalized Hamilton-Jacobi-Bellman equation In this paper we study the convergence of the Galerkin approximation method applied to the generalized Hamilton-Jacobi-Bellman (GHJB) equation over a compact set containing the origin. The GHJB equation gives the cost of an arbitrary control law and can be used to improve the performance of this control. The GHJB equation can also be used to successively approximate the Hamilton-Jacobi-Bellman equation. We state sufficient conditions that guarantee that the Galerkin approximation converges to the solution of the GHJB equation and that the resulting approximate control is stabilizing on the same region as the initial control. The method is demonstrated on a simple nonlinear system and is compared to a result obtained by using exact feedback linearization in conjunction with the LQR design method. (C) 1997 Elsevier Science Ltd. All rights reserved.
Combining Global and Local Surrogate Models to Accelerate Evolutionary Optimization In this paper, we present a novel surrogate-assisted evolutionary optimization framework for solving computationally expensive problems. The proposed framework uses computationally cheap hierarchical surrogate models constructed through online learning to replace the exact computationally expensive objective functions during evolutionary search. At the first level, the framework employs a data-parallel Gaussian process based global surrogate model to filter the evolutionary algorithm (EA) population of promising individuals. Subsequently, these potential individuals undergo a memetic search in the form of Lamarckian learning at the second level. The Lamarckian evolution involves a trust-region enabled gradient-based search strategy that employs radial basis function local surrogate models to accelerate convergence. Numerical results are presented on a series of benchmark test functions and on an aerodynamic shape design problem. The results obtained suggest that the proposed optimization framework converges to good designs on a limited computational budget. Furthermore, it is shown that the new algorithm gives significant savings in computational cost when compared to the traditional evolutionary algorithm and other surrogate assisted optimization frameworks
Interactive display robot: Projector robot with natural user interface Combining a small hand-held projector, a mobile robot, an RGB-D sensor and a pan/tilt device, the interactive display robot can move freely in indoor spaces and display on any surface. In addition, the user can manipulate the projector robot and the projection direction through a natural user interface.
Ear recognition: More than a survey. Automatic identity recognition from ear images represents an active field of research within the biometric community. The ability to capture ear images from a distance and in a covert manner makes the technology an appealing choice for surveillance and security applications as well as other application domains. Significant contributions have been made in the field over recent years, but open research problems still remain and hinder a wider (commercial) deployment of the technology. This paper presents an overview of the field of automatic ear recognition (from 2D images) and focuses specifically on the most recent, descriptor-based methods proposed in this area. Open challenges are discussed and potential research directions are outlined with the goal of providing the reader with a point of reference for issues worth examining in the future. In addition to a comprehensive review on ear recognition technology, the paper also introduces a new, fully unconstrained dataset of ear images gathered from the web and a toolbox implementing several state-of-the-art techniques for ear recognition. The dataset and toolbox are meant to address some of the open issues in the field and are made publicly available to the research community.
Inferring Latent Traffic Demand Offered To An Overloaded Link With Modeling QoS-Degradation Effect In this paper, we propose a CTRIL (Common Trend and Regression with Independent Loss) model to infer latent traffic demand in overloaded links as well as how much it is reduced due to QoS (Quality of Service) degradation. To appropriately provision link bandwidth for such overloaded links, we need to infer how much traffic will increase without QoS degradation. Because the original latent traffic demand cannot be observed, we propose a method that compares it with the traffic time series of an underloaded link, assuming that the latent traffic demands of the overloaded and underloaded links follow a common pattern and that the actualized traffic demand in the overloaded link is reduced from this common pattern by the effect of QoS degradation. To realize the method, we developed the CTRIL model on the basis of a state-space model where observed traffic is generated from a latent trend but is decreased by the QoS degradation. By applying the CTRIL model to actual HTTP (Hypertext Transfer Protocol) traffic and QoS time series data, we reveal that 1% packet loss decreases traffic demand by 12.3%, and the estimated latent traffic demand is larger than the observed one by 23.0%.
Scores: 1.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.05, 0, 0, 0, 0, 0, 0
Supporting Precise Manual-handling Task using Visuo-haptic Interaction. Precise manual handling skills are necessary to create art and to paint models. However, these skills are difficult to learn. Some research has approached this issue using mechanical devices. However, mechanical systems have high costs and limit the user's degrees of freedom. In our research, we propose a system using visuo-haptics to support accurate work without using any mechanical devices. We considered the principle that when a visuo-haptic force is generated on a user's hand in the opposite direction of a target path, the user reflexively moves her/his hand in the right direction to repel the force. Based on this idea, we created a system that can modify users' hand movement by showing a dummy hand using a mixed reality display, which supports precise manual-handling tasks. To demonstrate this, we performed experiments with a video see-through system that uses a head mounted display (HMD). The results showed that an expansion of the deviation between the target route and the actual hand position improved accuracy by up to 50%. We also saw a tendency for a larger expansion to give the greatest improvement in quality, but to slow down working speed at the same time. According to the experimental results, we find that a gain of about 2.5 gives an ideal balance between working precision and drawing speed.
A Case Study on Virtual Reality American Football Training. We present a study of American football training through the use of virtual reality. We developed proprietary training software, SIDEKIQ, designed for professional training of student athletes in an immersive virtual reality environment, where trainees experience the football gameplays created by their coaches on desktop PCs, the Oculus Rift, or even a CAVE-like facility. A user evaluation was conducted to quantify the effectiveness of the VR training over a 3-day training session. The results showed, on average, a 30% overall improvement in the scores collected from the assessment.
Immersive virtual reality to enhance the spatial awareness of students This paper presents a study on the effectiveness of virtual reality as a medium to enhance the spatial awareness and interest of students in the subject of history in rural Indian schools. Students at rural schools in the Indian village of Kandi in Telangana were provided with a Virtual Reality solution which helped them to view a remote historical place in full immersion. The historical site chosen for this project was the Golconda Fort. It is in the same state as the school, yet most of the students had not visited the place. Each student was given a 15-minute session with the Virtual Reality module. In parallel, another set of students was taught about the fort using regular teaching methods. The two sets of students were then given a written objective exam to analyze their learning. The two sets were then swapped: the students who had previously used Virtual Reality were taught with regular methods and vice versa. These students were then given the test and the results were analyzed. It was found that spatial awareness, including perception of colors, direction, and size, increased with the Virtual Reality based system. Factual data was more accurately interpreted when students were provided with the information through regular teaching methods.
Educational virtual environments: A ten-year review of empirical research (1999-2009) This study is a ten-year critical review of empirical research on the educational applications of Virtual Reality (VR). Results show that although the majority of the 53 reviewed articles refer to science and mathematics, researchers from social sciences also seem to appreciate the educational value of VR and incorporate their learning goals in Educational Virtual Environments (EVEs). Although VR supports multisensory interaction channels, visual representations predominate. Few are the studies that incorporate intuitive interactivity, indicating a research trend in this direction. Few are the settings that use immersive EVEs reporting positive results on users' attitudes and learning outcomes, indicating that there is a need for further research on the capabilities of such systems. Features of VR that contribute to learning such as first order experiences, natural semantics, size, transduction, reification, autonomy and presence are exploited according to the educational context and content. Presence seems to play an important role in learning and it is a subject needing further and intensive studies. Constructivism seems to be the theoretical model the majority of the EVEs are based on. The studies present real world, authentic tasks that enable context and content dependent knowledge construction. They also provide multiple representations of reality by representing the natural complexity of the world. Findings show that collaboration and social negotiation are not only limited to the participants of an EVE, but exist between participants and avatars, offering a new dimension to computer assisted learning. Little can yet be concluded regarding the retention of the knowledge acquired in EVEs. Longitudinal studies are necessary, and we believe that the main outcome of this study is the future research perspectives it brings to light.
A grounded investigation of game immersion The term immersion is widely used to describe games but it is not clear what immersion is or indeed if people are using the same word consistently. This paper describes work done to define immersion based on the experiences of gamers. Grounded Theory is used to construct a robust division of immersion into the three levels: engagement, engrossment and total immersion. This division alone suggests new lines for investigating immersion and transferring it into software domains other than games.
GameFlow: a model for evaluating player enjoyment in games Although player enjoyment is central to computer games, there is currently no accepted model of player enjoyment in games. There are many heuristics in the literature, based on elements such as the game interface, mechanics, gameplay, and narrative. However, there is a need to integrate these heuristics into a validated model that can be used to design, evaluate, and understand enjoyment in games. We have drawn together the various heuristics into a concise model of enjoyment in games that is structured by flow. Flow, a widely accepted model of enjoyment, includes eight elements that, we found, encompass the various heuristics from the literature. Our new model, GameFlow, consists of eight elements -- concentration, challenge, skills, control, clear goals, feedback, immersion, and social interaction. Each element includes a set of criteria for achieving enjoyment in games. An initial investigation and validation of the GameFlow model was carried out by conducting expert reviews of two real-time strategy games, one high-rating and one low-rating, using the GameFlow criteria. The result was a deeper understanding of enjoyment in real-time strategy games and the identification of the strengths and weaknesses of the GameFlow model as an evaluation tool. The GameFlow criteria were able to successfully distinguish between the high-rated and low-rated games and identify why one succeeded and the other failed. We concluded that the GameFlow model can be used in its current form to review games; further work will provide tools for designing and evaluating enjoyment in games.
On the security of public key protocols Recently the use of public key encryption to provide secure network communication has received considerable attention. Such public key systems are usually effective against passive eavesdroppers, who merely tap the lines and try to decipher the message. It has been pointed out, however, that an improperly designed protocol could be vulnerable to an active saboteur, one who may impersonate another user or alter the message being transmitted. Several models are formulated in which the security of protocols can be discussed precisely. Algorithms and characterizations that can be used to determine protocol security in these models are given.
Theory and Applications of Robust Optimization In this paper we survey the primary research, both theoretical and applied, in the area of robust optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
Revenue-optimal task scheduling and resource management for IoT batch jobs in mobile edge computing With the growing prevalence of Internet of Things (IoT) devices and technology, a burgeoning computing paradigm, mobile edge computing (MEC), has been proposed and designed to accommodate the application requirements of IoT scenarios. In this paper, we focus on the problems of dynamic task scheduling and resource management in the MEC environment, with the specific objective of achieving the optimal revenue earned by edge service providers. While the majority of task scheduling and resource management algorithms are formulated as an integer programming (IP) problem and solved in an undesirable NP-hard manner, we investigate the problem structure and identify a favorable property, namely totally unimodular constraints. The totally unimodular property further helps to design an equivalent linear programming (LP) problem that can be efficiently and elegantly solved at polynomial computational complexity. In order to evaluate our proposed approach, we conduct simulations based on a real-life IoT dataset to verify the effectiveness and efficiency of our approach.
Pors: proofs of retrievability for large files In this paper, we define and explore proofs of retrievability (PORs). A POR scheme enables an archive or back-up service (prover) to produce a concise proof that a user (verifier) can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety. A POR may be viewed as a kind of cryptographic proof of knowledge (POK), but one specially designed to handle a large file (or bitstring) F. We explore POR protocols here in which the communication costs, number of memory accesses for the prover, and storage requirements of the user (verifier) are small parameters essentially independent of the length of F. In addition to proposing new, practical POR constructions, we explore implementation considerations and optimizations that bear on previously explored, related schemes. In a POR, unlike a POK, neither the prover nor the verifier need actually have knowledge of F. PORs give rise to a new and unusual security definition whose formulation is another contribution of our work. We view PORs as an important tool for semi-trusted online archives. Existing cryptographic techniques help users ensure the privacy and integrity of files they retrieve. It is also natural, however, for users to want to verify that archives do not delete or modify files prior to retrieval. The goal of a POR is to accomplish these checks without users having to download the files themselves. A POR can also provide quality-of-service guarantees, i.e., show that a file is retrievable within a certain time bound.
A Web-Based Tool For Control Engineering Teaching In this article a new tool for control engineering teaching is presented. The tool was implemented using Java applets and is freely accessible through the Web. It allows the analysis and simulation of linear control systems and was created to complement the theoretical lectures in basic control engineering courses. The article centers not only on the description of the tool but also on the methodology for using it and its evaluation in an electrical engineering degree. Two practical problems are included in the manuscript to illustrate the use of the main functions implemented. The developed web-based tool can be accessed through the link http://www.controlweb.cyc.ull.es. (C) 2006 Wiley Periodicals, Inc.
Biologically-inspired soft exosuit. In this paper, we present the design and evaluation of a novel soft cable-driven exosuit that can apply forces to the body to assist walking. Unlike traditional exoskeletons which contain rigid framing elements, the soft exosuit is worn like clothing, yet can generate moments at the ankle and hip with magnitudes of 18% and 30% of those naturally generated by the body during walking, respectively. Our design uses geared motors to pull on Bowden cables connected to the suit near the ankle. The suit has the advantages over a traditional exoskeleton in that the wearer's joints are unconstrained by external rigid structures, and the worn part of the suit is extremely light, which minimizes the suit's unintentional interference with the body's natural biomechanics. However, a soft suit presents challenges related to actuation force transfer and control, since the body is compliant and cannot support large pressures comfortably. We discuss the design of the suit and actuation system, including principles by which soft suits can transfer force to the body effectively and the biological inspiration for the design. For a soft exosuit, an important design parameter is the combined effective stiffness of the suit and its interface to the wearer. We characterize the exosuit's effective stiffness, and present preliminary results from it generating assistive torques to a subject during walking. We envision such an exosuit having broad applicability for assisting healthy individuals as well as those with muscle weakness.
Scalable and Privacy-Preserving Data Sharing Based on Blockchain. With the development of network technology and cloud computing, data sharing is becoming increasingly popular, and many scholars have conducted in-depth research to promote its development. As the scale of data sharing expands, its privacy protection has become a hot issue in research. Moreover, in data sharing, the data is usually maintained by multiple parties, which brings new challenges for protecting the privacy of these multi-party data. In this paper, we propose a trusted data sharing scheme using blockchain. We use blockchain to prevent the shared data from being tampered with, and use the Paillier cryptosystem to realize the confidentiality of the shared data. In the proposed scheme, the shared data can be traded, and the transaction information is protected by using the (p, t)-threshold Paillier cryptosystem. We conduct experiments in cloud storage scenarios and the experimental results demonstrate the efficiency and effectiveness of the proposed scheme.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores: 1.202898, 0.202898, 0.202898, 0.051201, 0.000536, 0.000179, 0, 0, 0, 0, 0, 0, 0, 0
A Functional Data Analysis Approach to Traffic Volume Forecasting. Traffic volume forecasts are used by many transportation analysis and management systems to better characterize and react to fluctuating traffic patterns. Most current forecasting methods do not take advantage of the underlying functional characteristics of the time series to make predictions. This paper presents a methodology that uses functional principal components analysis to create high-quali...
Forecasting holiday daily tourist flow based on seasonal support vector regression with adaptive genetic algorithm. • The model of support vector regression with adaptive genetic algorithm and the seasonal mechanism is proposed. • Parameters and the seasonal adjustment should be carefully selected. • We focus on the latest and representative holiday daily data in China. • Two experiments are used to prove the effect of the model. • The AGA-SSVR is superior to AGA-SVR and BPNN.
Regression conformal prediction with random forests Regression conformal prediction produces prediction intervals that are valid, i.e., the probability of excluding the correct target value is bounded by a predefined confidence level. The most important criterion when comparing conformal regressors is efficiency; the prediction intervals should be as tight (informative) as possible. In this study, the use of random forests as the underlying model for regression conformal prediction is investigated and compared to existing state-of-the-art techniques, which are based on neural networks and k-nearest neighbors. In addition to their robust predictive performance, random forests allow for determining the size of the prediction intervals by using out-of-bag estimates instead of requiring a separate calibration set. An extensive empirical investigation, using 33 publicly available data sets, was undertaken to compare the use of random forests to existing state-of-the-art conformal predictors. The results show that the suggested approach, on almost all confidence levels and using both standard and normalized nonconformity functions, produced significantly more efficient conformal predictors than the existing alternatives.
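For readers who want to see the out-of-bag idea in code, the sketch below uses scikit-learn's random forest and its out-of-bag predictions to calibrate a symmetric prediction interval. It is a simplified illustration with a plain absolute-error nonconformity score on synthetic data, not the exact procedure or normalized nonconformity functions evaluated in the study.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic data standing in for the benchmark data sets used in the study.
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

rf = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=0)
rf.fit(X, y)

# Nonconformity scores from out-of-bag predictions: no separate
# calibration set is required, which is the practical advantage noted above.
alpha = np.abs(y - rf.oob_prediction_)

confidence = 0.9                    # target coverage of the intervals
q = np.quantile(alpha, confidence)  # calibrated half-width

# Prediction intervals for new points: point prediction +/- calibrated width.
X_new = X[:5]
pred = rf.predict(X_new)
for p in pred:
    print(f"{p:9.2f}  [{p - q:9.2f}, {p + q:9.2f}]")
```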
Learning to Predict Bus Arrival Time From Heterogeneous Measurements via Recurrent Neural Network Bus arrival time prediction intends to improve the level of the services provided by transportation agencies. Intuitively, many stochastic factors affect the predictability of the arrival time, e.g., weather and local events. Moreover, the arrival time prediction for a current station is closely correlated with that of multiple passed stations. Motivated by the observations above, this paper propo...
Hybrid Spatio-Temporal Graph Convolutional Network: Improving Traffic Prediction with Navigation Data Traffic forecasting has recently attracted increasing interest due to the popularity of online navigation services, ridesharing and smart city projects. Owing to the non-stationary nature of road traffic, forecasting accuracy is fundamentally limited by the lack of contextual information. To address this issue, we propose the Hybrid Spatio-Temporal Graph Convolutional Network (H-STGCN), which is able to "deduce" future travel time by exploiting the data of upcoming traffic volume. Specifically, we propose an algorithm to acquire the upcoming traffic volume from an online navigation engine. Taking advantage of the piecewise-linear flow-density relationship, a novel transformer structure converts the upcoming volume into its equivalent in travel time. We combine this signal with the commonly-utilized travel-time signal, and then apply graph convolution to capture the spatial dependency. Particularly, we construct a compound adjacency matrix which reflects the innate traffic proximity. We conduct extensive experiments on real-world datasets. The results show that H-STGCN remarkably outperforms state-of-the-art methods in various metrics, especially for the prediction of non-recurring congestion.
Learning Interpretable Deep State Space Model for Probabilistic Time Series Forecasting. Probabilistic time series forecasting involves estimating the distribution of future based on its history, which is essential for risk management in downstream decision-making. We propose a deep state space model for probabilistic time series forecasting whereby the non-linear emission model and transition model are parameterized by networks and the dependency is modeled by recurrent neural nets. We take the automatic relevance determination (ARD) view and devise a network to exploit the exogenous variables in addition to time series. In particular, our ARD network can incorporate the uncertainty of the exogenous variables and eventually helps identify useful exogenous variables and suppress those irrelevant for forecasting. The distribution of multi-step ahead forecasts are approximated by Monte Carlo simulation. We show in experiments that our model produces accurate and sharp probabilistic forecasts. The estimated uncertainty of our forecasting also realistically increases over time, in a spontaneous manner.
Transfer Knowledge between Cities The rapid urbanization has motivated extensive research on urban computing. It is critical for urban computing tasks to unlock the power of the diversity of data modalities generated by different sources in urban spaces, such as vehicles and humans. However, we are more likely to encounter the label scarcity problem and the data insufficiency problem when solving an urban computing task in a city where services and infrastructures are not ready or just built. In this paper, we propose a FLexible multimOdal tRAnsfer Learning (FLORAL) method to transfer knowledge from a city where there exist sufficient multimodal data and labels, to this kind of cities to fully alleviate the two problems. FLORAL learns semantically related dictionaries for multiple modalities from a source domain, and simultaneously transfers the dictionaries and labelled instances from the source into a target domain. We evaluate the proposed method with a case study of air quality prediction.
Space-time modeling of traffic flow. This paper discusses the application of space-time autoregressive integrated moving average (STARIMA) methodology for representing traffic flow patterns. Traffic flow data are in the form of spatial time series and are collected at specific locations at constant intervals of time. Important spatial characteristics of the space-time process are incorporated in the STARIMA model through the use of weighting matrices estimated on the basis of the distances among the various locations where data are collected. These matrices distinguish the space-time approach from the vector autoregressive moving average (VARMA) methodology and enable the model builders to control the number of the parameters that have to be estimated. The proposed models can be used for short-term forecasting of space-time stationary traffic-flow processes and for assessing the impact of traffic-flow changes on other parts of the network. The three-stage iterative space-time model building procedure is illustrated using 7.5min average traffic flow data for a set of 25 loop-detectors located at roads that direct to the centre of the city of Athens, Greece. Data for two months with different traffic-flow characteristics are modelled in order to determine the stability of the parameter estimation.
Model-based periodic event-triggered control for linear systems Periodic event-triggered control (PETC) is a control strategy that combines ideas from conventional periodic sampled-data control and event-triggered control. By communicating periodically sampled sensor and controller data only when needed to guarantee stability or performance properties, PETC is capable of reducing the number of transmissions significantly, while still retaining a satisfactory closed-loop behavior. In this paper, we will study observer-based controllers for linear systems and propose advanced event-triggering mechanisms (ETMs) that will reduce communication in both the sensor-to-controller channels and the controller-to-actuator channels. By exploiting model-based computations, the new classes of ETMs will outperform existing ETMs in the literature. To model and analyze the proposed classes of ETMs, we present two frameworks based on perturbed linear and piecewise linear systems, leading to conditions for global exponential stability and L2-gain performance of the resulting closed-loop systems in terms of linear matrix inequalities. The proposed analysis frameworks can be used to make tradeoffs between the network utilization on the one hand and the performance in terms of L2-gains on the other. In addition, we will show that the closed-loop performance realized by an observer-based controller, implemented in a conventional periodic time-triggered fashion, can be recovered arbitrarily closely by a PETC implementation. This provides a justification for emulation-based design. Next to centralized model-based ETMs, we will also provide a decentralized setup suitable for large-scale systems, where sensors and actuators are physically distributed over a wide area. The improvements realized by the proposed model-based ETMs will be demonstrated using numerical examples.
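As a toy illustration of the periodic event-triggering idea (not the model-based mechanisms proposed in the paper), the following sketch simulates a discretized linear system in which the periodically sampled state is sent to the controller only when a simple relative-threshold condition is violated; all numerical values, the gain, and the triggering rule are assumptions made for the example.

```python
import numpy as np

# Discretized double integrator with state feedback (all values assumed).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = np.array([[-1.0, -1.8]])   # hand-picked stabilizing gain
sigma = 0.05                   # relative triggering threshold

x = np.array([[1.0], [0.0]])
x_held = x.copy()              # last transmitted (held) state
transmissions = 0
for k in range(300):
    # Periodic check: transmit only when the held state has drifted too far.
    if np.linalg.norm(x - x_held) ** 2 > sigma * np.linalg.norm(x) ** 2:
        x_held = x.copy()
        transmissions += 1
    u = K @ x_held             # controller acts on the held state
    x = A @ x + B @ u
print(f"{transmissions} transmissions out of 300 samples; final ||x|| = {np.linalg.norm(x):.4f}")
```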
Affective social robots For human-robot interaction to proceed in a smooth, natural manner, robots must adhere to human social norms. One such human convention is the use of expressive moods and emotions as an integral part of social interaction. Such expressions are used to convey messages such as ''I'm happy to see you'' or ''I want to be comforted,'' and people's long-term relationships depend heavily on shared emotional experiences. Thus, we have developed an affective model for social robots. This generative model attempts to create natural, human-like affect and includes distinctions between immediate emotional responses, the overall mood of the robot, and long-term attitudes toward each visitor to the robot, with a focus on developing long-term human-robot relationships. This paper presents the general affect model as well as particular details of our implementation of the model on one robot, the Roboceptionist. In addition, we present findings from two studies that demonstrate the model's potential.
Rich Models for Steganalysis of Digital Images We describe a novel general strategy for building steganography detectors for digital images. The process starts with assembling a rich model of the noise component as a union of many diverse submodels formed by joint distributions of neighboring samples from quantized image noise residuals obtained using linear and nonlinear high-pass filters. In contrast to previous approaches, we make the model assembly a part of the training process driven by samples drawn from the corresponding cover- and stego-sources. Ensemble classifiers are used to assemble the model as well as the final steganalyzer due to their low computational complexity and ability to efficiently work with high-dimensional feature spaces and large training sets. We demonstrate the proposed framework on three steganographic algorithms designed to hide messages in images represented in the spatial domain: HUGO, edge-adaptive algorithm by Luo, and optimally coded ternary ±1 embedding. For each algorithm, we apply a simple submodel-selection technique to increase the detection accuracy per model dimensionality and show how the detection saturates with increasing complexity of the rich model. By observing the differences between how different submodels engage in detection, an interesting interplay between the embedding and detection is revealed. Steganalysis built around rich image models combined with ensemble classifiers is a promising direction towards automatizing steganalysis for a wide spectrum of steganographic schemes.
Heterogeneous ensemble for feature drifts in data streams The nature of data streams requires classification algorithms to be real-time, efficient, and able to cope with high-dimensional data that are continuously arriving. It is a known fact that in high-dimensional datasets, not all features are critical for training a classifier. To improve the performance of data stream classification, we propose an algorithm called HEFT-Stream (Heterogeneous Ensemble with Feature drifT for Data Streams) that incorporates feature selection into a heterogeneous ensemble to adapt to different types of concept drifts. As an example of the proposed framework, we first modify the FCBF [13] algorithm so that it dynamically updates the relevant feature subsets for data streams. Next, a heterogeneous ensemble is constructed based on different online classifiers, including Online Naive Bayes and CVFDT [5]. Empirical results show that our ensemble classifier outperforms state-of-the-art ensemble classifiers (AWE [15] and OnlineBagging [21]) in terms of accuracy, speed, and scalability. The success of HEFT-Stream opens new research directions in understanding the relationship between feature selection techniques and ensemble learning to achieve better classification performance.
Orientation-aware RFID tracking with centimeter-level accuracy. RFID tracking has attracted a lot of research effort in recent years. Most of the existing approaches, however, adopt an orientation-oblivious model. When tracking a target whose orientation changes, those approaches suffer from serious accuracy degradation. In order to achieve target tracking with pervasive applicability in various scenarios, we in this paper propose OmniTrack, an orientation-aware RFID tracking approach. Our study discovers the linear relationship between the tag orientation and the phase change of the backscattered signals. Based on this finding, we propose an orientation-aware phase model to explicitly quantify the respective impact of the read-tag distance and the tag's orientation. OmniTrack addresses practical challenges in tracking the location and orientation of a mobile tag. Our experimental results demonstrate that OmniTrack achieves centimeter-level location accuracy and has significant advantages in tracking targets with varying orientations, compared to the state-of-the-art approaches.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores: 1.22, 0.22, 0.22, 0.22, 0.22, 0.22, 0.073333, 0.003542, 0, 0, 0, 0, 0, 0
Multi-Graph Convolutional-Recurrent Neural Network (MGC-RNN) for Short-Term Forecasting of Transit Passenger Flow Short-term forecasting of passenger flow is critical for transit management and crowd regulation. Spatial dependencies, temporal dependencies, inter-station correlations driven by other latent factors, and exogenous factors bring challenges to the short-term forecasts of passenger flow of urban rail transit networks. An innovative deep learning approach, the Multi-Graph Convolutional-Recurrent Neural Network (MGC-RNN), is proposed to forecast passenger flow in urban rail transit systems and to incorporate these complex factors. We propose to use multiple graphs to encode the spatial and other heterogeneous inter-station correlations. The temporal dynamics of the inter-station correlations are also modeled via the proposed multi-graph convolutional-recurrent neural network structure. Inflow and outflow of all stations can be collectively predicted multiple time steps ahead via a sequence-to-sequence (seq2seq) architecture. The proposed method is applied to the short-term forecasts of passenger flow in Shenzhen Metro, China. The experimental results show that MGC-RNN outperforms the benchmark algorithms in terms of forecasting accuracy. Besides, it is found that the inter-station correlations driven by network distance, network structure, and recent flow patterns are significant factors for passenger flow forecasting. Moreover, the LSTM-encoder-decoder architecture can capture the temporal dependencies well. In general, the proposed framework could provide multiple views of passenger flow dynamics for fine-grained prediction and exhibits a possibility for multi-source heterogeneous data fusion in spatiotemporal forecast tasks.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
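To make the mechanics concrete, here is a compact, illustrative Python implementation of sentence-level BLEU with modified (clipped) n-gram precision and a brevity penalty against a single reference; real evaluations are corpus-level, use multiple references, and typically apply smoothing.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Illustrative sentence-level BLEU against a single reference."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts, ref_counts = ngrams(cand, n), ngrams(ref, n)
        total = sum(cand_counts.values())
        # "Modified" precision: each candidate n-gram count is clipped by
        # its count in the reference.
        clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        if total == 0 or clipped == 0:
            return 0.0
        log_precisions.append(math.log(clipped / total))
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

print(bleu("the quick brown fox jumps over the lazy dog",
           "the quick brown fox jumped over the lazy dog"))
```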
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
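For experimentation, modern deep-learning libraries expose the bidirectional wrapper directly. The Keras sketch below (assuming TensorFlow is installed; all shapes, sizes, and the random data are arbitrary) mirrors the core idea of running one recurrent pass forward and one backward in time and combining the two hidden states at every step.

```python
import numpy as np
import tensorflow as tf

# Toy sequence-labelling setup: 20-step sequences of 8-dimensional frames,
# each frame classified into one of 5 classes.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20, 8)),
    # The wrapper runs one LSTM forward and one backward in time and
    # concatenates their hidden states at every time step.
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(5, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

x = np.random.randn(64, 20, 8).astype("float32")
y = np.random.randint(0, 5, size=(64, 20))
model.fit(x, y, epochs=1, verbose=0)
print(model.output_shape)  # (None, 20, 5)
```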
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended to Bob, and non-repudiation of receipt evidences destined to Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with trusted third party (TTP). For the later ones we discuss the evolution of the TTP's involvement and, between others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probabilistically. Because there is no crossover rate or mutation rate to select, the proposed improved GA can be more easily applied to a problem than the conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional GA and other methods.
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Sensor Planning for a Symbiotic UAV and UGV System for Precision Agriculture We study two new informative path planning problems that are motivated by the use of aerial and ground robots in precision agriculture. The first problem, termed sampling traveling salesperson problem with neighborhoods (SAMPLINGTSPN), is motivated by scenarios in which unmanned ground vehicles (UGVs) are used to obtain time-consuming soil measurements. The input in SAMPLINGTSPN is a set of possib...
Touring a sequence of polygons Given a sequence of k polygons in the plane, a start point s, and a target point, t, we seek a shortest path that starts at s, visits in order each of the polygons, and ends at t. If the polygons are disjoint and convex, we give an algorithm running in time O(kn log (n/k)), where n is the total number of vertices specifying the polygons. We also extend our results to a case in which the convex polygons are arbitrarily intersecting and the subpath between any two consecutive polygons is constrained to lie within a simply connected region; the algorithm uses O(nk2 log n) time. Our methods are simple and allow shortest path queries from s to a query point t to be answered in time O(k log n + m), where m is the combinatorial path length. We show that for nonconvex polygons this "touring polygons" problem is NP-hard.The touring polygons problem is a strict generalization of some classic problems in computational geometry, including the safari problem, the zoo-keeper problem, and the watchman route problem in a simple polygon. Our new results give an order of magnitude improvement in the running times of the safari problem and the watchman route problem: We solve the safari problem in O(n2 log n) time and the watchman route problem (through a fixed point s) in time O(n3 log n), compared with the previous time bounds of O(n3) and O(n4), respectively.
Numerical Comparison of Some Penalty-Based Constraint Handling Techniques in Genetic Algorithms We study five penalty function-based constraint handling techniques to be used with genetic algorithms in global optimization. Three of them, the method of superiority of feasible points, the method of parameter free penalties and the method of adaptive penalties have already been considered in the literature. In addition, we introduce two new modifications of these methods. We compare all the five methods numerically in 33 test problems and report and analyze the results obtained in terms of accuracy, efficiency and reliability. The method of adaptive penalties turned out to be most efficient while the method of parameter free penalties was the most reliable.
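The penalty-based constraint handling surveyed above reduces a constrained problem to an unconstrained fitness that any GA can optimize. Below is a minimal, hedged sketch: the quadratic penalty form and the fixed multiplier are generic textbook choices, not necessarily the exact variants compared in the study.

```python
import numpy as np

def penalized_fitness(f, constraints, x, r_penalty=1e3):
    """Static-penalty evaluation: objective plus a quadratic penalty on the
    total violation of inequality constraints g_i(x) <= 0. The quadratic form
    and the fixed multiplier r_penalty are illustrative assumptions."""
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return f(x) + r_penalty * violation

# Example: minimize x0^2 + x1^2 subject to x0 + x1 >= 1, i.e. 1 - x0 - x1 <= 0.
obj = lambda x: x[0] ** 2 + x[1] ** 2
constraints = [lambda x: 1.0 - x[0] - x[1]]
print(penalized_fitness(obj, constraints, np.array([0.2, 0.3])))  # infeasible -> heavily penalized
print(penalized_fitness(obj, constraints, np.array([0.6, 0.6])))  # feasible  -> plain objective
```

An adaptive variant along the lines discussed in the survey would update the multiplier across generations based on how feasible the current population is.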
Well-Solvable Special Cases of the Traveling Salesman Problem: A Survey. The traveling salesman problem (TSP) belongs to the most basic, most important, and most investigated problems in combinatorial optimization. Although it is an ${\cal NP}$-hard problem, many of its special cases can be solved efficiently in polynomial time. We survey these special cases with emphasis on the results that have been obtained during the decade 1985--1995. This survey complements an earlier survey from 1985 compiled by Gilmore, Lawler, and Shmoys [The Traveling Salesman Problem---A Guided Tour of Combinatorial Optimization, Wiley, Chichester, pp. 87--143].
Rich Vehicle Routing Problem: Survey The Vehicle Routing Problem (VRP) is a well-known research line in the optimization research community. Its different basic variants have been widely explored in the literature. Even though it has been studied for years, the research around it is still very active. The new tendency is mainly focused on applying this study case to real-life problems. Due to this trend, the Rich VRP arises: combining multiple constraints for tackling realistic problems. Nowadays, some studies have considered specific combinations of real-life constraints to define the emerging Rich VRP scopes. This work surveys the state of the art in the field, summarizing problem combinations, constraints defined, and approaches found.
A new approach to solving the multiple traveling salesperson problem using genetic algorithms The multiple traveling salesperson problem (MTSP) involves scheduling m>1 salespersons to visit a set of n>m locations so that each location is visited exactly once while minimizing the total (or maximum) distance traveled by the salespersons. The MTSP is similar to the notoriously difficult traveling salesperson problem (TSP) with the added complication that each location may be visited by any one of the salespersons. Previous studies investigated solving the MTSP with genetic algorithms (GAs) using standard TSP chromosomes and operators. This paper proposes a new GA chromosome and related operators for the MTSP and compares the theoretical properties and computational performance of the proposed technique to previous work. Computational testing shows the new approach results in a smaller search space and, in many cases, produces better solutions than previous techniques.
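The chromosome idea is often described in follow-up literature as a two-part encoding: a permutation of the cities plus a vector stating how many of them each salesperson serves. The sketch below only decodes and scores such a chromosome; the encoding details and the toy instance are assumptions for illustration, not the paper's exact operators.

```python
import numpy as np

def decode_two_part(perm, counts, depot, dist):
    """Split the city permutation into one tour per salesperson.
    perm   : permutation of city indices (excluding the depot)
    counts : number of cities assigned to each salesperson (sums to len(perm))
    depot  : index of the shared start/end location
    dist   : n x n distance matrix
    Returns the list of tours and the total distance travelled."""
    tours, start, total = [], 0, 0.0
    for c in counts:
        tour = [depot] + list(perm[start:start + c]) + [depot]
        total += sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        tours.append(tour)
        start += c
    return tours, total

# Toy example: 5 cities (indices 1..5) around depot 0, two salespersons.
pts = np.array([[0, 0], [1, 0], [2, 1], [0, 2], [-1, 1], [1, 2]], dtype=float)
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
tours, total = decode_two_part(perm=[1, 2, 5, 3, 4], counts=[3, 2], depot=0, dist=dist)
print(tours, round(total, 3))
```

A GA built on this decoding would apply permutation operators to the first part and repair or re-sample the second part so the counts still sum to the number of cities.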
Approximation algorithms for distance constrained vehicle routing problems We study the distance constrained vehicle routing problem (DVRP) (Laporte et al., Networks 14 (1984), 47–61, Li et al., Oper Res 40 (1992), 790–799): given a set of vertices in a metric space, a specified depot, and a distance bound D, find a minimum cardinality set of tours originating at the depot that covers all vertices, such that each tour has length at most D. This problem is NP-complete, even when the underlying metric is induced by a weighted star. Our main result is a 2-approximation algorithm for DVRP on tree metrics; we also show that no approximation factor better than 1.5 is possible unless P = NP. For the problem on general metrics, we present a $(O(\log \frac{1}{\varepsilon}), 1 + \varepsilon)$-bicriteria approximation algorithm: i.e., for any ε > 0, it obtains a solution violating the length bound by a 1 + ε factor while using at most $O(\log \frac{1}{\varepsilon})$ times the optimal number of vehicles. © 2012 Wiley Periodicals, Inc. NETWORKS, 2012.
The Optimal Charging in Wireless Rechargeable Sensor Networks Recent years have witnessed several new promising technologies to power wireless sensor networks, which motivate some key topics to be revisited. By integrating sensing and computation capabilities into traditional RFID tags, the Wireless Identification and Sensing Platform (WISP) is an open-source platform acting as a pioneering experimental platform of wireless rechargeable sensor networks. Different from traditional tags, an RFID-based wireless rechargeable sensor node needs to charge its onboard energy storage above a threshold in order to power its sensing, computation and communication components. Consequently, such charging delay imposes a unique design challenge for deploying wireless rechargeable sensor networks. In this paper, we tackle this problem by planning the optimal movement strategy of the mobile RFID reader, such that the time to charge all nodes in the network above their energy threshold is minimized. We first propose an optimal solution using the linear programming method. To further reduce the computational complexity, we then introduce a heuristic solution with a provable approximation ratio of (1 + ε)/(1 − ε) by discretizing the charging power on a two-dimensional space. Through extensive evaluations, we demonstrate that our design outperforms the set-cover-based design by an average of 24.7% while the computational complexity is O((N/ε)^2). Finally, we consider two practical issues in system implementation and provide guidelines for parameter setting.
On Multi-Access Edge Computing: A Survey of the Emerging 5G Network Edge Cloud Architecture and Orchestration. Multi-access edge computing (MEC) is an emerging ecosystem, which aims at converging telecommunication and IT services, providing a cloud computing platform at the edge of the radio access network. MEC offers storage and computational resources at the edge, reducing latency for mobile end users and utilizing more efficiently the mobile backhaul and core networks. This paper introduces a survey on ...
An effective implementation of the Lin–Kernighan traveling salesman heuristic This paper describes an implementation of the Lin–Kernighan heuristic, one of the most successful methods for generating optimal or near-optimal solutions for the symmetric traveling salesman problem (TSP). Computational tests show that the implementation is highly effective. It has found optimal solutions for all solved problem instances we have been able to obtain, including a 13,509-city problem (the largest non-trivial problem instance solved to optimality today).
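Lin–Kernighan performs variable-depth sequential edge exchanges and is far too involved to reproduce here. As a hedged illustration of the same edge-exchange family, the sketch below implements plain 2-opt, its fixed-depth special case, on a random Euclidean instance.

```python
import numpy as np

def tour_length(tour, dist):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly reverse a segment whenever doing so shortens the tour.
    This is the depth-2 special case of the variable-depth moves used by
    Lin-Kernighan, shown only to illustrate the edge-exchange idea."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[(j + 1) % len(tour)]
                # Replacing edges (a,b) and (c,d) by (a,c) and (b,d) = reversing tour[i..j].
                if dist[a, c] + dist[b, d] < dist[a, b] + dist[c, d] - 1e-12:
                    tour[i:j + 1] = tour[i:j + 1][::-1]
                    improved = True
    return tour

rng = np.random.default_rng(0)
pts = rng.random((30, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
tour = list(range(30))
print(tour_length(tour, dist), tour_length(two_opt(tour, dist), dist))
```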
Surrogate-assisted evolutionary computation: Recent advances and future challenges Surrogate-assisted, or meta-model based evolutionary computation uses efficient computational models, often known as surrogates or meta-models, for approximating the fitness function in evolutionary algorithms. Research on surrogate-assisted evolutionary computation began over a decade ago and has received considerably increasing interest in recent years. Very interestingly, surrogate-assisted evolutionary computation has found successful applications not only in solving computationally expensive single- or multi-objective optimization problems, but also in addressing dynamic optimization problems, constrained optimization problems and multi-modal optimization problems. This paper provides a concise overview of the history and recent developments in surrogate-assisted evolutionary computation and suggests a few future trends in this research area.
A novel data hiding for color images based on pixel value difference and modulus function This paper proposes a novel data hiding method using pixel-value difference and modulus function for color images, with a large embedding capacity (hiding at least 810,757 bits in a 512 × 512 host image) and high visual quality of the cover image. The proposed method fully takes into account the correlation of the R, G and B planes of a color image. The amount of information embedded in the R plane and the B plane is determined by the difference between the corresponding pixel value in the G plane and the median of the G pixel values in each pixel block. Furthermore, two sophisticated pixel value adjustment processes are provided to maintain the division consistency and to solve underflow and overflow problems. Most importantly, the secret data can be completely extracted, as established by a mathematical proof.
Modeling taxi driver anticipatory behavior. As part of a wider behavioral agent-based model that simulates taxi drivers' dynamic passenger-finding behavior under uncertainty, we present a model of strategic behavior of taxi drivers in anticipation of substantial time varying demand at locations such as airports and major train stations. The model assumes that, considering a particular decision horizon, a taxi driver decides to transfer to such a destination based on a reward function. The dynamic uncertainty of demand is captured by a time dependent pick-up probability, which is a cumulative distribution function of waiting time. The model allows for information learning by which taxi drivers update their beliefs from past experiences. A simulation on a real road network, applied to test the model, indicates that the formulated model dynamically improves passenger-finding strategies at the airport. Taxi drivers learn when to transfer to the airport in anticipation of the time-varying demand at the airport to minimize their waiting time.
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education is highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets, to compensate for robot’s limitations, however, this study applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. Robot’s behavioral style (social or neutral) hardly differed overall, however, seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
Scores: 1.11, 0.1, 0.1, 0.1, 0.1, 0.05, 0.025, 0.000476, 0, 0, 0, 0, 0, 0
VibViz: Organizing, visualizing and navigating vibration libraries With haptics now common in consumer devices, diversity in tactile perception and aesthetic preferences confound haptic designers. End-user customization out of example sets is an obvious solution, but haptic collections are notoriously difficult to explore. This work addresses the provision of easy and highly navigable access to large, diverse sets of vibrotactile stimuli, on the premise that multiple access pathways facilitate discovery and engagement. We propose and examine five disparate organization schemes (taxonomies), describe how we created a 120-item library with diverse functional and affective characteristics, and present VibViz, an interactive tool for end-user library navigation and our own investigation of how different taxonomies can assist navigation. An exploratory user study with and of VibViz suggests that most users gravitate towards an organization based on sensory and emotional terms, but also exposes rich variations in their navigation patterns and insights into the basis of effective haptic library navigation.
Picbreeder: evolving pictures collaboratively online Picbreeder is an online service that allows users to collaboratively evolve images. Like in other Interactive Evolutionary Computation (IEC) programs, users evolve images on Picbreeder by selecting ones that appeal to them to produce a new generation. However, Picbreeder also offers an online community in which to share these images, and most importantly, the ability to continue evolving others' images. Through this process of branching from other images, and through continually increasing image complexity made possible by the NeuroEvolution of Augmenting Topologies (NEAT) algorithm, evolved images proliferate unlike in any other current IEC systems. Participation requires no explicit talent from the users, thereby opening Picbreeder to the entire Internet community. This paper details how Picbreeder encourages innovation, featuring images that were collaboratively evolved.
On the effect of mirroring in the IPOP active CMA-ES on the noiseless BBOB testbed Mirrored mutations and active covariance matrix adaptation are two recent ideas to improve the well-known covariance matrix adaptation evolution strategy (CMA-ES)---a state-of-the-art algorithm for numerical optimization. It turns out that both mechanisms can be implemented simultaneously. In this paper, we investigate the impact of mirrored mutations on the so-called IPOP active CMA-ES. We find that additional mirrored mutations improve the IPOP active CMA-ES statistically significantly, but by only a small margin, on several functions while never a statistically significant performance decline can be observed. Furthermore, experiments on different function instances with some algorithm parameters and stopping criteria changed reveal essentially the same results.
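Stripping away the covariance adaptation, restarts, and negative (active) updates, the mirrored-mutation idea is simply to evaluate each Gaussian step together with its reflection about the mean. The toy sketch below shows only that sampling scheme on a sphere function; it is not an implementation of IPOP active CMA-ES, and the selection and step-size rules used here are assumptions for illustration.

```python
import numpy as np

def mirrored_samples(mean, sigma, n_pairs, rng):
    """Draw n_pairs Gaussian steps and return each step together with its
    mirror image about the mean, i.e. mean + sigma*z and mean - sigma*z."""
    z = rng.standard_normal((n_pairs, mean.size))
    return np.vstack([mean + sigma * z, mean - sigma * z])

sphere = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(1)
mean, sigma = np.ones(5), 0.5
for _ in range(50):                       # crude elitist selection, no covariance adaptation
    cands = mirrored_samples(mean, sigma, n_pairs=4, rng=rng)
    mean = min(cands, key=sphere)
    sigma *= 0.95                         # fixed decay instead of CMA step-size control
print(sphere(mean))
```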
Importance of Matching Physical Friction, Hardness, and Texture in Creating Realistic Haptic Virtual Surfaces. Interacting with physical objects through a tool elicits tactile and kinesthetic sensations that comprise your haptic impression of the object. These cues, however, are largely missing from interactions with virtual objects, yielding an unrealistic user experience. This article evaluates the realism of virtual surfaces rendered using haptic models constructed from data recorded during interactions with real surfaces. The models include three components: surface friction, tapping transients, and texture vibrations. We render the virtual surfaces on a SensAble Phantom Omni haptic interface augmented with a Tactile Labs Haptuator for vibration output. We conducted a human-subject study to assess the realism of these virtual surfaces and the importance of the three model components. Following a perceptual discrepancy paradigm, subjects compared each of 15 real surfaces to a full rendering of the same surface plus versions missing each model component. The realism improvement achieved by including friction, tapping, or texture in the rendering was found to directly relate to the intensity of the surface's property in that domain (slipperiness, hardness, or roughness). A subsequent analysis of forces and vibrations measured during interactions with virtual surfaces indicated that the Omni's inherent mechanical properties corrupted the user's haptic experience, decreasing realism of the virtual surface.
Mulsemedia: State of the Art, Perspectives, and Challenges Mulsemedia—multiple sensorial media—captures a wide variety of research efforts and applications. This article presents a historic perspective on mulsemedia work and reviews current developments in the area. These take place across the traditional multimedia spectrum—from virtual reality applications to computer games—as well as efforts in the arts, gastronomy, and therapy, to mention a few. We also describe standardization efforts, via the MPEG-V standard, and identify future developments and exciting challenges the community needs to overcome.
DeepMasterPrints: Generating MasterPrints for Dictionary Attacks via Latent Variable Evolution Recent research has demonstrated the vulnerability of fingerprint recognition systems to dictionary attacks based on MasterPrints. MasterPrints are real or synthetic fingerprints that can fortuitously match with a large number of fingerprints thereby undermining the security afforded by fingerprint systems. Previous work by Roy et al. generated synthetic MasterPrints at the feature-level. In this work we generate complete image-level MasterPrints known as DeepMasterPrints, whose attack accuracy is found to be much superior to that of previous methods. The proposed method, referred to as Latent Variable Evolution, is based on training a Generative Adversarial Network on a set of real fingerprint images. Stochastic search in the form of the Covariance Matrix Adaptation Evolution Strategy is then used to search for latent input variables to the generator network that can maximize the number of impostor matches as assessed by a fingerprint recognizer. Experiments convey the efficacy of the proposed method in generating DeepMasterPrints. The underlying method is likely to have broad applications in fingerprint security as well as fingerprint synthesis.
Image quality assessment: from error visibility to structural similarity. Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
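The structural similarity computation itself is compact. Below is a single-window (global) SSIM sketch with the usual default constants; the published index applies the same formula inside a sliding Gaussian window and averages the local values, which this sketch omits.

```python
import numpy as np

def global_ssim(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM between two grayscale images of equal shape.
    Uses the standard luminance/contrast/structure combination; a faithful
    implementation would apply this within a sliding Gaussian window."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.normal(0, 20, size=img.shape), 0, 255)
print(global_ssim(img, img), global_ssim(img, noisy))   # identical images score 1.0
```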
A survey of socially interactive robots This paper reviews “socially interactive robots”: robots for which social human–robot interaction is important. We begin by discussing the context for socially interactive robots, emphasizing the relationship to other research fields and the different forms of “social robots”. We then present a taxonomy of design methods and system components used to build socially interactive robots. Finally, we describe the impact of these robots on humans and discuss open issues. An expanded version of this paper, which contains a survey and taxonomy of current applications, is available as a technical report [T. Fong, I. Nourbakhsh, K. Dautenhahn, A survey of socially interactive robots: concepts, design and applications, Technical Report No. CMU-RI-TR-02-29, Robotics Institute, Carnegie Mellon University, 2002].
IoT-U: Cellular Internet-of-Things Networks Over Unlicensed Spectrum. In this paper, we consider an uplink cellular Internet-of-Things (IoT) network, where a cellular user (CU) can serve as the mobile data aggregator for a cluster of IoT devices. To be specific, the IoT devices can either transmit the sensory data to the base station (BS) directly by cellular communications, or first aggregate the data to a CU through machine-to-machine (M2M) communications before t...
The contourlet transform: an efficient directional multiresolution image representation. The limitations of commonly used separable extensions of one-dimensional transforms, such as the Fourier and wavelet transforms, in capturing the geometry of image edges are well known. In this paper, we pursue a "true" two-dimensional transform that can capture the intrinsic geometrical structure that is key in visual information. The main challenge in exploring geometry in images comes from the discrete nature of the data. Thus, unlike other approaches, such as curvelets, that first develop a transform in the continuous domain and then discretize for sampled data, our approach starts with a discrete-domain construction and then studies its convergence to an expansion in the continuous domain. Specifically, we construct a discrete-domain multiresolution and multidirection expansion using nonseparable filter banks, in much the same way that wavelets were derived from filter banks. This construction results in a flexible multiresolution, local, and directional image expansion using contour segments, and, thus, it is named the contourlet transform. The discrete contourlet transform has a fast iterated filter bank algorithm that requires an order N operations for N-pixel images. Furthermore, we establish a precise link between the developed filter bank and the associated continuous-domain contourlet expansion via a directional multiresolution analysis framework. We show that with parabolic scaling and sufficient directional vanishing moments, contourlets achieve the optimal approximation rate for piecewise smooth functions with discontinuities along twice continuously differentiable curves. Finally, we show some numerical experiments demonstrating the potential of contourlets in several image processing applications. Index Terms-Contourlets, contours, filter banks, geometric image processing, multidirection, multiresolution, sparse representation, wavelets.
Fast identification of the missing tags in a large RFID system. RFID (radio-frequency identification) is an emerging technology with extensive applications such as transportation and logistics, object tracking, and inventory management. How to quickly identify the missing RFID tags and thus their associated objects is a practically important problem in many large-scale RFID systems. This paper presents three novel methods to quickly identify the missing tags in a large-scale RFID system of thousands of tags. Our protocols can reduce the time for identifying all the missing tags by up to 75% in comparison to the state of art.
Finite-approximation-error-based discrete-time iterative adaptive dynamic programming. In this paper, a new iterative adaptive dynamic programming (ADP) algorithm is developed to solve optimal control problems for infinite horizon discrete-time nonlinear systems with finite approximation errors. First, a new generalized value iteration algorithm of ADP is developed to make the iterative performance index function converge to the solution of the Hamilton-Jacobi-Bellman equation. The ...
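As a hedged illustration of the value-iteration backbone behind such ADP schemes, the sketch below runs V_{i+1}(x) = min_u [U(x,u) + V_i(f(x,u))] for a toy scalar nonlinear system, with grid interpolation standing in for the (error-prone) function approximator. The toy dynamics, cost, grids, and state clipping are assumptions; the paper's neural-network implementation and error-bound analysis are not reproduced.

```python
import numpy as np

# Toy discrete-time nonlinear system and quadratic utility (illustrative choices).
f = lambda x, u: 0.8 * x + 0.2 * x ** 3 + u        # system dynamics
U = lambda x, u: x ** 2 + u ** 2                   # stage cost

xs = np.linspace(-1.0, 1.0, 201)                   # state grid
us = np.linspace(-1.0, 1.0, 41)                    # control grid
V = np.zeros_like(xs)                              # V_0 = 0, as in value iteration

for _ in range(200):                               # V_{i+1}(x) = min_u [ U(x,u) + V_i(f(x,u)) ]
    nxt = np.clip(f(xs[:, None], us[None, :]), xs[0], xs[-1])   # clip to grid bounds
    Q = U(xs[:, None], us[None, :]) + np.interp(nxt, xs, V)     # interpolated cost-to-go
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print(V[np.argmin(np.abs(xs - 0.5))])              # approximate optimal cost-to-go from x = 0.5
```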
Robust Sparse Linear Discriminant Analysis Linear discriminant analysis (LDA) is a very popular supervised feature extraction method and has been extended to different variants. However, classical LDA has the following problems: 1) The obtained discriminant projection does not have good interpretability for features. 2) LDA is sensitive to noise. 3) LDA is sensitive to the selection of the number of projection directions. In this paper, a novel feature extraction method called robust sparse linear discriminant analysis (RSLDA) is proposed to solve the above problems. Specifically, RSLDA adaptively selects the most discriminative features for discriminant analysis by introducing the l2,1 norm. An orthogonal matrix and a sparse matrix are also simultaneously introduced to guarantee that the extracted features can hold the main energy of the original data and enhance the robustness to noise, and thus RSLDA has the potential to perform better than other discriminant methods. Extensive experiments on six databases demonstrate that the proposed method achieves competitive performance compared with other state-of-the-art feature extraction methods. Moreover, the proposed method is robust to noisy data.
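The l2,1 regularizer that induces the row-sparsity (feature selection) mentioned above is simple to compute: it is the sum of the Euclidean norms of the rows of the projection matrix. A minimal sketch of the norm alone follows, not of the full RSLDA optimization.

```python
import numpy as np

def l21_norm(W):
    """||W||_{2,1} = sum_i ||w_i||_2, the sum of the l2 norms of the rows.
    Penalizing it drives entire rows of W to zero, which is what lets the
    projection matrix select or discard whole features."""
    return float(np.sum(np.linalg.norm(W, axis=1)))

W = np.array([[3.0, 4.0],     # row norm 5
              [0.0, 0.0],     # zero row contributes nothing
              [1.0, 0.0]])    # row norm 1
print(l21_norm(W))            # 6.0
```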
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores: 1.11, 0.1, 0.1, 0.1, 0.06, 0.016667, 0, 0, 0, 0, 0, 0, 0, 0
Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. Deep neural networks (DNNs) have demonstrated state-of-the-art results on many pattern recognition tasks, especially vision classification problems. Understanding the inner workings of such computational brains is both fascinating basic science that is interesting in its own right-similar to why we study the human brain-and will enable researchers to further improve DNNs. One path to understanding how a neural network functions internally is to study what each of its neurons has learned to detect. One such method is called activation maximization (AM), which synthesizes an input (e.g. an image) that highly activates a neuron. Here we dramatically improve the qualitative state of the art of activation maximization by harnessing a powerful, learned prior: a deep generator network (DGN). The algorithm (1) generates qualitatively state-of-the-art synthetic images that look almost real, (2) reveals the features learned by each neuron in an interpretable way, (3) generalizes well to new datasets and somewhat well to different network architectures without requiring the prior to be relearned, and (4) can be considered as a high-quality generative method (in this case, by generating novel, creative, interesting, recognizable images).
An evaluation of direct attacks using fake fingers generated from ISO templates This work reports a vulnerability evaluation of a highly competitive ISO matcher to direct attacks carried out with fake fingers generated from ISO templates. Experiments are carried out on a fingerprint database acquired in a real-life scenario and show that the evaluated system is highly vulnerable to the proposed attack scheme, granting access in over 75% of the attempts (for a high-security operating point). Thus, the study disproves the popular belief of minutiae templates non-reversibility and raises a key vulnerability issue in the use of non-encrypted standard templates. (This article is an extended version of Galbally et al., 2008, which was awarded with the IBM Best Student Paper Award in the track of Biometrics at ICPR 2008).
Fingerprint Matching Using Feature Space Correlation We present a novel fingerprint alignment and matching scheme that utilizes ridge feature maps to represent, align and match fingerprint images. The technique described here obviates the need for extracting minutiae points or the core point to either align or match fingerprint images. The proposed scheme examines the ridge strength (in local neighborhoods of the fingerprint image) at various orientations, using a set of 8 Gabor filters, whose spatial frequencies correspond to the average inter-ridge spacing in fingerprints. A standard deviation map corresponding to the variation in local pixel intensities in each of the 8 filtered images, is generated. The standard deviation map is sampled at regular intervals in both the horizontal and vertical directions, to construct the ridge feature map. The ridge feature map provides a compact fixed-length representation for a fingerprint image. When a query print is presented to the system, the standard deviation map of the query image and the ridge feature map of the template are correlated, in order to determine the translation offsets necessary to align them. Based on the translation offsets, a matching score is generated by computing the Euclidean distance between the aligned feature maps. Feature extraction and matching takes ~ 1 second in a Pentium III, 800 MHz processor. Combining the matching score generated by the proposed technique with that obtained from a minutiae-based matcher results in an overall improvement in performance of a fingerprint matching system.
Are GANs Created Equal? A Large-Scale Study. Generative adversarial networks (GAN) are a powerful subclass of generative models. Despite a very rich research activity leading to numerous interesting GAN algorithms, it is still very hard to assess which algorithm(s) perform better than others. We conduct a neutral, multi-faceted large-scale empirical study on state-of-the-art models and evaluation measures. We find that most models can reach similar scores with enough hyperparameter optimization and random restarts. This suggests that improvements can arise from a higher computational budget and tuning more than fundamental algorithmic changes. To overcome some limitations of the current metrics, we also propose several data sets on which precision and recall can be computed. Our experimental results suggest that future GAN research should be based on more systematic and objective evaluation procedures. Finally, we did not find evidence that any of the tested algorithms consistently outperforms the non-saturating GAN introduced by Goodfellow et al. (2014).
Evolutionary Fuzzy Systems for Explainable Artificial Intelligence: Why, When, What for, and Where to? Evolutionary fuzzy systems are one of the greatest advances within the area of computational intelligence. They consist of evolutionary algorithms applied to the design of fuzzy systems. Thanks to this hybridization, superb abilities are provided to fuzzy modeling in many different data science scenarios. This contribution is intended to comprise a position paper developing a comprehensive analysi...
NIPS 2016 Tutorial: Generative Adversarial Networks. This report summarizes the tutorial presented by the author at NIPS 2016 on generative adversarial networks (GANs). The tutorial describes: (1) Why generative modeling is a topic worth studying, (2) how generative models work, and how GANs compare to other generative models, (3) the details of how GANs work, (4) research frontiers in GANs, and (5) state-of-the-art image models that combine GANs with other methods. Finally, the tutorial contains three exercises for readers to complete, and the solutions to these exercises.
The Unreasonable Effectiveness of Deep Features as a Perceptual Metric While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions, and fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on ImageNet classification has been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new dataset of human perceptual similarity judgments. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins on our dataset. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
Predicting Visual Features from Text for Image and Video Caption Retrieval. This paper strives to find amidst a set of sentences the one best describing the content of a given image or video. Different from existing works, which rely on a joint subspace for their image and video caption retrieval, we propose to do so in a visual space exclusively. Apart from this conceptual novelty, we contribute Word2VisualVec , a deep neural network architecture that learns to predict a...
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features—using the recently popular terminology of neural networks with ’attention’ mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.
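For readers who want to try a detector from this family without re-implementing the RPN, recent torchvision releases ship a pretrained Faster R-CNN. The snippet below assumes torchvision >= 0.13 (for the weights= argument) and a local image file named street.jpg; both are assumptions, and this is the library's packaged model rather than the original authors' code.

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Pretrained two-stage detector (RPN + Fast R-CNN head) with COCO weights.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

img = read_image("street.jpg").float() / 255.0      # CHW tensor scaled to [0, 1]
with torch.no_grad():
    out = model([img])[0]                            # the model takes a list of images

keep = out["scores"] > 0.8                           # keep confident detections only
print(out["boxes"][keep], out["labels"][keep], out["scores"][keep])
```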
Dyme: Dynamic Microservice Scheduling in Edge Computing Enabled IoT In recent years, the rapid development of mobile edge computing (MEC) provides an efficient execution platform at the edge for Internet-of-Things (IoT) applications. Nevertheless, the MEC also provides optimal resources to different microservices, however, underlying network conditions and infrastructures inherently affect the execution process in MEC. Therefore, in the presence of varying network conditions, it is necessary to optimally execute the available task of end users while maximizing the energy efficiency in edge platform and we also need to provide fair Quality-of-Service (QoS). On the other hand, it is necessary to schedule the microservices dynamically to minimize the total network delay and network price. Thus, in this article, unlike most of the existing works, we propose a dynamic microservice scheduling scheme for MEC. We design the microservice scheduling framework mathematically and also discuss the computational complexity of the scheduling algorithm. Extensive simulation results show that the microservice scheduling framework significantly improves the performance metrics in terms of total network delay, average price, satisfaction level, energy consumption rate (ECR), failure rate, and network throughput over other existing baselines.
Analysis of Software Aging in a Web Server Several recent studies have reported and examined the phenomenon that long-running software systems show an increasing failure rate and/or a progressive degradation of their performance. Causes of this phenomenon, which has been referred to as "software aging", are the accumulation of internal error conditions, and the depletion of operating system resources. A proactive technique called "software r...
Orthogonal moments based on exponent functions: Exponent-Fourier moments. In this paper, we propose a new set of orthogonal moments based on exponent functions, named Exponent-Fourier moments (EFMs), which are suitable for image analysis and rotation-invariant pattern recognition. Compared with Zernike polynomials of the same degree, the new radial functions have more zeros, and these zeros are evenly distributed; this property gives EFMs a strong ability to describe images. Unlike Zernike moments, the computation kernel of EFMs is extremely simple. Theoretical and experimental results show that Exponent-Fourier moments perform very well in terms of image reconstruction capability and invariant recognition accuracy in noise-free, noisy and smooth distortion conditions. The Exponent-Fourier moments can be thought of as generalized orthogonal complex moments.
A Hierarchical Latent Structure for Variational Conversation Modeling. Variational autoencoders (VAE) combined with hierarchical RNNs have emerged as a powerful framework for conversation modeling. However, they suffer from the notorious degeneration problem, where the decoders learn to ignore latent variables and reduce to vanilla RNNs. We empirically show that this degeneracy occurs mostly due to two reasons. First, the expressive power of hierarchical RNN decoders is often high enough to model the data using only its decoding distributions without relying on the latent variables. Second, the conditional VAE structure whose generation process is conditioned on a context, makes the range of training targets very sparse; that is, the RNN decoders can easily overfit to the training data ignoring the latent variables. To solve the degeneration problem, we propose a novel model named Variational Hierarchical Conversation RNNs (VHCR), involving two key ideas of (1) using a hierarchical structure of latent variables, and (2) exploiting an utterance drop regularization. With evaluations on two datasets of Cornell Movie Dialog and Ubuntu Dialog Corpus, we show that our VHCR successfully utilizes latent variables and outperforms state-of-the-art models for conversation generation. Moreover, it can perform several new utterance control tasks, thanks to its hierarchical latent structure.
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education is highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets, to compensate for robot’s limitations, however, this study applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. Robot’s behavioral style (social or neutral) hardly differed overall, however, seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
Scores: 1.073531, 0.068889, 0.068889, 0.068889, 0.035852, 0.022963, 0.008444, 0.002963, 0.000273, 0, 0, 0, 0, 0
Power assist method for HAL-3 using EMG-based feedback controller We have developed the exoskeletal robotics suite HAL (Hybrid Assistive Leg), which is integrated with the human body and provides suitable assist power to the lower limbs of people with gait disorder. This study proposes a method of assist motion and assist torque to realize power assist corresponding to the operator's intention. In the method of assist motion, we adopted Phase Sequence control, which generates a series of assist motions by transiting between some simple basic motions called Phases. We used the feedback controller to adjust the assist torque so as to maintain the myoelectric signals generated while performing power-assisted walking. The experimental results showed effective power assist according to the operator's intention when using these control methods.
Wearable soft sensing suit for human gait measurement Wearable robots based on soft materials will augment mobility and performance of the host without restricting natural kinematics. Such wearable robots will need soft sensors to monitor the movement of the wearer and robot outside the lab. Until now wearable soft sensors have not demonstrated significant mechanical robustness nor been systematically characterized for human motion studies of walking and running. Here, we present the design and systematic characterization of a soft sensing suit for monitoring hip, knee, and ankle sagittal plane joint angles. We used hyper-elastic strain sensors based on microchannels of liquid metal embedded within elastomer, but refined their design with the use of discretized stiffness gradients to improve mechanical durability. We found that these robust sensors could stretch up to 396% of their original lengths, would restrict the wearer by less than 0.17% of any given joint's torque, had gauge factor sensitivities of greater than 2.2, and exhibited less than 2% change in electromechanical specifications through 1500 cycles of loading-unloading. We also evaluated the accuracy and variability of the soft sensing suit by comparing it with joint angle data obtained through optical motion capture. The sensing suit had root mean square (RMS) errors of less than 5° for a walking speed of 0.89 m/s and reached a maximum RMS error of 15° for a running speed of 2.7 m/s. Despite the deviation of absolute measure, the relative repeatability of the sensing suit's joint angle measurements were statistically equivalent to that of optical motion capture at all speeds. We anticipate that wearable soft sensing will also have applications beyond wearable robotics, such as in medical diagnostics and in human-computer interaction.
Gravity-Balancing Leg Orthosis and Its Performance Evaluation In this paper, we propose a device to assist persons with hemiparesis to walk by reducing or eliminating the effects of gravity. The design of the device includes the following features: 1) it is passive, i.e., it does not include motors or actuators, but is only composed of links and springs; 2) it is safe and has a simple patient-machine interface to accommodate variability in geometry and inertia of the subjects. A number of methods have been proposed in the literature to gravity-balance a machine. Here, we use a hybrid method to achieve gravity balancing of a human leg over its range of motion. In the hybrid method, a mechanism is used to first locate the center of mass of the human limb and the orthosis. Springs are then added so that the system is gravity-balanced in every configuration. For a quantitative evaluation of the performance of the device, electromyographic (EMG) data of the key muscles involved in the motion of the leg were collected and analyzed. Further experiments involving leg-raising and walking tasks were performed, where data from encoders and force-torque sensors were used to compute joint torques. These experiments were performed on five healthy subjects and a stroke patient. The results showed that the EMG activity from the rectus femoris and hamstring muscles with the device was reduced by 75% during static hip and knee flexion, respectively. For leg-raising tasks, the average torque for static positioning was reduced by 66.8% at the hip joint and 47.3% at the knee joint; however, if we include the transient portion of the leg-raising task, the average torque at the hip was reduced by 61.3%, while the average torque at the knee increased by 2.7%. In the walking experiment, there was a positive impact on the range of movement at the hip and knee joints, especially for the stroke patient: the range of movement increased by 45% at the hip joint and by 85% at the knee joint. We believe that this orthosis can potentially be used to design rehabilitation protocols for patients with stroke.
Effects of robotic knee exoskeleton on human energy expenditure. A number of studies discuss the design and control of various exoskeleton mechanisms, yet relatively few address the effect on the energy expenditure of the user. In this paper, we discuss the effect of a performance augmenting exoskeleton on the metabolic cost of an able-bodied user/pilot during periodic squatting. We investigated whether an exoskeleton device will significantly reduce the metabolic cost and what is the influence of the chosen device control strategy. By measuring oxygen consumption, minute ventilation, heart rate, blood oxygenation, and muscle EMG during 5-min squatting series, at one squat every 2 s, we show the effects of using a prototype robotic knee exoskeleton under three different noninvasive control approaches: gravity compensation approach, position-based approach, and a novel oscillator-based approach. The latter proposes a novel control that ensures synchronization of the device and the user. Statistically significant decrease in physiological responses can be observed when using the robotic knee exoskeleton under gravity compensation and oscillator-based control. On the other hand, the effects of position-based control were not significant in all parameters although all approaches significantly reduced the energy expenditure during squatting.
Development of an orthosis for walking assistance using pneumatic artificial muscle: a quantitative assessment of the effect of assistance. In recent years, there is an increase in the number of people that require support during walking as a result of a decrease in the leg muscle strength accompanying aging. An important index for evaluating walking ability is step length. A key cause for a decrease in step length is the loss of muscle strength in the legs. Many researchers have designed and developed orthoses for walking assistance. In this study, we advanced the design of an orthosis for walking assistance that assists the forward swing of the leg to increase step length. We employed a pneumatic artificial muscle as the actuator so that flexible assistance with low rigidity can be achieved. To evaluate the performance of the system, we measured the effect of assistance quantitatively. In this study, we constructed a prototype of the orthosis and measure EMG and step length on fitting it to a healthy subject so as to determine the effect of assistance, noting the increase in the obtained step length. Although there was an increase in EMG stemming from the need to maintain body balance during the stance phase, we observed that the EMG of the sartorius muscle, which helps swing the leg forward, decreased, and the strength of the semitendinosus muscle, which restrains the leg against over-assistance, did not increase but decreased. Our experiments showed that the assistance force provided by the developed orthosis is not adequate for the intended task, and the development of a mechanism that provides appropriate assistance is required in the future.
A Physiologist'S Perspective On Robotic Exoskeletons For Human Locomotion Technological advances in robotic hardware and software have enabled powered exoskeletons to move from science fiction to the real world. The objective of this article is to emphasize two main points for future research. First, the design of future devices could be improved by exploiting biomechanical principles of animal locomotion. Two goals in exoskeleton research could particularly benefit from additional physiological perspective: (i) reduction in the metabolic energy expenditure of the user while wearing the device, and (ii) minimization of the power requirements for actuating the exoskeleton. Second, a reciprocal potential exists for robotic exoskeletons to advance our understanding of human locomotor physiology. Experimental data from humans walking and running with robotic exoskeletons could provide important insight into the metabolic cost of locomotion that is impossible to gain with other methods. Given the mutual benefits of collaboration, it is imperative that engineers and physiologists work together in future studies on robotic exoskeletons for human locomotion.
Finite State Control of FES-Assisted Walking with Spring Brake Orthosis This paper presents finite state control (FSC) of paraplegic walking with a wheel walker using functional electrical stimulation (FES) with spring brake orthosis (SBO). The work is a first effort towards restoring a natural-like swing phase in paraplegic gait through a new hybrid orthosis, referred to as spring brake orthosis (SBO). This mechanism simplifies the control task and results in smooth motion and a more natural-like trajectory produced by the flexion reflex for gait in spinal cord injured subjects. The study is carried out with a model of a humanoid with a wheel walker using the Visual Nastran (Vn4D) dynamic simulation software. A stimulated muscle model of the quadriceps is developed for knee extension. Fuzzy logic control (FLC) is developed in Matlab/Simulink to regulate the muscle stimulation pulse-width required to drive FES-assisted walking gait, the computed motion is visualised in graphic animation from Vn4D, and finite state control is used to control the transition between all walking states.
Adaptive Model-Based Myoelectric Control for a Soft Wearable Arm Exosuit: A New Generation of Wearable Robot Control Despite advances in mechatronic design, the widespread adoption of wearable robots for supporting human mobility has been hampered by 1) ergonomic limitations in rigid exoskeletal structures and 2) the lack of human-machine interfaces (HMIs) capable of sensing musculoskeletal states and translating them into robot-control commands. We have developed a framework that combines, for the first time, a model-based HMI with a soft wearable arm exosuit that has the potential to address key limitations in current HMIs and wearable robots. The proposed framework was tested on six healthy subjects who performed elbow rotations across different joint velocities and lifting weights. The results showed that the model-controlled exosuit operated synchronously with biological muscle contraction. Remarkably, the exosuit dynamically modulated mechanical assistance across all investigated loads, thereby displaying adaptive behavior.
Improvement on Nonquadratic Stabilization of Discrete-Time Takagi–Sugeno Fuzzy Systems: Multiple-Parameterization Approach This paper presents relaxed nonquadratic stabilization conditions for discrete-time Takagi-Sugeno (T-S) fuzzy systems. To do this, we propose a new fuzzy controller and Lyapunov function by generalizing the nonparallel distributed compensation (non-PDC) control law and the nonquadratic Lyapunov function, respectively. By exploiting Pólya's theorem and algebraic properties of homogeneous polynomials of normalized fuzzy weighting functions, an infinite family of sufficient conditions for asymptotic stabilizability is derived. These conditions are formulated as linear matrix inequalities (LMIs) and, hence, are numerically tractable via convex programming techniques. Finally, an example is given to illustrate the advantages of the proposed method.
The Whale Optimization Algorithm. The Whale Optimization Algorithm (WOA), inspired by humpback whales, is proposed. The WOA algorithm is benchmarked on 29 well-known test functions. The results on the unimodal functions show the superior exploitation of WOA. The exploration ability of WOA is confirmed by the results on multimodal functions. The results on structural design problems confirm the performance of WOA in practice. This paper proposes a novel nature-inspired meta-heuristic optimization algorithm, called the Whale Optimization Algorithm (WOA), which mimics the social behavior of humpback whales. The algorithm is inspired by their bubble-net hunting strategy. WOA is tested on 29 mathematical optimization problems and 6 structural design problems. Optimization results prove that the WOA algorithm is very competitive compared to state-of-the-art meta-heuristic algorithms as well as conventional methods. The source code of the WOA algorithm is publicly available at http://www.alimirjalili.com/WOA.html
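The abstract above describes WOA's bubble-net mechanics only at a high level. As a rough illustration, the sketch below implements the commonly published WOA update rules (shrinking encircling, spiral update, and random-search exploration) for minimizing a test function; the objective, population size, and iteration budget are illustrative choices, not taken from the paper.

```python
import numpy as np

def woa_minimize(f, dim, n_whales=20, iters=200, lb=-10.0, ub=10.0, b=1.0, seed=0):
    """Minimal Whale Optimization Algorithm sketch (encircling, spiral update,
    random exploration). Returns the best position and its objective value."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_whales, dim))
    fitness = np.apply_along_axis(f, 1, X)
    best = X[np.argmin(fitness)].copy()

    for t in range(iters):
        a = 2.0 * (1.0 - t / iters)            # a decreases linearly from 2 to 0
        for i in range(n_whales):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2.0 * a * r1 - a, 2.0 * r2
            if rng.random() < 0.5:              # encircling / exploration branch
                if np.all(np.abs(A) < 1.0):     # exploit: move toward the best whale
                    D = np.abs(C * best - X[i])
                    X[i] = best - A * D
                else:                           # explore: move toward a random whale
                    rand = X[rng.integers(n_whales)]
                    D = np.abs(C * rand - X[i])
                    X[i] = rand - A * D
            else:                               # spiral bubble-net update
                l = rng.uniform(-1.0, 1.0, dim)
                D = np.abs(best - X[i])
                X[i] = D * np.exp(b * l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
        fitness = np.apply_along_axis(f, 1, X)
        if fitness.min() < f(best):
            best = X[np.argmin(fitness)].copy()
    return best, f(best)

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))    # illustrative unimodal objective
    x_best, f_best = woa_minimize(sphere, dim=5)
    print(f_best)
```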
Controlled sink mobility for prolonging wireless sensor networks lifetime This paper demonstrates the advantages of using controlled mobility in wireless sensor networks (WSNs) for increasing their lifetime, i.e., the period of time the network is able to provide its intended functionalities. More specifically, for WSNs that comprise a large number of statically placed sensor nodes transmitting data to a collection point (the sink), we show that by controlling the sink movements we can obtain remarkable lifetime improvements. In order to determine sink movements, we first define a Mixed Integer Linear Programming (MILP) analytical model whose solution determines those sink routes that maximize network lifetime. Our contribution expands further by defining the first heuristics for controlled sink movements that are fully distributed and localized. Our Greedy Maximum Residual Energy (GMRE) heuristic moves the sink from its current location to a new site as if drawn toward the area where nodes have the highest residual energy. We also introduce a simple distributed mobility scheme (Random Movement or RM) according to which the sink moves uncontrolled and randomly throughout the network. The different mobility schemes are compared through extensive ns2-based simulations in networks with different nodes deployment, data routing protocols, and constraints on the sink movements. In all considered scenarios, we observe that moving the sink always increases network lifetime. In particular, our experiments show that controlling the mobility of the sink leads to remarkable improvements, which are as high as sixfold compared to having the sink statically (and optimally) placed, and as high as twofold compared to uncontrolled mobility.
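As a toy illustration of the Greedy Maximum Residual Energy idea described above (the paper's heuristic is distributed and localized; the grid of candidate sink sites, the hop distance, and the helper names below are assumptions made for the sketch), the sink repeatedly moves to the adjacent candidate site whose surrounding nodes have the highest residual energy.

```python
import numpy as np

def gmre_next_site(sink, sites, node_pos, node_energy, radius=30.0, hop=50.0):
    """GMRE-style move: among candidate sites within one hop of the current sink
    position, pick the site whose nearby sensors hold the most residual energy."""
    def area_energy(site):
        d = np.linalg.norm(node_pos - site, axis=1)
        return node_energy[d <= radius].sum()

    dists = np.linalg.norm(sites - sink, axis=1)
    neighbors = sites[(dists > 0) & (dists <= hop)]
    if len(neighbors) == 0:
        return sink
    best = max(neighbors, key=area_energy)
    # only move if the new area is strictly better than staying put
    return best if area_energy(best) > area_energy(sink) else sink

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    nodes = rng.uniform(0, 100, size=(200, 2))        # static sensor nodes
    energy = rng.uniform(0.5, 1.0, size=200)          # residual energy per node
    grid = np.array([[x, y] for x in range(0, 101, 25)
                     for y in range(0, 101, 25)], dtype=float)
    sink = np.array([50.0, 50.0])
    print(gmre_next_site(sink, grid, nodes, energy))
```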
Early DoS/DDoS Detection Method using Short-term Statistics Early detection methods are required to prevent DoS/DDoS attacks. Entropy-based detection methods have been classified into long-term entropy, based on the observation of more than 10,000 packets, and short-term entropy, based on fewer than 10,000 packets. Long-term entropy fluctuates less, which makes it easy to detect anomalous accesses with a threshold, but it is weak at the early stage of an attack and has difficulty tracing short-term attacks. In this paper, we propose and evaluate a DoS/DDoS detection method based on short-term entropy, focusing on early detection. First, a preliminary experiment identified the effective window widths: 50 packets for DDoS and 500 packets for slow DoS attacks. Second, we showed that the type of attack can be classified using the distribution of the mean and standard deviation of the entropy. In addition, we injected pseudo attack packets into normal traffic, calculated the entropy, and carried out a test of significance. When the number of attacking packets equals the number of arriving packets, the method achieved high detection performance with a false-negative rate of 5%, demonstrating its effectiveness.
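To make the short-term-entropy idea concrete, here is a minimal sketch that computes the Shannon entropy of source addresses over sliding windows of 50 packets (the DDoS window width reported above) and flags windows whose entropy drops far below a baseline. The threshold rule and the synthetic traffic are illustrative assumptions, not the paper's exact test of significance.

```python
import math
import random
from collections import Counter

def window_entropy(packets):
    """Shannon entropy (bits) of the source-address distribution in one window."""
    counts = Counter(packets)
    n = len(packets)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def detect(packet_stream, window=50, n_baseline=20, k=3.0):
    """Flag windows whose entropy is more than k standard deviations below
    the mean of the first n_baseline (assumed attack-free) windows."""
    windows = [packet_stream[i:i + window]
               for i in range(0, len(packet_stream) - window + 1, window)]
    ents = [window_entropy(w) for w in windows]
    base = ents[:n_baseline]
    mu = sum(base) / len(base)
    sd = (sum((e - mu) ** 2 for e in base) / len(base)) ** 0.5 or 1e-9
    return [i for i, e in enumerate(ents[n_baseline:], start=n_baseline)
            if e < mu - k * sd]        # many packets from few sources => low entropy

if __name__ == "__main__":
    random.seed(0)
    normal = [f"10.0.0.{random.randint(1, 200)}" for _ in range(2000)]
    attack = ["10.0.0.66"] * 500       # one source flooding => entropy collapses
    print(detect(normal + attack))     # indices of the flagged (attack) windows
```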
Adaptive fuzzy tracking control for switched uncertain strict-feedback nonlinear systems. •Adaptive tracking control for switched strict-feedback nonlinear systems is proposed.•The generalized fuzzy hyperbolic model is used to approximate nonlinear functions.•The designed controller has fewer design parameters comparing with existing methods.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores (score_0 to score_13): 1.025931, 0.020495, 0.018245, 0.014294, 0.010948, 0.003909, 0.00019, 0.000091, 0, 0, 0, 0, 0, 0
Rebalancing Bike Sharing Systems: A Multi-source Data Smart Optimization Bike sharing systems, aiming at providing the missing links in public transportation systems, are becoming popular in urban cities. A key to success for a bike sharing system is the effectiveness of its rebalancing operations, that is, the effort of restoring the number of bikes at each station to its target value by routing vehicles through pick-up and drop-off operations. There are two major issues in this bike rebalancing problem: the determination of station inventory target levels and the large-scale multiple capacitated vehicle routing optimization with outlier stations. The key challenges include demand prediction accuracy for inventory target level determination, and an effective optimizer for vehicle routing with hundreds of stations. To this end, in this paper, we develop a Meteorology Similarity Weighted K-Nearest-Neighbor (MSWK) regressor to predict the station pick-up demand based on large-scale historic trip records. Based on further analysis of the station network constructed by station-station connections and the trip duration, we propose an inter-station bike transition (ISBT) model to predict the station drop-off demand. Then, we provide a mixed integer nonlinear programming (MINLP) formulation of the multiple capacitated bike routing problem with the objective of minimizing total travel distance. To solve it, we propose an Adaptive Capacity Constrained K-centers Clustering (AdaCCKC) algorithm to separate outlier stations (whose demands are so large that they make the optimization infeasible) and group the remaining stations into clusters within which one vehicle is scheduled to redistribute bikes between stations. In this way, the large-scale multiple-vehicle routing problem is reduced to an inner-cluster single-vehicle routing problem with guaranteed feasible solutions. Finally, extensive experimental results on the NYC Citi Bike system show the advantages of our approach for bike demand prediction and large-scale bike rebalancing optimization.
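The MSWK regressor above weights historical days by meteorological similarity when predicting station pick-up demand. Since the paper's exact kernel and features are not given here, the sketch below uses a Gaussian similarity over a generic weather-feature vector as an illustrative stand-in.

```python
import numpy as np

def mswk_predict(hist_weather, hist_demand, query_weather, k=5, bandwidth=1.0):
    """Meteorology-similarity-weighted KNN sketch: the predicted demand is the
    similarity-weighted average of the demands of the k most similar days.
    hist_weather: (n_days, n_features), hist_demand: (n_days,)."""
    d = np.linalg.norm(hist_weather - query_weather, axis=1)
    idx = np.argsort(d)[:k]                             # k most similar historical days
    w = np.exp(-(d[idx] ** 2) / (2 * bandwidth ** 2))   # Gaussian similarity weights
    return float(np.dot(w, hist_demand[idx]) / w.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weather = rng.normal(size=(100, 3))   # e.g. temperature, wind, humidity (normalized)
    demand = 50 + 10 * weather[:, 0] + rng.normal(scale=2, size=100)  # synthetic pick-ups
    print(round(mswk_predict(weather, demand, np.array([0.5, 0.0, 0.0])), 1))
```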
Incentives and Redistribution in Homogeneous Bike-Sharing Systems with Stations of Finite Capacity Bike-sharing systems are becoming important for urban transportation. In these systems, users arrive at a station, pick up a bike, use it for a while, and then return it to another station of their choice. Each station has a finite capacity: it cannot host more bikes than its capacity. We propose a stochastic model of a homogeneous bike-sharing system and study the effect of the randomness of user choices on the number of problematic stations, i.e., stations that, at a given time, have no bikes available or no available spots for bikes to be returned to. We quantify the influence of the station capacities, and we compute the fleet size that is optimal in terms of minimizing the proportion of problematic stations. Even in a homogeneous city, the system exhibits poor performance: the minimal proportion of problematic stations is of the order of the inverse of the capacity. We show that simple incentives, such as suggesting that users return to the least loaded of two stations, improve the situation by an exponential factor. We also compute the rate at which bikes have to be redistributed by trucks for a given quality of service; this rate is of the order of the inverse of the station capacity. For all cases considered, the fleet size that corresponds to the best performance is half of the total number of spots plus a few more; this surplus can be computed in closed form as a function of the system parameters and corresponds to the average number of bikes in circulation.
Dynamic Bike Reposition: A Spatio-Temporal Reinforcement Learning Approach. Bike-sharing systems are widely deployed in many major cities, while the jammed and empty stations in them lead to severe customer loss. Currently, operators try to constantly reposition bikes among stations when the system is operating. However, how to efficiently reposition to minimize the customer loss in a long period remains unsolved. We propose a spatio-temporal reinforcement learning based bike reposition model to deal with this problem. Firstly, an inter-independent inner-balance clustering algorithm is proposed to cluster stations into groups. Clusters obtained have two properties, i.e. each cluster is inner-balanced and independent from the others. As there are many trikes repositioning in a very large system simultaneously, clustering is necessary to reduce the problem complexity. Secondly, we allocate multiple trikes to each cluster to conduct inner-cluster bike reposition. A spatio-temporal reinforcement learning model is designed for each cluster to learn a reposition policy in it, targeting at minimizing its customer loss in a long period. To learn each model, we design a deep neural network to estimate its optimal long-term value function, from which the optimal policy can be easily inferred. Besides formulating the model in a multi-agent way, we further reduce its training complexity by two spatio-temporal pruning rules. Thirdly, we design a system simulator based on two predictors to train and evaluate the reposition model. Experiments on real-world datasets from Citi Bike are conducted to confirm the effectiveness of our model.
Effective Recycling Planning for Dockless Sharing Bikes. Bike-sharing systems have become increasingly popular in urban transportation systems in recent years because of their convenience. However, due to high daily usage and a lack of effective maintenance, the number of bikes in good condition decreases significantly, and vast piles of broken bikes appear in many big cities. As a result, it is more difficult for regular users to get a working bike, which causes problems both economically and environmentally. Therefore, building an effective broken-bike prediction and recycling model becomes a crucial task to promote cycling behavior. In this paper, we propose a predictive model to detect broken bikes and recommend an optimal recycling program based on large-scale real-world sharing-bike data. We incorporate realistic constraints to formulate our problem and introduce a flexible objective function to tune the trade-off between the broken probability and the number of recycled bikes. Finally, we provide extensive experimental results and case studies to demonstrate the effectiveness of our approach.
CEM: A Convolutional Embedding Model for Predicting Next Locations The widespread use of positioning devices and cameras has given rise to a deluge of trajectory data (e.g., vehicle passage records and check-in data), offering great opportunities for location prediction. One problem that has received much attention recently is predicting next locations for an object given previous locations. Several location prediction methods based on embedding learning have bee...
Deep Sequence Learning with Auxiliary Information for Traffic Prediction. Predicting traffic conditions from online route queries is a challenging task as there are many complicated interactions over the roads and crowds involved. In this paper, we intend to improve traffic prediction by appropriate integration of three kinds of implicit but essential factors encoded in auxiliary information. We do this within an encoder-decoder sequence learning framework that integrates the following data: 1) offline geographical and social attributes. For example, the geographical structure of roads or public social events such as national celebrations; 2) road intersection information. In general, traffic congestion occurs at major junctions; 3) online crowd queries. For example, when many online queries are issued for the same destination due to a public performance, the traffic around that destination will potentially become heavier after a while. Qualitative and quantitative experiments on a real-world dataset from Baidu have demonstrated the effectiveness of our framework.
Wireless sensor networks: a survey This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
DeepFace: Closing the Gap to Human-Level Performance in Face Verification In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4, 000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35% on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27%, closely approaching human-level performance.
Markov games as a framework for multi-agent reinforcement learning In the Markov decision process (MDP) formalization of reinforcement learning, a single adaptive agent interacts with an environment defined by a probabilistic transition function. In this solipsistic view, secondary agents can only be part of the environment and are therefore fixed in their behavior. The framework of Markov games allows us to widen this view to include multiple adaptive agents with interacting or competing goals. This paper considers a step in this direction in which exactly two agents with diametrically opposed goals share an environment. It describes a Q-learning-like algorithm for finding optimal policies and demonstrates its application to a simple two-player game in which the optimal policy is probabilistic.
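The core step of the Q-learning-like algorithm for two-player zero-sum Markov games described above is replacing the max in the Bellman backup with the minimax value of the stage-game matrix Q(s, ·, ·). As a hedged sketch, the snippet below computes that minimax value and the corresponding mixed policy for one state's Q-matrix via linear programming; the example payoff matrix is matching pennies, not anything from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_value(Q):
    """Value and maximin mixed policy of a zero-sum matrix game.
    Q[a, o] is the payoff to the maximizer for action a against opponent action o.
    Solves: max_pi min_o sum_a pi[a] * Q[a, o]."""
    n_a, n_o = Q.shape
    c = np.zeros(n_a + 1)                 # variables: pi_1..pi_{n_a}, v ; minimize -v
    c[-1] = -1.0
    A_ub = np.hstack([-Q.T, np.ones((n_o, 1))])   # v - sum_a pi[a] Q[a, o] <= 0
    b_ub = np.zeros(n_o)
    A_eq = np.concatenate([np.ones(n_a), [0.0]]).reshape(1, -1)   # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * n_a + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n_a], res.x[-1]

if __name__ == "__main__":
    Q = np.array([[1.0, -1.0], [-1.0, 1.0]])   # matching pennies: value 0, policy (0.5, 0.5)
    pi, v = minimax_value(Q)
    print(pi, v)
    # inside a minimax-Q learner, the temporal-difference backup would then use v:
    # Q[s, a, o] += alpha * (reward + gamma * v_next - Q[s, a, o])
```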
Scalable and efficient provable data possession. Storage outsourcing is a rising trend which prompts a number of interesting security issues, many of which have been extensively investigated in the past. However, Provable Data Possession (PDP) is a topic that has only recently appeared in the research literature. The main issue is how to frequently, efficiently and securely verify that a storage server is faithfully storing its client's (potentially very large) outsourced data. The storage server is assumed to be untrusted in terms of both security and reliability. (In other words, it might maliciously or accidentally erase hosted data; it might also relegate it to slow or off-line storage.) The problem is exacerbated by the client being a small computing device with limited resources. Prior work has addressed this problem using either public key cryptography or requiring the client to outsource its data in encrypted form. In this paper, we construct a highly efficient and provably secure PDP technique based entirely on symmetric key cryptography, while not requiring any bulk encryption. Also, in contrast with its predecessors, our PDP technique allows outsourcing of dynamic data, i.e., it efficiently supports operations such as block modification, deletion and append.
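To illustrate the flavor of a symmetric-key PDP scheme, the toy sketch below shows only the precompute-challenge-verify pattern: the client precomputes HMAC-based tokens over random block subsets before outsourcing, and later spot-checks the server. This is not the paper's construction (which also handles dynamic updates and hides the challenge structure); block size, token counts, and function names are assumptions.

```python
import hashlib, hmac, os, random

BLOCK = 4096

def blocks(data):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def precompute_tokens(key, data, n_tokens=10, per_challenge=5, seed=42):
    """Client, before outsourcing: token = HMAC(key, challenge_id || digest of blocks)."""
    rng = random.Random(seed)
    bs = blocks(data)
    tokens = []
    for t in range(n_tokens):
        idx = rng.sample(range(len(bs)), per_challenge)
        digest = hashlib.sha256(b"".join(bs[i] for i in idx)).digest()
        tag = hmac.new(key, str(t).encode() + digest, hashlib.sha256).hexdigest()
        tokens.append((t, idx, tag))
    return tokens

def server_respond(stored_data, idx):
    """Server: prove possession by hashing the challenged blocks (no key needed)."""
    bs = blocks(stored_data)
    return hashlib.sha256(b"".join(bs[i] for i in idx)).digest()

def client_verify(key, token, response):
    t, _idx, tag = token
    expected = hmac.new(key, str(t).encode() + response, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    key = os.urandom(32)
    data = os.urandom(64 * BLOCK)                                   # the outsourced file
    tokens = precompute_tokens(key, data)
    t, idx, _ = tokens[0]
    print(client_verify(key, tokens[0], server_respond(data, idx)))  # True for an honest server
    bad = idx[0]                                                     # corrupt a challenged block
    corrupted = bytearray(data)
    corrupted[bad * BLOCK:(bad + 1) * BLOCK] = b"\x00" * BLOCK
    print(client_verify(key, tokens[0], server_respond(bytes(corrupted), idx)))  # False
```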
Well-Solvable Special Cases of the Traveling Salesman Problem: A Survey. The traveling salesman problem (TSP) belongs to the most basic, most important, and most investigated problems in combinatorial optimization. Although it is an NP-hard problem, many of its special cases can be solved efficiently in polynomial time. We survey these special cases with emphasis on the results that have been obtained during the decade 1985-1995. This survey complements an earlier survey from 1985 compiled by Gilmore, Lawler, and Shmoys [The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization, Wiley, Chichester, pp. 87-143].
Reinforcement Q-learning for optimal tracking control of linear discrete-time systems with unknown dynamics. In this paper, a novel approach based on the Q-learning algorithm is proposed to solve the infinite-horizon linear quadratic tracker (LQT) for unknown discrete-time systems in a causal manner. It is assumed that the reference trajectory is generated by a linear command generator system. An augmented system composed of the original system and the command generator is constructed and it is shown that the value function for the LQT is quadratic in terms of the state of the augmented system. Using the quadratic structure of the value function, a Bellman equation and an augmented algebraic Riccati equation (ARE) for solving the LQT are derived. In contrast to the standard solution of the LQT, which requires the solution of an ARE and a noncausal difference equation simultaneously, in the proposed method the optimal control input is obtained by only solving an augmented ARE. A Q-learning algorithm is developed to solve online the augmented ARE without any knowledge about the system dynamics or the command generator. Convergence to the optimal solution is shown. A simulation example is used to verify the effectiveness of the proposed control scheme.
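The abstract above reduces the LQT to a single augmented algebraic Riccati equation on the plant state stacked with the command-generator state. As a model-based reference point only (the paper's contribution is the model-free Q-learning solution, which is not reproduced here), the sketch below forms the augmented system and iterates the discounted Riccati recursion to obtain a tracking gain; the plant, command generator, weights and discount factor are illustrative.

```python
import numpy as np

def lqt_gain(A, B, C, F, Qe, R, gamma=0.95, iters=2000):
    """Model-based discounted LQT on the augmented state X = [x; r]:
    x_{k+1} = A x_k + B u_k,  r_{k+1} = F r_k,  tracking error e = C x - r.
    Iterates P = Q1 + g*A1'PA1 - g^2*A1'PB1 (R + g*B1'PB1)^{-1} B1'PA1
    and returns K such that u = -K [x; r]."""
    n, m = B.shape
    p = F.shape[0]
    A1 = np.block([[A, np.zeros((n, p))], [np.zeros((p, n)), F]])
    B1 = np.vstack([B, np.zeros((p, m))])
    C1 = np.hstack([C, -np.eye(p)])
    Q1 = C1.T @ Qe @ C1
    P = np.zeros((n + p, n + p))
    for _ in range(iters):
        G = R + gamma * B1.T @ P @ B1
        P = Q1 + gamma * A1.T @ P @ A1 \
            - gamma ** 2 * A1.T @ P @ B1 @ np.linalg.solve(G, B1.T @ P @ A1)
    return gamma * np.linalg.solve(R + gamma * B1.T @ P @ B1, B1.T @ P @ A1)

if __name__ == "__main__":
    A = np.array([[1.0, 0.1], [0.0, 0.9]])      # illustrative plant
    B = np.array([[0.0], [0.1]])
    C = np.array([[1.0, 0.0]])                  # tracked output
    F = np.array([[1.0]])                       # command generator: constant reference
    K = lqt_gain(A, B, C, F, Qe=np.array([[100.0]]), R=np.array([[1.0]]))
    x, r = np.zeros((2, 1)), np.array([[1.0]])
    for _ in range(300):
        u = -K @ np.vstack([x, r])
        x, r = A @ x + B @ u, F @ r
    print((C @ x - r).item())   # residual tracking error (nonzero because of the discount)
```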
Multi-stream CNN: Learning representations based on human-related regions for action recognition. •Presenting a multi-stream CNN architecture to incorporate multiple complementary features trained in appearance and motion networks.•Demonstrating that using full-frame, human body, and motion-salient body part regions together is effective to improve recognition performance.•Proposing methods to detect the actor and motion-salient body part precisely.•Verifying that high-quality flow is critically important to learn accurate video representations for action recognition.
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
Scores (score_0 to score_13): 1.11, 0.11, 0.1, 0.1, 0.05, 0.001429, 0, 0, 0, 0, 0, 0, 0, 0
Federated Vehicular Transformers and Their Federations: Privacy-Preserving Computing and Cooperation for Autonomous Driving Cooperative computing is promising to enhance the performance and safety of autonomous vehicles benefiting from the increase in the amount, diversity as well as scope of data resources. However, effective and privacy-preserving utilization of multi-modal and multi-source data remains an open challenge during the construction of cooperative mechanisms. Recently, Transformers have demonstrated their potential in the unified representation of multi-modal features, which provides a new perspective for effective representation and fusion of diverse inputs of intelligent vehicles. Federated learning proposes a distributed learning scheme and is hopeful to achieve privacy-secure sharing of data resources among different vehicles. Towards privacy-preserving computing and cooperation in autonomous driving, this paper reviews recent progress of Transformers, federated learning as well as cooperative perception, and proposes a hierarchical structure of Transformers for intelligent vehicles which is comprised of Vehicular Transformers, Federated Vehicular Transformers and the Federation of Vehicular Transformers to exploit their potential in privacy-preserving collaboration.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
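As a compact illustration of the method described above, the sketch below computes sentence-level BLEU with clipped (modified) n-gram precisions up to 4-grams and the brevity penalty; production implementations add smoothing and corpus-level aggregation, which are omitted here.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram precisions
    times the brevity penalty. candidate/references are token lists."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = ngrams(candidate, n)
        if not cand:
            return 0.0
        # clip each candidate n-gram count by its maximum count in any reference
        max_ref = Counter()
        for ref in references:
            for gram, cnt in ngrams(ref, n).items():
                max_ref[gram] = max(max_ref[gram], cnt)
        clipped = sum(min(cnt, max_ref[gram]) for gram, cnt in cand.items())
        precisions.append(clipped / sum(cand.values()))
    if min(precisions) == 0:
        return 0.0
    # effective reference length: the reference length closest to the candidate's
    ref_len = min((abs(len(r) - len(candidate)), len(r)) for r in references)[1]
    bp = 1.0 if len(candidate) > ref_len else math.exp(1 - ref_len / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

if __name__ == "__main__":
    cand = "the quick brown fox jumps over the dog".split()
    refs = ["the quick brown fox jumps over the lazy dog".split()]
    print(round(bleu(cand, refs), 3))
```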
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in the positive and negative time directions. The structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported.
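For readers who want to see the bidirectional idea in code, the minimal sketch below (using PyTorch, which is an implementation choice, not something used in the paper) runs a recurrent layer over a sequence in both time directions and concatenates the forward and backward hidden states at every frame, which is what makes future context available at each time step.

```python
import torch
import torch.nn as nn

class BRNNClassifier(nn.Module):
    """Per-frame classifier on top of a bidirectional RNN: at each time step the
    output sees both past (forward pass) and future (backward pass) context."""
    def __init__(self, n_features, n_hidden, n_classes):
        super().__init__()
        self.rnn = nn.RNN(n_features, n_hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * n_hidden, n_classes)   # forward + backward states

    def forward(self, x):                 # x: (batch, time, n_features)
        h, _ = self.rnn(x)                # h: (batch, time, 2 * n_hidden)
        return self.head(h)               # per-frame class scores

if __name__ == "__main__":
    model = BRNNClassifier(n_features=13, n_hidden=32, n_classes=5)  # e.g. MFCC frames
    frames = torch.randn(4, 100, 13)      # toy batch: 4 sequences of 100 frames
    print(model(frames).shape)            # torch.Size([4, 100, 5])
```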
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended for Bob, and non-repudiation of receipt evidences destined for Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with a trusted third party (TTP). For the latter ones we discuss the evolution of the TTP's involvement and, among others, describe the most recent protocol using a transparent TTP. We also discuss some ad hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb for designing the GA operators and selecting GA parameters; instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probabilistically. Because there is no crossover rate or mutation rate to select, the proposed improved GA can be applied to a problem more easily than conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional GA and other methods.
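Since the abstract does not spell out the exact conditions under which crossover and mutation are triggered, the sketch below is only a schematic stand-in: a GA for set covering in which the operators are applied conditionally (here, crossover only when the parents differ enough and mutation only when the best fitness has stagnated) rather than with fixed probabilities.

```python
import random

def fitness(chromosome, sets, universe, penalty=10.0):
    """Set covering: minimize chosen sets, heavily penalizing uncovered elements."""
    covered = set().union(*(sets[i] for i, g in enumerate(chromosome) if g)) \
        if any(chromosome) else set()
    return sum(chromosome) + penalty * len(universe - covered)

def conditional_ga(sets, universe, pop_size=40, generations=200, seed=0):
    rng = random.Random(seed)
    n = len(sets)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = min(pop, key=lambda c: fitness(c, sets, universe))
    stagnant = 0
    for _ in range(generations):
        new_pop = [best[:]]                                   # elitism
        while len(new_pop) < pop_size:
            p1, p2 = rng.sample(pop, 2)
            child = p1[:]
            # conditional crossover: only if the parents differ enough to be worth mixing
            if sum(a != b for a, b in zip(p1, p2)) > n // 4:
                cut = rng.randrange(1, n)
                child = p1[:cut] + p2[cut:]
            # conditional mutation: only when the search has stagnated
            if stagnant >= 5:
                child[rng.randrange(n)] ^= 1
            new_pop.append(child)
        pop = new_pop
        gen_best = min(pop, key=lambda c: fitness(c, sets, universe))
        if fitness(gen_best, sets, universe) < fitness(best, sets, universe):
            best, stagnant = gen_best, 0
        else:
            stagnant += 1
    return best

if __name__ == "__main__":
    universe = set(range(10))
    sets = [{0, 1, 2}, {2, 3, 4}, {4, 5, 6}, {6, 7, 8}, {8, 9, 0}, {1, 3, 5, 7, 9}]
    sol = conditional_ga(sets, universe)
    print(sol, fitness(sol, sets, universe))
```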
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal to develop interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which rise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in specific, in this paper, we enfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidate it based on our comprehensive empirical experience with research in kindergarten settings, both laboratory and real-world contexts, and (iii) provide a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
Scores (score_0 to score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A Personalizable Driver Steering Model Capable of Predicting Driver Behaviors in Vehicle Collision Avoidance Maneuvers. In recent years, significant emphases and efforts have been placed on developing and implementing advanced driver assistance systems (ADAS). These systems need to work with human drivers to increase vehicle occupant safety, control, and performance in both ordinary and emergency driving situations. To aid such cooperation between human drivers and ADAS, driver models are necessary to replicate and...
Analysing user physiological responses for affective video summarisation. Video summarisation techniques aim to abstract the most significant content from a video stream. This is typically achieved by processing low-level image, audio and text features which are still quite disparate from the high-level semantics that end users identify with (the ‘semantic gap’). Physiological responses are potentially rich indicators of memorable or emotionally engaging video content for a given user. Consequently, we investigate whether they may serve as a suitable basis for a video summarisation technique by analysing a range of user physiological response measures, specifically electro-dermal response (EDR), respiration amplitude (RA), respiration rate (RR), blood volume pulse (BVP) and heart rate (HR), in response to a range of video content in a variety of genres including horror, comedy, drama, sci-fi and action. We present an analysis framework for processing the user responses to specific sub-segments within a video stream based on percent rank value normalisation. The application of the analysis framework reveals that users respond significantly to the most entertaining video sub-segments in a range of content domains. Specifically, horror content seems to elicit significant EDR, RA, RR and BVP responses, and comedy content elicits comparatively lower levels of EDR, but does seem to elicit significant RA, RR, BVP and HR responses. Drama content seems to elicit less significant physiological responses in general, and both sci-fi and action content seem to elicit significant EDR responses. We discuss the implications this may have for future affective video summarisation approaches.
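The analysis framework mentioned above normalizes each physiological measure by percent rank before comparing responses across users and video sub-segments. The snippet below shows one plausible way to do that normalization; the exact windowing and aggregation used in the paper are not reproduced.

```python
import numpy as np

def percent_rank(signal):
    """Map each sample of a physiological signal (e.g. EDR, HR) to its percent
    rank in [0, 1] within that user's session, making users comparable."""
    signal = np.asarray(signal, dtype=float)
    ranks = signal.argsort().argsort()               # 0-based rank of each sample
    return ranks / (len(signal) - 1)

def segment_scores(signal, segment_bounds):
    """Mean percent-rank response per video sub-segment, given (start, end) indices."""
    pr = percent_rank(signal)
    return [float(pr[s:e].mean()) for s, e in segment_bounds]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    edr = np.concatenate([rng.normal(0.2, 0.05, 300),      # calm segment
                          rng.normal(0.8, 0.05, 300)])     # arousing segment
    print(segment_scores(edr, [(0, 300), (300, 600)]))     # second segment scores higher
```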
On the roles of eye gaze and head dynamics in predicting driver's intent to change lanes Driver behavioral cues may present a rich source of information and feedback for future intelligent advanced driver-assistance systems (ADASs). With the design of a simple and robust ADAS in mind, we are interested in determining the most important driver cues for distinguishing driver intent. Eye gaze may provide a more accurate proxy than head movement for determining driver attention, whereas the measurement of head motion is less cumbersome and more reliable in harsh driving conditions. We use a lane-change intent-prediction system (McCall et al., 2007) to determine the relative usefulness of each cue for determining intent. Various combinations of input data are presented to a discriminative classifier, which is trained to output a prediction of probable lane-change maneuver at a particular point in the future. Quantitative results from a naturalistic driving study are presented and show that head motion, when combined with lane position and vehicle dynamics, is a reliable cue for lane-change intent prediction. The addition of eye gaze does not improve performance as much as simpler head dynamics cues. The advantage of head data over eye data is shown to be statistically significant (p
Detection of Driver Fatigue Caused by Sleep Deprivation This paper aims to provide reliable indications of driver drowsiness based on the characteristics of driver-vehicle interaction. A test bed was built under a simulated driving environment, and a total of 12 subjects participated in two experiment sessions requiring different levels of sleep (partial sleep-deprivation versus no sleep-deprivation) before the experiment. The performance of the subjects was analyzed in a series of stimulus-response and routine driving tasks, which revealed the performance differences of drivers under different sleep-deprivation levels. The experiments further demonstrated that sleep deprivation had greater effect on rule-based than on skill-based cognitive functions: when drivers were sleep-deprived, their performance of responding to unexpected disturbances degraded, while they were robust enough to continue the routine driving tasks such as lane tracking, vehicle following, and lane changing. In addition, we presented both qualitative and quantitative guidelines for designing drowsy-driver detection systems in a probabilistic framework based on the paradigm of Bayesian networks. Temporal aspects of drowsiness and individual differences of subjects were addressed in the framework.
Online Prediction of Driver Distraction Based on Brain Activity Patterns This paper presents a new computational framework for early detection of driver distractions (map viewing) using brain activity measured by electroencephalographic (EEG) signals. Compared with most studies in the literature, which are mainly focused on the classification of distracted and nondistracted periods, this study proposes a new framework to prospectively predict the start and end of a distraction period, defined by map viewing. The proposed prediction algorithm was tested on a data set of continuous EEG signals recorded from 24 subjects. During the EEG recordings, the subjects were asked to drive from an initial position to a destination using a city map in a simulated driving environment. The overall accuracy values for the prediction of the start and the end of map viewing were 81% and 70%, respectively. The experimental results demonstrated that the proposed algorithm can predict the start and end of map viewing with relatively high accuracy and can be generalized to individual subjects. The outcome of this study has a high potential to improve the design of future intelligent navigation systems. Prediction of the start of map viewing can be used to provide route information based on a driver's needs and consequently avoid map-viewing activities. Prediction of the end of map viewing can be used to provide warnings for potential long map-viewing durations. Further development of the proposed framework and its applications in driver-distraction predictions are also discussed.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
A CRNN module for hand pose estimation. •The input is no longer a single frame, but a sequence of several adjacent frames.•A CRNN module is proposed, which is basically the same as the standard RNN, except that it uses convolutional connection.•When the difference in the feature image of a certain layer is large, it is better to add CRNN / RNN after this layer.•Our method has the lowest error of output compared to the current state-of-the-art methods.
Deep convolutional neural network-based Bernoulli heatmap for head pose estimation Head pose estimation is a crucial problem for many tasks, such as driver attention, fatigue detection, and human behaviour analysis. It is well known that neural networks are better at handling classification problems than regression problems. It is an extremely nonlinear process to let the network output the angle value directly for optimization learning, and the weight constraint of the loss function will be relatively weak. This paper proposes a novel Bernoulli heatmap for head pose estimation from a single RGB image. Our method can achieve the positioning of the head area while estimating the angles of the head. The Bernoulli heatmap makes it possible to construct fully convolutional neural networks without fully connected layers and provides a new idea for the output form of head pose estimation. A deep convolutional neural network (CNN) structure with multiscale representations is adopted to maintain high-resolution information and low-resolution information in parallel. This kind of structure can maintain rich, high-resolution representations. In addition, channelwise fusion is adopted to make the fusion weights learnable instead of simple addition with equal weights. As a result, the estimation is spatially more precise and potentially more accurate. The effectiveness of the proposed method is empirically demonstrated by comparing it with other state-of-the-art methods on public datasets.
Reinforcement learning based data fusion method for multi-sensors In order to improve detection system robustness and reliability, multi-sensor fusion is used in modern air combat. In this paper, a data fusion method based on reinforcement learning is developed for multiple sensors. Initially, cubic B-spline interpolation is used to solve the time alignment problem of multi-source data. Then, the reinforcement learning based data fusion (RLBDF) method is proposed to obtain the fusion results. When prior knowledge of the target is available, the fusion accuracy is reinforced via the error between the fused value and the actual value; otherwise, the Fisher information is used as the reward instead. Simulation results verify that the developed method is feasible and effective for multi-sensor data fusion in air combat.
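The first step described above, aligning asynchronous multi-sensor streams with cubic B-spline interpolation, can be sketched as follows; SciPy's make_interp_spline is used as a stand-in interpolator and the sensor streams are synthetic, so this illustrates only the time-alignment stage, not the RLBDF fusion itself.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

def align_streams(streams, t_common):
    """Resample asynchronously sampled sensor streams onto a common time base
    using cubic (k=3) B-spline interpolation.
    streams: list of (timestamps, values) pairs -> (n_sensors, len(t_common)) array."""
    aligned = []
    for t, y in streams:
        spline = make_interp_spline(t, y, k=3)
        aligned.append(spline(t_common))
    return np.vstack(aligned)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # two sensors observing the same quantity, sampled at different rates and phases
    t1 = np.sort(rng.uniform(0, 10, 80)); y1 = np.sin(t1) + rng.normal(0, 0.01, 80)
    t2 = np.sort(rng.uniform(0, 10, 50)); y2 = np.sin(t2) + rng.normal(0, 0.02, 50)
    t_common = np.linspace(0.5, 9.5, 100)            # common time base for fusion
    fused_inputs = align_streams([(t1, y1), (t2, y2)], t_common)
    print(fused_inputs.shape)                         # (2, 100)
```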
Mobile Edge Computing Enabled 5G Health Monitoring for Internet of Medical Things: A Decentralized Game Theoretic Approach The prompt evolution of Internet of Medical Things (IoMT) promotes pervasive in-home health monitoring networks. However, excessive requirements of patients result in insufficient spectrum resources and communication overload. Mobile Edge Computing (MEC) enabled 5G health monitoring is conceived as a favorable paradigm to tackle such an obstacle. In this paper, we construct a cost-efficient in-home health monitoring system for IoMT by dividing it into two sub-networks, i.e., intra-Wireless Body Area Networks (WBANs) and beyond-WBANs. Highlighting the characteristics of IoMT, the cost of patients depends on medical criticality, Age of Information (AoI) and energy consumption. For intra-WBANs, a cooperative game is formulated to allocate the wireless channel resources. While for beyond-WBANs, considering the individual rationality and potential selfishness, a decentralized non-cooperative game is proposed to minimize the system-wide cost in IoMT. We prove that the proposed algorithm can reach a Nash equilibrium. In addition, the upper bound of the algorithm time complexity and the number of patients benefiting from MEC is theoretically derived. Performance evaluations demonstrate the effectiveness of our proposed algorithm with respect to the system-wide cost and the number of patients benefiting from MEC.
Artificial fish swarm algorithm: a survey of the state-of-the-art, hybridization, combinatorial and indicative applications The AFSA (artificial fish-swarm algorithm) is one of the best optimization methods among the swarm intelligence algorithms. This algorithm is inspired by the collective movement of fish and their various social behaviors. Based on a series of instinctive behaviors, the fish always try to maintain their colonies and accordingly demonstrate intelligent behaviors. Searching for food, immigration and dealing with dangers all happen in a social form, and interactions between all fish in a group result in an intelligent social behavior. This algorithm has many advantages, including high convergence speed, flexibility, fault tolerance and high accuracy. This paper is a review of the AFSA algorithm and describes the evolution of this algorithm along with all improvements, its combination with various methods as well as its applications. There are many optimization methods which have an affinity with this method, and combining them can improve its performance. Its disadvantages include high time complexity, lack of balance between global and local search, and not benefiting from the experiences of group members for subsequent movements.
Short-Term Traffic Flow Forecasting: An Experimental Comparison of Time-Series Analysis and Supervised Learning The literature on short-term traffic flow forecasting has undergone great development recently. Many works, describing a wide variety of different approaches, which very often share similar features and ideas, have been published. However, publications presenting new prediction algorithms usually employ different settings, data sets, and performance measurements, making it difficult to infer a clear picture of the advantages and limitations of each model. The aim of this paper is twofold. First, we review existing approaches to short-term traffic flow forecasting methods under the common view of probabilistic graphical models, presenting an extensive experimental comparison, which proposes a common baseline for their performance analysis and provides the infrastructure to operate on a publicly available data set. Second, we present two new support vector regression models, which are specifically devised to benefit from typical traffic flow seasonality and are shown to represent an interesting compromise between prediction accuracy and computational efficiency. The SARIMA model coupled with a Kalman filter is the most accurate model; however, the proposed seasonal support vector regressor turns out to be highly competitive when performing forecasts during the most congested periods.
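To make the "seasonal support vector regressor" idea concrete at a sketch level (the paper's exact kernel and feature construction are not reproduced), the snippet below builds lagged features that include the observation from exactly one seasonal period earlier and fits a standard epsilon-SVR from scikit-learn on synthetic flow data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def make_features(series, season=96, short_lags=(1, 2, 3)):
    """Feature rows: recent lags plus the value one seasonal period ago."""
    X, y = [], []
    for t in range(season, len(series)):
        X.append([series[t - l] for l in short_lags] + [series[t - season]])
        y.append(series[t])
    return np.array(X), np.array(y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    season = 96                                   # e.g. 96 x 15-min intervals per day
    t = np.arange(season * 30)                    # 30 days of synthetic flow data
    flow = 200 + 80 * np.sin(2 * np.pi * t / season) + rng.normal(0, 10, t.size)
    X, y = make_features(flow, season)
    split = len(y) - season                       # hold out the last "day"
    model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=1.0))
    model.fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    print(round(float(np.mean(np.abs(pred - y[split:]))), 2))   # mean absolute error
```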
TSCA: A Temporal-Spatial Real-Time Charging Scheduling Algorithm for On-Demand Architecture in Wireless Rechargeable Sensor Networks. The collaborative charging issue in Wireless Rechargeable Sensor Networks (WRSNs) is a popular research problem. With the help of wireless power transfer technology, electrical energy can be transferred from wireless charging vehicles (WCVs) to sensors, providing a new paradigm to prolong network lifetime. Existing techniques on collaborative charging usually take the periodical and deterministic approach, but neglect influences of non-deterministic factors such as topological changes and node failures, making them unsuitable for large-scale WRSNs. In this paper, we develop a temporal-spatial charging scheduling algorithm, namely TSCA, for the on-demand charging architecture. We aim to minimize the number of dead nodes while maximizing energy efficiency to prolong network lifetime. First, after gathering charging requests, a WCV will compute a feasible movement solution. A basic path planning algorithm is then introduced to adjust the charging order for better efficiency. Furthermore, optimizations are made in a global level. Then, a node deletion algorithm is developed to remove low efficient charging nodes. Lastly, a node insertion algorithm is executed to avoid the death of abandoned nodes. Extensive simulations show that, compared with state-of-the-art charging scheduling algorithms, our scheme can achieve promising performance in charging throughput, charging efficiency, and other performance metrics.
A novel adaptive dynamic programming based on tracking error for nonlinear discrete-time systems In this paper, to eliminate the tracking error by using adaptive dynamic programming (ADP) algorithms, a novel formulation of the value function is presented for the optimal tracking problem (TP) of nonlinear discrete-time systems. Unlike existing ADP methods, this formulation introduces the control input into the tracking error and ignores the quadratic form of the control input directly, which makes the boundedness and convergence of the value function independent of the discount factor. Based on the proposed value function, the optimal control policy can be deduced without considering the reference control input. Value iteration (VI) and policy iteration (PI) methods are applied to prove the optimality of the obtained control policy and to derive the monotonicity property and convergence of the iterative value function. Simulation examples realized with neural networks and the actor-critic structure are provided to verify the effectiveness of the proposed ADP algorithm.
Scores (score_0 to score_13): 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0
Scalable and efficient provable data possession. Storage outsourcing is a rising trend which prompts a number of interesting security issues, many of which have been extensively investigated in the past. However, Provable Data Possession (PDP) is a topic that has only recently appeared in the research literature. The main issue is how to frequently, efficiently and securely verify that a storage server is faithfully storing its client's (potentially very large) outsourced data. The storage server is assumed to be untrusted in terms of both security and reliability. (In other words, it might maliciously or accidentally erase hosted data; it might also relegate it to slow or off-line storage.) The problem is exacerbated by the client being a small computing device with limited resources. Prior work has addressed this problem using either public key cryptography or requiring the client to outsource its data in encrypted form. In this paper, we construct a highly efficient and provably secure PDP technique based entirely on symmetric key cryptography, while not requiring any bulk encryption. Also, in contrast with its predecessors, our PDP technique allows outsourcing of dynamic data, i.e., it efficiently supports operations such as block modification, deletion and append.
Witness indistinguishable and witness hiding protocols
Proofs of Storage from Homomorphic Identification Protocols Proofs of storage (PoS) are interactive protocols allowing a client to verify that a server faithfully stores a file. Previous work has shown that proofs of storage can be constructed from any homomorphic linear authenticator (HLA). The latter, roughly speaking, are signature/message authentication schemes where `tags' on multiple messages can be homomorphically combined to yield a `tag' on any linear combination of these messages. We provide a framework for building public-key HLAs from any identification protocol satisfying certain homomorphic properties. We then show how to turn any public-key HLA into a publicly-verifiable PoS with communication complexity independent of the file length and supporting an unbounded number of verifications. We illustrate the use of our transformations by applying them to a variant of an identification protocol by Shoup, thus obtaining the first unbounded-use PoS based on factoring (in the random oracle model).
Publicly Verifiable Computation of Polynomials Over Outsourced Data With Multiple Sources. Among all types of computations, the polynomial function evaluation is a fundamental, yet an important one due to its wide usage in the engineering and scientific problems. In this paper, we investigate publicly verifiable outsourced computation for polynomial evaluation with the support of multiple data sources. Our proposed verification scheme is universally applicable to all types of polynomial...
Betrayal, Distrust, and Rationality: Smart Counter-Collusion Contracts for Verifiable Cloud Computing. Cloud computing has become an irreversible trend. Together comes the pressing need for verifiability, to assure the client the correctness of computation outsourced to the cloud. Existing verifiable computation techniques all have a high overhead, thus if being deployed in the clouds, would render cloud computing more expensive than the on-premises counterpart. To achieve verifiability at a reasonable cost, we leverage game theory and propose a smart contract based solution. In a nutshell, a client lets two clouds compute the same task, and uses smart contracts to stimulate tension, betrayal and distrust between the clouds, so that rational clouds will not collude and cheat. In the absence of collusion, verification of correctness can be done easily by crosschecking the results from the two clouds. We provide a formal analysis of the games induced by the contracts, and prove that the contracts will be effective under certain reasonable assumptions. By resorting to game theory and smart contracts, we are able to avoid heavy cryptographic protocols. The client only needs to pay two clouds to compute in the clear, and a small transaction fee to use the smart contracts. We also conducted a feasibility study that involves implementing the contracts in Solidity and running them on the official Ethereum network.
Differentially private Naive Bayes learning over multiple data sources. To meet diverse requirements of data analysis, machine learning classifiers are widely provided as tools to evaluate data in many applications. Due to privacy concerns about disclosing sensitive information, data owners are often reluctant to release their data to an untrusted trainer for building a classifier. Some existing work has proposed privacy-preserving solutions for learning algorithms that allow a trainer to build a classifier over the data from a single owner. However, these solutions cannot be used directly in the multi-owner setting, where the owners do not fully trust one another. In this paper, we propose a novel privacy-preserving Naive Bayes learning scheme with multiple data sources. The proposed scheme enables a trainer to train a Naive Bayes classifier over a dataset provided jointly by different data owners, without the help of a trusted curator. The training result achieves ϵ-differential privacy, and the training does not break the privacy of any individual owner. We implement a prototype of the scheme and conduct corresponding experiments.
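As a rough illustration of the differential-privacy ingredient (not the multi-source aggregation protocol itself), the sketch below perturbs the per-class and per-feature counts of a categorical Naive Bayes model with Laplace noise before forming the classifier; the budget split across count tables and the clipping of noisy counts are assumptions made for the sketch.

    import numpy as np

    def dp_naive_bayes_counts(X, y, n_feature_values, epsilon, rng=None):
        """X: (n, d) categorical features coded as small ints; y: (n,) integer labels."""
        rng = rng or np.random.default_rng(0)
        d = X.shape[1]
        eps_part = epsilon / (d + 1)        # naive budget split: class counts + d feature tables
        class_counts, cond_counts = {}, {}
        for c in np.unique(y):
            Xc = X[y == c]
            class_counts[c] = len(Xc) + rng.laplace(scale=1.0 / eps_part)
            cond_counts[c] = []
            for j in range(d):
                counts = np.bincount(Xc[:, j], minlength=n_feature_values[j]).astype(float)
                counts += rng.laplace(scale=1.0 / eps_part, size=counts.shape)
                cond_counts[c].append(np.clip(counts, 1e-3, None))   # keep noisy counts positive
        return class_counts, cond_counts

    def predict(x, class_counts, cond_counts):
        total = sum(max(v, 1e-3) for v in class_counts.values())
        best, best_score = None, -np.inf
        for c, n_c in class_counts.items():
            score = np.log(max(n_c, 1e-3) / total)
            for j, xj in enumerate(x):
                probs = cond_counts[c][j] / cond_counts[c][j].sum()
                score += np.log(probs[xj])
            if score > best_score:
                best, best_score = c, score
        return best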
Ensuring attribute privacy protection and fast decryption for outsourced data security in mobile cloud computing. Although many users outsource their various data to clouds, data security and privacy concerns are still the biggest obstacles that hamper the widespread adoption of cloud computing. Anonymous attribute-based encryption (anonymous ABE) enables fine-grained access control over cloud storage and preserves receivers’ attribute privacy by hiding attribute information in ciphertexts. However, in existing anonymous ABE work, a user can only learn whether their attributes match a hidden policy by repeatedly attempting decryption. Moreover, each decryption usually requires many pairings, and the computation overhead grows with the complexity of the access formula. Hence, existing schemes suffer a severe efficiency drawback and are not suitable for mobile cloud computing, where users may be resource-constrained.
Distinctive Image Features from Scale-Invariant Keypoints This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
Mobile Edge Computing Enabled 5G Health Monitoring for Internet of Medical Things: A Decentralized Game Theoretic Approach The rapid evolution of the Internet of Medical Things (IoMT) promotes pervasive in-home health monitoring networks. However, the excessive requirements of patients result in insufficient spectrum resources and communication overload. Mobile Edge Computing (MEC) enabled 5G health monitoring is conceived as a favorable paradigm to tackle such an obstacle. In this paper, we construct a cost-efficient in-home health monitoring system for IoMT by dividing it into two sub-networks, i.e., intra-Wireless Body Area Networks (WBANs) and beyond-WBANs. Highlighting the characteristics of IoMT, the cost of patients depends on medical criticality, Age of Information (AoI) and energy consumption. For intra-WBANs, a cooperative game is formulated to allocate the wireless channel resources. For beyond-WBANs, considering individual rationality and potential selfishness, a decentralized non-cooperative game is proposed to minimize the system-wide cost in IoMT. We prove that the proposed algorithm can reach a Nash equilibrium. In addition, the upper bound of the algorithm's time complexity and the number of patients benefiting from MEC are theoretically derived. Performance evaluations demonstrate the effectiveness of our proposed algorithm with respect to the system-wide cost and the number of patients benefiting from MEC.
Interpolating view and scene motion by dynamic view morphing We introduce the problem of view interpolation for dynamic scenes. Our solution to this problem extends the concept of view morphing and retains the practical advantages of that method. We are specifically concerned with interpolating between two reference views captured at different times, so that there is a missing interval of time between when the views were taken. The synthetic interpolations produced by our algorithm portray one possible physically-valid version of what transpired in the scene during the missing time. It is assumed that each object in the original scene underwent a series of rigid translations. Dynamic view morphing can work with widely-spaced reference views, sparse point correspondences, and uncalibrated cameras. When the camera-to-camera transformation can be determined, the synthetic interpolation will portray scene objects moving along straight-line, constant-velocity trajectories in world space
Using noise inconsistencies for blind image forensics A commonly used tool to conceal the traces of tampering is the addition of locally random noise to the altered image regions. The noise degradation is the main cause of failure of many active or passive image forgery detection methods. Typically, the amount of noise is uniform across the entire authentic image. Adding locally random noise may cause inconsistencies in the image's noise. Therefore, the detection of various noise levels in an image may signify tampering. In this paper, we propose a novel method capable of dividing an investigated image into various partitions with homogenous noise levels. In other words, we introduce a segmentation method detecting changes in noise level. We assume the additive white Gaussian noise. Several examples are shown to demonstrate the proposed method's output. An extensive quantitative measure of the efficiency of the noise estimation part as a function of different noise standard deviations, region sizes and various JPEG compression qualities is proposed as well.
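One common way to obtain such local noise-level estimates is the wavelet-domain median absolute deviation estimator applied per tile; the sketch below uses it as a stand-in (the paper's actual segmentation into homogeneous-noise regions is not reproduced, and the tile size and the db1 wavelet are arbitrary choices).

    import numpy as np
    import pywt

    def local_noise_map(image, tile=64):
        """Estimate an AWGN standard deviation for each tile of a grayscale image."""
        h, w = image.shape
        sigmas = np.zeros((h // tile, w // tile))
        for r in range(h // tile):
            for c in range(w // tile):
                block = image[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile].astype(float)
                _, (_, _, hh) = pywt.dwt2(block, "db1")       # diagonal detail band is noise-dominated
                sigmas[r, c] = np.median(np.abs(hh)) / 0.6745  # robust (MAD-based) sigma estimate
        return sigmas

    # tiles whose estimate deviates strongly from the image-wide median are candidate tampered regions
    # mask = np.abs(sigmas - np.median(sigmas)) > 3.0 * sigmas.std()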
Mobile-to-mobile energy replenishment in mission-critical robotic sensor networks Recently, much research effort has been devoted to employing mobile chargers for energy replenishment of the robots in robotic sensor networks. Observing the discrepancy between the charging latency of robots and charger travel distance, we propose a novel tree-based charging schedule for the charger, which minimizes its travel distance without causing the robot energy depletion. We analytically evaluate its performance and show its closeness to the optimal solutions. Furthermore, through a queue-based approach, we provide theoretical guidance on the setting of the remaining energy threshold at which the robots request energy replenishment. This guided setting guarantees the feasibility of the tree-based schedule to return a depletion-free charging schedule. The performance of the tree-based charging schedule is evaluated through extensive simulations. The results show that the charger travel distance can be reduced by around 20%, when compared with the schedule that only considers the robot charging latency.
Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network. In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (including data from 63 patients with 34,281 events) and testing (including data from 19 patients with 8571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F-score of 0.99 were attained with the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
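A compact PyTorch sketch of this kind of model is shown below: six 1D convolution blocks, each with ReLU, max pooling and dropout, followed by a linear classifier. The channel widths, kernel size and the 6000-sample input length (e.g., one minute of ECG at 100 Hz) are assumptions, not the paper's exact configuration.

    import torch
    import torch.nn as nn

    class OSAConvNet(nn.Module):
        def __init__(self, in_len=6000, n_classes=2):
            super().__init__()
            chans = [1, 16, 32, 64, 64, 128, 128]        # six conv blocks; widths are assumed
            blocks = []
            for c_in, c_out in zip(chans[:-1], chans[1:]):
                blocks += [nn.Conv1d(c_in, c_out, kernel_size=5, padding=2),
                           nn.ReLU(),
                           nn.MaxPool1d(2),
                           nn.Dropout(0.2)]
            self.features = nn.Sequential(*blocks)
            self.classifier = nn.Linear(chans[-1] * (in_len // 2 ** 6), n_classes)

        def forward(self, x):                            # x: (batch, 1, in_len)
            z = self.features(x)
            return self.classifier(z.flatten(1))

    model = OSAConvNet()
    logits = model(torch.randn(8, 1, 6000))              # a batch of one-minute ECG segments
    print(logits.shape)                                  # torch.Size([8, 2])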
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores: 1.053124, 0.034172, 0.034172, 0.034172, 0.034172, 0.012032, 0.004057, 0, 0, 0, 0, 0, 0, 0
Inferring fine-grained transport modes from mobile phone cellular signaling data Due to the ubiquity of mobile phones, mobile phone network data (e.g., Call Detail Records, CDR; and cellular signaling data, CSD), which are collected by mobile telecommunication operators for maintenance purposes, allow us to potentially study travel behaviors of a high percentage of the whole population, with full temporal coverage at a comparatively low cost. However, extracting mobility information such as transport modes from these data is very challenging, due to their low spatial accuracy and infrequent/irregular temporal characteristics. Existing studies relying on mobile phone network data mostly employed simple rule-based methods with geographic data, and focused on easy-to-detect transport modes (e.g., train and subway) or coarse-grained modes (e.g., public versus private transport). Meanwhile, due to the lack of ground truth data, evaluation of these methods was not reported, or only for aggregate data, and it is thus unclear how well the existing methods can detect modes of individual trips. This article proposes two supervised methods - one combining rule-based heuristics (RBH) with random forest (RF), and the other combining RBH with a fuzzy logic system - and a third, unsupervised method with RBH and k-medoids clustering, to detect fine-grained transport modes from CSD, particularly subway, train, tram, bike, car, and walk. Evaluation with a labeled ground truth dataset shows that the best performing method is the hybrid one with RBH and RF, where a classification accuracy of 73% is achieved when differentiating these modes. To our knowledge, this is the first study that distinguishes fine-grained transport modes in CSD and validates results with ground truth data. This study may thus inform future CSD-based applications in areas such as intelligent transport systems, urban/transport planning, and smart cities.
Higher-order SVD analysis for crowd density estimation This paper proposes a new method to estimate the crowd density based on the combination of higher-order singular value decomposition (HOSVD) and support vector machine (SVM). We first construct a higher-order tensor with all the images in the training set, and apply HOSVD to obtain a small set of orthonormal basis tensors that can span the principal subspace for all the training images. The coordinate, which best describes an image under this set of orthonormal basis tensors, is computed as the density character vector. Furthermore, a multi-class SVM classifier is designed to classify the extracted density character vectors into different density levels. Compared with traditional methods, we can make significant improvements to crowd density estimation. The experimental results show that the accuracy of our method achieves 96.33%, in which the misclassified images are all concentrated in their neighboring categories.
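The HOSVD step can be sketched in a few lines of numpy: stack the training images into a third-order tensor, take the leading left singular vectors of each mode unfolding as the orthonormal basis, and project a new image onto the two spatial bases to obtain the density character vector. The retained ranks below are arbitrary and the SVM stage is omitted.

    import numpy as np

    def unfold(tensor, mode):
        """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
        return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

    def hosvd_bases(tensor, ranks):
        """Leading left singular vectors of each mode unfolding (truncated HOSVD)."""
        bases = []
        for mode, r in enumerate(ranks):
            u, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
            bases.append(u[:, :r])
        return bases

    def density_feature(image, bases):
        """Project one image onto the two spatial mode bases; the core is the feature vector."""
        u_rows, u_cols = bases[0], bases[1]
        return (u_rows.T @ image @ u_cols).ravel()

    train = np.random.rand(64, 48, 200)                  # (height, width, n_training_images)
    bases = hosvd_bases(train, ranks=(10, 10, 20))
    feature = density_feature(train[:, :, 0], bases)     # would be fed to a multi-class SVM
    print(feature.shape)                                 # (100,)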
Crowd density analysis using subspace learning on local binary pattern Crowd density analysis is a crucial component in visual surveillance for security monitoring. This paper proposes a novel approach for crowd density estimation. The main contribution of this paper is two-fold: First, we propose to estimate crowd density at patch level, where the size of each patch varies in such way to compensate the effects of perspective distortions; second, instead of using raw features to represent each patch sample, we propose to learn a discriminant subspace of the high-dimensional Local Binary Pattern (LBP) raw feature vector where samples of different crowd density are optimally separated. The effectiveness of the proposed algorithm is evaluated on PETS dataset, and the results show that effective dimensionality reduction (DR) techniques significantly enhance the classification accuracy. The performance of the proposed framework is also compared to other frequently used features in crowd density estimation. Our proposed algorithm outperforms the state-of-the-art methods with a significant margin.
An Indoor Pedestrian Positioning Method Using HMM with a Fuzzy Pattern Recognition Algorithm in a WLAN Fingerprint System. With the rapid development of smartphones and wireless networks, indoor location-based services have become more and more prevalent. Due to the sophisticated propagation of radio signals, the Received Signal Strength Indicator (RSSI) shows a significant variation during pedestrian walking, which introduces critical errors in deterministic indoor positioning. To solve this problem, we present a novel method to improve the indoor pedestrian positioning accuracy by embedding a fuzzy pattern recognition algorithm into a Hidden Markov Model. The fuzzy pattern recognition algorithm follows the rule that the RSSI fading has a positive correlation to the distance between the measuring point and the AP location even during a dynamic positioning measurement. Through this algorithm, we use the RSSI variation trend to replace the specific RSSI value to achieve a fuzzy positioning. The transition probability of the Hidden Markov Model is trained by the fuzzy pattern recognition algorithm with pedestrian trajectories. Using the Viterbi algorithm with the trained model, we can obtain a set of hidden location states. In our experiments, we demonstrate that, compared with the deterministic pattern matching algorithm, our method can greatly improve the positioning accuracy and shows robust environmental adaptability.
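Once the transition matrix has been trained, the decoding step is standard Viterbi; the sketch below shows it in numpy with placeholder emission probabilities standing in for the fuzzy RSSI-trend matching.

    import numpy as np

    def viterbi(log_pi, log_A, log_B):
        """log_pi: (S,) initial, log_A: (S, S) transition, log_B: (T, S) emission log-probabilities.
        Returns the most likely sequence of hidden location states."""
        T, S = log_B.shape
        delta = log_pi + log_B[0]
        back = np.zeros((T, S), dtype=int)
        for t in range(1, T):
            scores = delta[:, None] + log_A      # scores[i, j]: best path ending in i, then i -> j
            back[t] = scores.argmax(axis=0)
            delta = scores.max(axis=0) + log_B[t]
        path = [int(delta.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    # toy example: 3 candidate grid cells, 4 measurement steps
    rng = np.random.default_rng(1)
    log_pi = np.log(np.full(3, 1 / 3))
    log_A = np.log(np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]]))
    log_B = np.log(rng.dirichlet(np.ones(3), size=4))    # placeholder for the fuzzy matching scores
    print(viterbi(log_pi, log_A, log_B))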
A spatio‐temporal ensemble method for large‐scale traffic state prediction How to effectively ensemble multiple models while leveraging the spatio‐temporal information is a challenging but practical problem. However, there is no existing ensemble method explicitly designed for spatio‐temporal data. In this paper, a fully convolutional model based on semantic segmentation technology is proposed, termed as spatio‐temporal ensemble net. The proposed method is suitable for grid‐based spatio‐temporal prediction in dense urban areas. Experiments demonstrate that through spatio‐temporal ensemble net, multiple traffic state prediction base models can be combined to improve the prediction accuracy.
An aggregation approach to short-term traffic flow prediction In this paper, an aggregation approach is proposed for traffic flow prediction that is based on the moving average (MA), exponential smoothing (ES), autoregressive MA (ARIMA), and neural network (NN) models. The aggregation approach assembles information from relevant time series. The source time series is the traffic flow volume that is collected 24 h/day over several years. The three relevant time series are a weekly similarity time series, a daily similarity time series, and an hourly time series, which can be directly generated from the source time series. The MA, ES, and ARIMA models are selected to give predictions of the three relevant time series. The predictions that result from the different models are used as the basis of the NN in the aggregation stage. The output of the trained NN serves as the final prediction. To assess the performance of the different models, the naïve, ARIMA, nonparametric regression, NN, and data aggregation (DA) models are applied to the prediction of a real vehicle traffic flow, from which data have been collected at a data-collection point that is located on National Highway 107, Guangzhou, Guangdong, China. The outcome suggests that the DA model obtains a more accurate forecast than any individual model alone. The aggregation strategy can offer substantial benefits in terms of improving operational forecasting.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable.
Distributed Representations, Simple Recurrent Networks, And Grammatical Structure In this paper three problems for a connectionist account of language are considered: 1. What is the nature of linguistic representations? 2. How can complex structural relationships such as constituent structure be represented? 3. How can the apparently open-ended nature of language be accommodated by a fixed-resource system? Using a prediction task, a simple recurrent network (SRN) is trained on multiclausal sentences which contain multiply-embedded relative clauses. Principal component analysis of the hidden unit activation patterns reveals that the network solves the task by developing complex distributed representations which encode the relevant grammatical relations and hierarchical constituent structure. Differences between the SRN state representations and the more traditional pushdown store are discussed in the final section.
Social navigation: techniques for building more usable systems
Knowledge harvesting in the big-data era The proliferation of knowledge-sharing communities such as Wikipedia and the progress in scalable information extraction from Web and text sources have enabled the automatic construction of very large knowledge bases. Endeavors of this kind include projects such as DBpedia, Freebase, KnowItAll, ReadTheWeb, and YAGO. These projects provide automatically constructed knowledge bases of facts about named entities, their semantic classes, and their mutual relationships. They contain millions of entities and hundreds of millions of facts about them. Such world knowledge in turn enables cognitive applications and knowledge-centric services like disambiguating natural-language text, semantic search for entities and relations in Web and enterprise data, and entity-oriented analytics over unstructured contents. Prominent examples of how knowledge bases can be harnessed include the Google Knowledge Graph and the IBM Watson question answering system. This tutorial presents state-of-the-art methods, recent advances, research opportunities, and open challenges along this avenue of knowledge harvesting and its applications. Particular emphasis will be on the twofold role of knowledge bases for big-data analytics: using scalable distributed algorithms for harvesting knowledge from Web and text sources, and leveraging entity-centric knowledge for deeper interpretation of and better intelligence with Big Data.
Labels and event processes in the Asbestos operating system Asbestos, a new operating system, provides novel labeling and isolation mechanisms that help contain the effects of exploitable software flaws. Applications can express a wide range of policies with Asbestos's kernel-enforced labels, including controls on interprocess communication and system-wide information flow. A new event process abstraction defines lightweight, isolated contexts within a single process, allowing one process to act on behalf of multiple users while preventing it from leaking any single user's data to others. A Web server demonstration application uses these primitives to isolate private user data. Since the untrusted workers that respond to client requests are constrained by labels, exploited workers cannot directly expose user data except as allowed by application policy. The server application requires 1.4 memory pages per user for up to 145,000 users and achieves connection rates similar to Apache, demonstrating that additional security can come at an acceptable cost.
Beamforming for MISO Interference Channels with QoS and RF Energy Transfer We consider a multiuser multiple-input single-output interference channel where the receivers are characterized by both quality-of-service (QoS) and radio-frequency (RF) energy harvesting (EH) constraints. We consider the power splitting RF-EH technique where each receiver divides the received signal into two parts a) for information decoding and b) for battery charging. The minimum required power that supports both the QoS and the RF-EH constraints is formulated as an optimization problem that incorporates the transmitted power and the beamforming design at each transmitter as well as the power splitting ratio at each receiver. We consider both the cases of fixed beamforming and when the beamforming design is incorporated into the optimization problem. For fixed beamforming we study three standard beamforming schemes, the zero-forcing (ZF), the regularized zero-forcing (RZF) and the maximum ratio transmission (MRT); a hybrid scheme, MRT-ZF, comprised of a linear combination of MRT and ZF beamforming is also examined. The optimal solution for ZF beamforming is derived in closed-form, while optimization algorithms based on second-order cone programming are developed for MRT, RZF and MRT-ZF beamforming to solve the problem. In addition, the joint-optimization of beamforming and power allocation is studied using semidefinite programming (SDP) with the aid of rank relaxation.
Inter-class sparsity based discriminative least square regression Least square regression is a very popular supervised classification method. However, two main issues greatly limit its performance. The first one is that it only focuses on fitting the input features to the corresponding output labels while ignoring the correlations among samples. The second one is that the used label matrix, i.e., zero–one label matrix is inappropriate for classification. To solve these problems and improve the performance, this paper presents a novel method, i.e., inter-class sparsity based discriminative least square regression (ICS_DLSR), for multi-class classification. Different from other methods, the proposed method pursues that the transformed samples have a common sparsity structure in each class. For this goal, an inter-class sparsity constraint is introduced to the least square regression model such that the margins of samples from the same class can be greatly reduced while those of samples from different classes can be enlarged. In addition, an error term with row-sparsity constraint is introduced to relax the strict zero–one label matrix, which allows the method to be more flexible in learning the discriminative transformation matrix. These factors encourage the method to learn a more compact and discriminative transformation for regression and thus has the potential to perform better than other methods. Extensive experimental results show that the proposed method achieves the best performance in comparison with other methods for multi-class classification.
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.016667, 0, 0, 0, 0, 0, 0, 0, 0
Tag-Ordering Polling Protocols in RFID Systems Future RFID technologies will go far beyond today's widely used passive tags. Battery-powered active tags are likely to gain more popularity due to their long operational ranges and richer on-tag resources. With integrated sensors, these tags can provide not only static identification numbers but also dynamic, real-time information such as sensor readings. This paper studies a general problem of how to design efficient polling protocols to collect such real-time information from a subset of tags in a large RFID system. We show that the standard, straightforward polling design is not energy-efficient because each tag has to continuously monitor the wireless channel and receive tag IDs, which is energy-consuming. Existing work is able to cut the amount of data each tag has to receive by half through a coding design. In this paper, we propose a tag-ordering polling protocol (TOP) that can reduce per-tag energy consumption by more than an order of magnitude. We also reveal an energy-time tradeoff in the protocol design: per-tag energy consumption can be reduced further at the expense of longer execution time of the protocol. We then apply partitioned Bloom filters to enhance the performance of TOP, such that it can achieve much better energy efficiency without degradation in protocol execution time. Finally, we show how to configure the new protocols for time-constrained energy minimization.
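The partitioned Bloom filter used in the enhanced protocol can be illustrated with a short sketch: the bit array is split into k equal segments and each hash function addresses only its own segment, so a tag can test membership with k cheap lookups. The sizing, hash construction and tag-ID format below are illustrative, not the protocol's broadcast layout.

    import hashlib

    class PartitionedBloomFilter:
        """k hash functions, each addressing its own equal-sized segment of the bit array."""
        def __init__(self, bits_per_segment=128, n_hashes=4):
            self.m = bits_per_segment
            self.k = n_hashes
            self.segments = [0] * n_hashes               # each segment stored as an integer bitmap

        def _positions(self, item: bytes):
            for i in range(self.k):
                digest = hashlib.sha256(bytes([i]) + item).digest()
                yield i, int.from_bytes(digest, "big") % self.m

        def add(self, item: bytes):
            for seg, pos in self._positions(item):
                self.segments[seg] |= 1 << pos

        def __contains__(self, item: bytes):
            return all(self.segments[seg] >> pos & 1 for seg, pos in self._positions(item))

    # the reader encodes the IDs it wants to poll; each tag checks membership locally
    wanted = [b"tag-0001", b"tag-0042", b"tag-1337"]
    bf = PartitionedBloomFilter()
    for tag_id in wanted:
        bf.add(tag_id)
    print(b"tag-0042" in bf, b"tag-9999" in bf)          # True, (almost certainly) False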
STEP: A Time-Efficient Tag Searching Protocol in Large RFID Systems The Radio Frequency IDentification (RFID) technology is greatly revolutionizing applications such as warehouse management and inventory control in retail industry. In large RFID systems, an important and practical issue is tag searching: Given a particular set of tags called wanted tags, tag searching aims to determine which of them are currently present in the system and which are not. As an RFID system usually contains a large number of tags, the intuitive solution that collects IDs of all the tags in the system and compares them with the wanted tag IDs to obtain the result is highly time inefficient. In this paper, we design a novel technique called testing slot, with which a reader can quickly figure out which wanted tags are absent from its interrogation region without tag ID transmissions. The testing slot technique thus greatly reduces transmission overhead during the searching process. Based on this technique, we propose two protocols to perform time-efficient tag searching in practical large RFID systems containing multiple readers. In our protocols, each reader first employs the testing slot technique to obtain its local searching result by iteratively eliminating wanted tags that are absent from its interrogation region. The local searching results of readers are then combined to form the final searching result. The proposed protocols outperform existing solutions in both time efficiency and searching precision. Simulation results show that, compared with the state-of-the-art solution, our best protocol reduces execution time by up to 60 percent, meanwhile promotes the searching precision by nearly an order of magnitude.
Finding Needles in a Haystack: Missing Tag Detection in Large RFID Systems. Radio frequency identification technology has been widely used in missing tag detection to reduce and avoid inventory shrinkage. In this application, promptly finding out the missing event is of paramount importance. However, the existing missing tag detection protocols cannot efficiently handle the presence of a large number of unexpected tags whose IDs are not known to the reader, which shackles...
Efficient and Reliable Missing Tag Identification for Large-Scale RFID Systems With Unknown Tags. Radio frequency identification (RFID), which promotes the rapid development of Internet of Things (IoT), has been an emerging technology and widely deployed in various applications such as warehouse management, supply chain management, and social networks. In such applications, objects can be efficiently managed by attaching them with low-cost RFID tags and carefully monitoring them. The missing o...
An Efficient Bit-Detecting Protocol for Continuous Tag Recognition in Mobile RFID Systems. In a mobile RFID system, a large number of tags move in and out of the system continuously, so that the reader has very limited time to recognize all the tags. As a result, the effective and efficient identification of tags in mobile environments is a more challenging problem compared to conventional static RFID systems. In this paper, we propose an efficient bit-detecting (EBD) protocol to accelerate the reading process of large-scale mobile RFID systems. In these systems, some previously recognized tags, i.e., known tags, may stay in the reader's reading range for two consecutive reading cycles, and some unknown tags may newly participate in the current reading cycle. In the proposed EBD protocol, a new bit monitoring method is proposed to detect the presence of known tags using a small number of slots, and to retrieve their IDs from the back-end database. Next, an M-ary bit-detecting tree recognition method is proposed to rapidly recognize unknown tags without generating any idle slots. This new protocol is shown to perform better than existing methods reported in the literature. Both theoretic and simulation results are present to demonstrate that the proposed protocol is superior to existing protocols in terms of lower time cost.
From M-Ary Query to Bit Query: A New Strategy for Efficient Large-Scale RFID Identification The tag collision avoidance has been viewed as one of the most important research problems in RFID communications and bit tracking technology has been widely embedded in query tree (QT) based algorithms to tackle such challenge. Existing solutions show further opportunity to greatly improve the reading performance because collision queries and empty queries are not fully explored. In this paper, a bit query (BQ) strategy based M-ary query tree protocol (BQMT) is presented, which can not only eliminate idle queries but also separate collided tags into many small subsets and make full use of the collided bits. To further optimize the reading performance, a modified dual prefixes matching (MDPM) mechanism is presented to allow multiple tags to respond in the same slot and thus significantly reduce the number of queries. Theoretical analysis and simulations are supplemented to validate the effectiveness of the proposed BQMT and MDPM, which outperform the existing QT-based algorithms. Also, the BQMT and MDPM can be combined to BQ-MDPM to improve the reading performance in system efficiency, total identification time, communication complexity and average energy cost.
Efficient Unknown Tag Identification Protocols in Large-Scale RFID Systems Owing to its attractive features such as fast identification and relatively long interrogating range over the classical barcode systems, radio-frequency identification (RFID) technology possesses a promising prospect in many practical applications such as inventory control and supply chain management. However, unknown tags appear in RFID systems when the tagged objects are misplaced or unregistered tagged objects are moved in, which often causes huge economic losses. This paper addresses an important and challenging problem of unknown tag identification in large-scale RFID systems. The existing protocols leverage the Aloha-like schemes to distinguish the unknown tags from known tags at the slot level, which are of low time-efficiency, and thus can hardly satisfy the delay-sensitive applications. To fill in this gap, two filtering-based protocols (at the bit level) are proposed in this paper to address the problem of unknown tag identification efficiently. Theoretical analysis of the protocol parameters is performed to minimize the execution time of the proposed protocols. Extensive simulation experiments are conducted to evaluate the performance of the protocols. The results demonstrate that the proposed protocols significantly outperform the currently most promising protocols.
Wireless Body Area Networks: A Survey Recent developments and technological advancements in wireless communication, MicroElectroMechanical Systems (MEMS) technology and integrated circuits has enabled low-power, intelligent, miniaturized, invasive/non-invasive micro and nano-technology sensor nodes strategically placed in or around the human body to be used in various applications, such as personal health monitoring. This exciting new area of research is called Wireless Body Area Networks (WBANs) and leverages the emerging IEEE 802.15.6 and IEEE 802.15.4j standards, specifically standardized for medical WBANs. The aim of WBANs is to simplify and improve speed, accuracy, and reliability of communication of sensors/actuators within, on, and in the immediate proximity of a human body. The vast scope of challenges associated with WBANs has led to numerous publications. In this paper, we survey the current state-of-art of WBANs based on the latest standards and publications. Open issues and challenges within each area are also explored as a source of inspiration towards future developments in WBANs.
LMM: latency-aware micro-service mashup in mobile edge computing environment Internet of Things (IoT) applications introduce a set of stringent requirements (e.g., low latency, high bandwidth) to network and computing paradigm. 5G networks are faced with great challenges for supporting IoT services. The centralized cloud computing paradigm also becomes inefficient for those stringent requirements. Only extending spectrum resources cannot solve the problem effectively. Mobile edge computing offers an IT service environment at the Radio Access Network edge and presents great opportunities for the development of IoT applications. With the capability to reduce latency and offer an improved user experience, mobile edge computing becomes a key technology toward 5G. To achieve abundant sharing, complex IoT applications have been implemented as a set of lightweight micro-services that are distributed among containers over the mobile edge network. How to produce the optimal collocation of suitable micro-service for an application in mobile edge computing environment is an important issue that should be addressed. To address this issue, we propose a latency-aware micro-service mashup approach in this paper. Firstly, the problem is formulated into an integer nonlinear programming. Then, we prove the NP-hardness of the problem by reducing it into the delay constrained least cost problem. Finally, we propose an approximation latency-aware micro-service mashup approach to solve the problem. Experiment results show that the proposed approach achieves a substantial reduction in network resource consumption while still ensuring the latency constraint.
Supervisory control of fuzzy discrete event systems: a formal approach. Fuzzy discrete event systems (DESs) were proposed recently by Lin and Ying [19], which may better cope with the real-world problems of fuzziness, impreciseness, and subjectivity such as those in biomedicine. As a continuation of [19], in this paper, we further develop fuzzy DESs by dealing with supervisory control of fuzzy DESs. More specifically: 1) we reformulate the parallel composition of crisp DESs, and then define the parallel composition of fuzzy DESs that is equivalent to that in [19]. Max-product and max-min automata for modeling fuzzy DESs are considered, 2) we deal with a number of fundamental problems regarding supervisory control of fuzzy DESs, particularly demonstrate controllability theorem and nonblocking controllability theorem of fuzzy DESs, and thus, present the conditions for the existence of supervisors in fuzzy DESs; 3) we analyze the complexity for presenting a uniform criterion to test the fuzzy controllability condition of fuzzy DESs modeled by max-product automata; in particular, we present in detail a general computing method for checking whether or not the fuzzy controllability condition holds, if max-min automata are used to model fuzzy DESs, and by means of this method we can search for all possible fuzzy states reachable from initial fuzzy state in max-min automata. Also, we introduce the fuzzy n-controllability condition for some practical problems, and 4) a number of examples serving to illustrate the applications of the derived results and methods are described; some basic properties related to supervisory control of fuzzy DESs are investigated. To conclude, some related issues are raised for further consideration.
A Model for Understanding How Virtual Reality Aids Complex Conceptual Learning Designers and evaluators of immersive virtual reality systems have many ideas concerning how virtual reality can facilitate learning. However, we have little information concerning which of virtual reality's features provide the most leverage for enhancing understanding or how to customize those affordances for different learning environments. In part, this reflects the truly complex nature of learning. Features of a learning environment do not act in isolation; other factors such as the concepts or skills to be learned, individual characteristics, the learning experience, and the interaction experience all play a role in shaping the learning process and its outcomes. Through Project Science Space, we have been trying to identify, use, and evaluate immersive virtual reality's affordances as a means to facilitate the mastery of complex, abstract concepts. In doing so, we are beginning to understand the interplay between virtual reality's features and other important factors in shaping the learning process and learning outcomes for this type of material. In this paper, we present a general model that describes how we think these factors work together and discuss some of the lessons we are learning about virtual reality's affordances in the context of this model for complex conceptual learning.
A Framework of Joint Mobile Energy Replenishment and Data Gathering in Wireless Rechargeable Sensor Networks Recent years have witnessed the rapid development and proliferation of techniques on improving energy efficiency for wireless sensor networks. Although these techniques can relieve the energy constraint on wireless sensors to some extent, the lifetime of wireless sensor networks is still limited by sensor batteries. Recent studies have shown that energy rechargeable sensors have the potential to provide perpetual network operations by capturing renewable energy from external environments. However, the low output of energy capturing devices can only provide intermittent recharging opportunities to support low-rate data services due to spatial-temporal, geographical or environmental factors. To provide steady and high recharging rates and achieve energy efficient data gathering from sensors, in this paper, we propose to utilize mobility for joint energy replenishment and data gathering. In particular, a multi-functional mobile entity, called SenCar in this paper, is employed, which serves not only as a mobile data collector that roams over the field to gather data via short-range communication but also as an energy transporter that charges static sensors on its migration tour via wireless energy transmissions. Taking advantage of SenCar's controlled mobility, we focus on the joint optimization of effective energy charging and high-performance data collections. We first study this problem in general networks with random topologies. We give a two-step approach for the joint design. In the first step, the locations of a subset of sensors are periodically selected as anchor points, where the SenCar will sequentially visit to charge the sensors at these locations and gather data from nearby sensors in a multi-hop fashion. To achieve a desirable balance between energy replenishment amount and data gathering latency, we provide a selection algorithm to search for a maximum number of anchor points where sensors hold the least battery energy, and meanwhile by visiting them, the tour length of the SenCar is no more than a threshold. In the second step, we consider data gathering performance when the SenCar migrates among these anchor points. We formulate the problem into a network utility maximization problem and propose a distributed algorithm to adjust data rates at which sensors send buffered data to the SenCar, link scheduling and flow routing so as to adapt to the up-to-date energy replenishing status of sensors. Besides general networks, we also study a special scenario where sensors are regularly deployed. For this case we can provide a simplified solution of lower complexity by exploiting the symmetry of the topology. Finally, we validate the effectiveness of our approaches by extensive numerical results, which show that our solutions can achieve perpetual network operations and provide high network utility.
An indoor localization solution using Bluetooth RSSI and multiple sensors on a smartphone. In this paper, we propose an indoor positioning system using a Bluetooth receiver, an accelerometer, a magnetic field sensor, and a barometer on a smartphone. The Bluetooth receiver is used to estimate distances from beacons. The accelerometer and magnetic field sensor are used to trace the movement of moving people in the given space. The horizontal location of the person is determined by received signal strength indications (RSSIs) and the traced movement. The barometer is used to measure the vertical position where a person is located. By combining RSSIs, the traced movement, and the vertical position, the proposed system estimates the indoor position of moving people. In experiments, the proposed approach showed excellent performance in localization with an overall error of 4.8%.
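The RSSI-to-distance step typically relies on a log-distance path-loss model, as in the one-function sketch below; the reference power at one metre and the path-loss exponent are environment-dependent values that would be calibrated in practice rather than the fixed numbers assumed here.

    def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.2):
        """Log-distance model: RSSI = TxPower_1m - 10 * n * log10(d), solved for d (metres)."""
        return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

    print(round(rssi_to_distance(-70.0), 2))             # about 3.16 m with the assumed parameters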
A Hierarchical Architecture Using Biased Min-Consensus for USV Path Planning This paper proposes a hierarchical architecture using the biased min-consensus (BMC) method, to solve the path planning problem of unmanned surface vessel (USV). We take the fixed-point monitoring mission as an example, where a series of intermediate monitoring points should be visited once by USV. The whole framework incorporates the low-level layer planning the standard path between any two intermediate points, and the high-level fashion determining their visiting sequence. First, the optimal standard path in terms of voyage time and risk measure is planned by the BMC protocol, given that the corresponding graph is constructed with node state and edge weight. The USV will avoid obstacles or keep a certain distance safely, and arrive at the target point quickly. It is proven theoretically that the state of the graph will converge to be stable after finite iterations, i.e., the optimal solution can be found by BMC with low calculation complexity. Second, by incorporating the constraint of intermediate points, their visiting sequence is optimized by BMC again with the reconstruction of a new virtual graph based on the former planned results. The extensive simulation results in various scenarios also validate the feasibility and effectiveness of our method for autonomous navigation.
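The flavour of the biased min-consensus update can be conveyed with a small sketch on a weighted graph: target nodes are pinned to a zero bias and every other node repeatedly takes the minimum of its neighbours' values plus the edge weights, so the state converges to shortest-path costs after finitely many iterations. The graph, weights and convergence test below are illustrative and ignore the obstacle/risk weighting used for USV planning.

    import math

    def biased_min_consensus(neighbors, weights, targets, max_iter=10_000):
        """neighbors[i]: iterable of j; weights[(i, j)]: edge weight; targets: nodes pinned to bias 0."""
        x = {i: (0.0 if i in targets else math.inf) for i in neighbors}
        for _ in range(max_iter):
            new_x = {}
            for i in neighbors:
                best = min((x[j] + weights[i, j] for j in neighbors[i]), default=math.inf)
                new_x[i] = 0.0 if i in targets else min(x[i], best)
            if new_x == x:                               # converged: x holds shortest-path costs
                return new_x
            x = new_x
        return x

    # 4-node line graph 0 -- 1 -- 2 -- 3 with the target (e.g., next monitoring point) at node 3
    neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    weights = {(0, 1): 1.0, (1, 0): 1.0, (1, 2): 2.0, (2, 1): 2.0, (2, 3): 1.0, (3, 2): 1.0}
    print(biased_min_consensus(neighbors, weights, targets={3}))   # {0: 4.0, 1: 3.0, 2: 1.0, 3: 0.0}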
Scores: 1.11, 0.11, 0.11, 0.1, 0.1, 0.1, 0.03, 0, 0, 0, 0, 0, 0, 0
A Survey on Explainable Anomaly Detection for Industrial Internet of Things Anomaly detection techniques in the Industrial Internet of Things (IIoT) are driving traditional industries towards an unprecedented level of efficiency, productivity and performance. They are typically developed based on supervised and unsupervised machine learning models. However, some machine learning models are facing “black box” problems, namely the rationale behind the algorithm is not understandable. Recently, several models on explainable anomaly detection have emerged. The “black box” problems have been studied by using such models. But few works focus on applications in the IIoT field, and there is no related review of explainable anomaly detection techniques. In this survey, we provide an overview of explainable anomaly detection techniques in IIoT. We propose a new taxonomy to classify the state-of-the-art explainable anomaly detection techniques into two categories, namely intrinsic based explainable anomaly detection and explainer based explainable anomaly detection. We further discuss the applications of explainable anomaly detection across various IIoT fields. Finally, we suggest future study options in this rapidly expanding subject.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
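The two ingredients of the metric, clipped (modified) n-gram precision and a brevity penalty, fit in a short sentence-level sketch; smoothing and corpus-level aggregation, which the full metric uses, are omitted here.

    import math
    from collections import Counter

    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    def bleu(candidate, references, max_n=4):
        precisions = []
        for n in range(1, max_n + 1):
            cand = ngrams(candidate, n)
            if not cand:
                return 0.0
            max_ref = Counter()
            for ref in references:
                for gram, count in ngrams(ref, n).items():
                    max_ref[gram] = max(max_ref[gram], count)
            clipped = sum(min(count, max_ref[gram]) for gram, count in cand.items())
            precisions.append(clipped / sum(cand.values()))
        if min(precisions) == 0:
            return 0.0
        # effective reference length: closest to the candidate length (ties go to the shorter one)
        ref_len = min((len(r) for r in references), key=lambda L: (abs(L - len(candidate)), L))
        brevity = min(1.0, math.exp(1 - ref_len / len(candidate)))
        return brevity * math.exp(sum(math.log(p) for p in precisions) / max_n)

    cand = "the quick brown fox jumps over the lazy dog".split()
    refs = ["the quick brown fox jumped over the lazy dog".split()]
    print(round(bleu(cand, refs), 3))                    # about 0.597 for this toy pair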
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
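A modern minimal equivalent of the bidirectional architecture is easy to write with PyTorch's built-in recurrent layers; the sketch below uses an LSTM cell, and the feature dimension, hidden size and number of classes are stand-ins rather than the paper's configuration.

    import torch
    import torch.nn as nn

    class BiRNNTagger(nn.Module):
        """Per-frame classifier whose prediction at each step sees both past and future context."""
        def __init__(self, n_features=13, hidden=64, n_classes=40):
            super().__init__()
            self.rnn = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, n_classes)  # forward and backward states concatenated

        def forward(self, x):                            # x: (batch, time, n_features)
            h, _ = self.rnn(x)                           # h: (batch, time, 2 * hidden)
            return self.out(h)                           # per-frame class scores

    model = BiRNNTagger()
    frames = torch.randn(4, 100, 13)                     # e.g., 13 acoustic features per frame
    scores = model(frames)
    print(scores.shape)                                  # torch.Size([4, 100, 40])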
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended to Bob, and non-repudiation of receipt evidences destined to Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with trusted third party (TTP). For the later ones we discuss the evolution of the TTP's involvement and, between others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb for designing the GA operators and selecting GA parameters; instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally rather than probabilistically. Because no crossover rate or mutation rate needs to be selected, the proposed improved GA can be applied to a problem more easily than conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional one and other methods.
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal to develop interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which rise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in specific, in this paper, we enfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidate it based on our comprehensive empirical experience with research in kindergarten settings, both laboratory and real-world contexts, and (iii) provide a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Intelligent Transportation System Security: Impact-Oriented Risk Assessment of in-Vehicle Networks The concept of Intelligent Transportation Systems (ITS) was developed in the 1980s, when then-emerging technologies and cognitive information were incorporated into road infrastructures and vehicles. The ITS infrastructure is critical yet not immune to both physical and cyber threats. Vehicles are cyber-physical systems and a core component of ITS; they can be either a target or a launching point for an attack on the ITS network. Unknown vehicle security vulnerabilities trigger a race between adversaries, who try to exploit the weaknesses, and security experts, who try to mitigate the vulnerabilities. In this study, we identified opportunities for adversaries to take control of the in-vehicle network, which can compromise the safety, privacy, reliability, efficiency, and security of the transportation system. This study contributes in three ways to the literature of ITS security and resiliency. First, we aggregate individual risks that are associated with hacking the in-vehicle network to determine system-level risk. Second, we employ a risk-based model to conduct a qualitative vulnerability-oriented risk assessment. Third, we identify the consequences of hacking the in-vehicle network through a risk-based approach, using an impact-likelihood matrix. The qualitative assessment communicates risk outcomes for policy analysis. The outcome of this study would be of interest and usefulness to policymakers and engineers concerned with the potential vulnerabilities of such critical infrastructures.
Generative Adversarial Networks for Parallel Transportation Systems. Generative Adversarial Networks (GANs) have emerged as a promising and effective mechanism for machine learning due to their recent successful applications. GANs share the same idea of producing, testing, acquiring, and utilizing data as well as knowledge based on artificial systems, computational experiments, and parallel execution of actual and virtual scenarios, as outlined in the theory of parall...
Deep Multi-Modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges Recent advancements in perception for autonomous driving are driven by deep learning. In order to achieve robust and accurate scene understanding, autonomous vehicles are usually equipped with different sensors (e.g. cameras, LiDARs, Radars), and multiple sensing modalities can be fused to exploit their complementary properties. In this context, many methods have been proposed for deep multi-modal perception problems. However, there is no general guideline for network architecture design, and questions of “what to fuse”, “when to fuse”, and “how to fuse” remain open. This review paper attempts to systematically summarize methodologies and discuss challenges for deep multi-modal object detection and semantic segmentation in autonomous driving. To this end, we first provide an overview of on-board sensors on test vehicles, open datasets, and background information for object detection and semantic segmentation in autonomous driving research. We then summarize the fusion methodologies and discuss challenges and open questions. In the appendix, we provide tables that summarize topics and methods. We also provide an interactive online platform to navigate each reference: https://boschresearch.github.io/multimodalperception/.
Enhanced Object Detection With Deep Convolutional Neural Networks for Advanced Driving Assistance Object detection is a critical problem for advanced driving assistance systems (ADAS). Recently, convolutional neural networks (CNN) achieved large successes on object detection, with performance improvement over traditional approaches, which use hand-engineered features. However, due to the challenging driving environment (e.g., large object scale variation, object occlusion, and bad light conditions), popular CNN detectors do not achieve very good object detection accuracy over the KITTI autonomous driving benchmark dataset. In this paper, we propose three enhancements for CNN-based visual object detection for ADAS. To address the large object scale variation challenge, deconvolution and fusion of CNN feature maps are proposed to add context and deeper features for better object detection at low feature map scales. In addition, soft non-maximal suppression (NMS) is applied across object proposals at different feature scales to address the object occlusion challenge. As the cars and pedestrians have distinct aspect ratio features, we measure their aspect ratio statistics and exploit them to set anchor boxes properly for better object matching and localization. The proposed CNN enhancements are evaluated with various image input sizes by experiments over KITTI dataset. The experimental results demonstrate the effectiveness of the proposed enhancements with good detection performance over KITTI test set.
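Of the three enhancements, the soft-NMS step lends itself to a compact illustration. The sketch below follows the commonly published Gaussian soft-NMS recipe rather than the paper's exact implementation; the decay parameter and score threshold are assumptions.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, boxes given as [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft-NMS: decay, rather than discard, overlapping proposals."""
    boxes, scores = boxes.astype(float).copy(), scores.astype(float).copy()
    keep = []
    while len(scores) > 0:
        i = int(np.argmax(scores))
        keep.append((boxes[i], scores[i]))
        rest = np.delete(np.arange(len(scores)), i)
        ious = iou(boxes[i], boxes[rest])
        scores = scores[rest] * np.exp(-(ious ** 2) / sigma)   # Gaussian decay
        boxes = boxes[rest]
        mask = scores > score_thresh
        boxes, scores = boxes[mask], scores[mask]
    return keep
```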
Traffic Flow Imputation Using Parallel Data and Generative Adversarial Networks Traffic data imputation is critical for both research and applications of intelligent transportation systems. To develop traffic data imputation models with high accuracy, traffic data must be large and diverse, which is costly. An alternative is to use synthetic traffic data, which is cheap and easy to obtain. In this paper, we propose a novel approach using parallel data and generative adversarial networks (GANs) to enhance traffic data imputation. Parallel data is a recently proposed method of using synthetic and real data for data mining and data-driven processes, in which we apply GANs to generate synthetic traffic data. As it is difficult for the standard GAN algorithm to generate time-dependent traffic flow data, we made twofold modifications: 1) using the real data or the corrupted ones instead of random vectors as latent codes to the generator within GANs and 2) introducing a representation loss to measure the discrepancy between the synthetic data and the real data. The experimental results on a real traffic dataset demonstrate that our method can significantly improve the performance of traffic data imputation.
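The two modifications, feeding the corrupted record itself to the generator and adding a representation loss, can be sketched in a few lines of PyTorch. The network sizes, the mask convention, the WGAN-style adversarial term, and the weight lam are assumptions for illustration; the paper's actual architecture is not reproduced here.

```python
import torch
import torch.nn as nn

dim = 24  # e.g., 24 flow readings of one detector for one day (assumed layout)
G = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))
D = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

def generator_loss(x_real, mask, lam=10.0):
    """mask = 1 where the reading is observed, 0 where it is missing."""
    x_corrupt = x_real * mask                 # corrupted record used as the latent code
    x_fake = G(x_corrupt)
    adv = -torch.mean(D(x_fake))              # fool the discriminator (WGAN-style, assumed)
    rep = torch.mean(((x_fake - x_real) * mask) ** 2)  # representation loss on observed cells
    return adv + lam * rep

# Toy usage: one mini-batch of traffic records with ~20% missing entries.
x = torch.rand(8, dim)
m = (torch.rand(8, dim) > 0.2).float()
loss = generator_loss(x, m)
loss.backward()
```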
ParaUDA: Invariant Feature Learning With Auxiliary Synthetic Samples for Unsupervised Domain Adaptation Recognizing and locating objects by algorithms are essential and challenging issues for Intelligent Transportation Systems. However, the increasing demand for labeled data hinders the further application of deep learning-based object detection. One of the optimal solutions is to train the target model with an existing dataset and then adapt it to new scenes, namely Unsupervised Domain Adaptation (UDA). However, most existing methods at the pixel level mainly focus on adapting the model from the source domain to the target domain and ignore the essence of UDA, which is to learn domain-invariant features. Meanwhile, almost all methods at the feature level neglect to match conditional distributions while conducting feature alignment between the source and target domains. Considering these problems, this paper proposes ParaUDA, a novel framework for learning invariant representations for UDA at two levels: the pixel level and the feature level. At the pixel level, we adopt CycleGAN to conduct domain transfer and convert the problem of original unsupervised domain adaptation to supervised domain adaptation. At the feature level, we adopt an adversarial adaptation model to learn domain-invariant representations by aligning the distributions of domains between different image pairs with the same mixture distributions. We evaluate our proposed framework in different scenes, from synthetic scenes to real scenes, from normal weather to challenging weather, and from scenes across cameras. The results of all the above experiments show that ParaUDA is effective and robust for adapting object detection models from source scenes to target scenes.
China's 12-Year Quest of Autonomous Vehicular Intelligence: The Intelligent Vehicles Future Challenge Program In this article, we introduce the Intelligent Vehicles Future Challenge of China (IVFC), which has lasted 12 years. Some key features of the tests and a few interesting findings of IVFC are selected and presented. Through the IVFCs held between 2009 and 2020, we gradually established a set of theories, methods, and tools to collect tests' data and efficiently evaluate the performance of autonomous vehicles so that we could learn how to improve both the autonomous vehicles and the testing system itself.
Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
A comparative study of texture measures with classification based on featured distributions This paper evaluates the performance both of some texture measures which have been successfully used in various applications and of some new promising approaches proposed recently. For classification a method based on Kullback discrimination of sample and prototype distributions is used. The classification results for single features with one-dimensional feature value distributions and for pairs of complementary features with two-dimensional distributions are presented
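The Kullback discrimination used for classification can be illustrated with a minimal nearest-prototype rule: a sample's feature-value histogram is assigned to the class whose prototype distribution it diverges from least. The smoothing constant and the toy histograms below are assumptions, not data from the study.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-9):
    """Kullback discrimination of a sample distribution p from a prototype q."""
    p = p / p.sum(); q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def classify(sample_hist, prototypes):
    """Assign the sample to the class whose prototype histogram it diverges from least."""
    return min(prototypes, key=lambda c: kl_divergence(sample_hist, prototypes[c]))

# Toy usage with two texture classes described by feature-value histograms.
protos = {"grass": np.array([5., 20., 40., 20., 5.]),
          "brick": np.array([30., 10., 5., 10., 30.])}
print(classify(np.array([4., 18., 45., 22., 6.]), protos))   # -> "grass"
```

Pairs of complementary features, as discussed in the paper, would use two-dimensional histograms; the rule itself is unchanged.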
Social Perception and Steering for Online Avatars This paper presents work on a new platform for producing realistic group conversation dynamics in shared virtual environments. An avatar, representing users, should perceive the surrounding social environment just as humans would, and use the perceptual information for driving low level reactive behaviors. Unconscious reactions serve as evidence of life, and can also signal social availability and spatial awareness to others. These behaviors get lost when avatar locomotion requires explicit user control. For automating such behaviors we propose a steering layer in the avatars that manages a set of prioritized behaviors executed at different frequencies, which can be activated or deactivated and combined together. This approach gives us enough flexibility to model the group dynamics of social interactions as a set of social norms that activate relevant steering behaviors. A basic set of behaviors is described for conversations, some of which generate a social force field that makes the formation of conversation groups fluidly adapt to external and internal noise, through avatar repositioning and reorientations. The resulting social group behavior appears relatively robust, but perhaps more importantly, it starts to bring a new sense of relevance and continuity to the virtual bodies that often get separated from the ongoing conversation in the chat window.
Node Reclamation and Replacement for Long-Lived Sensor Networks When deployed for long-term tasks, the energy required to support sensor nodes' activities is far more than the energy that can be preloaded in their batteries. No matter how the battery energy is conserved, once the energy is used up, the network life terminates. Therefore, guaranteeing long-term energy supply has persisted as a big challenge. To address this problem, we propose a node reclamation and replacement (NRR) strategy, with which a mobile robot or human labor called mobile repairman (MR) periodically traverses the sensor network, reclaims nodes with low or no power supply, replaces them with fully charged ones, and brings the reclaimed nodes back to an energy station for recharging. To effectively and efficiently realize the strategy, we develop an adaptive rendezvous-based two-tier scheduling scheme (ARTS) to schedule the replacement/reclamation activities of the MR and the duty cycles of nodes. Extensive simulations have been conducted to verify the effectiveness and efficiency of the ARTS scheme.
Haptic feedback for enhancing realism of walking simulations. In this paper, we describe several experiments whose goal is to evaluate the role of plantar vibrotactile feedback in enhancing the realism of walking experiences in multimodal virtual environments. To achieve this goal we built an interactive and a noninteractive multimodal feedback system. While during the use of the interactive system subjects physically walked, during the use of the noninteractive system the locomotion was simulated while subjects were sitting on a chair. In both the configurations subjects were exposed to auditory and audio-visual stimuli presented with and without the haptic feedback. Results of the experiments provide a clear preference toward the simulations enhanced with haptic feedback showing that the haptic channel can lead to more realistic experiences in both interactive and noninteractive configurations. The majority of subjects clearly appreciated the added feedback. However, some subjects found the added feedback unpleasant. This might be due, on one hand, to the limits of the haptic simulation and, on the other hand, to the different individual desire to be involved in the simulations. Our findings can be applied to the context of physical navigation in multimodal virtual environments as well as to enhance the user experience of watching a movie or playing a video game.
Vehicular Sensing Networks in a Smart City: Principles, Technologies and Applications. Given the escalating population across the globe, it has become paramount to construct smart cities, aiming for improving the management of urban flows relying on efficient information and communication technologies (ICT). Vehicular sensing networks (VSNs) play a critical role in maintaining the efficient operation of smart cities. Naturally, there are numerous challenges to be solved before the w...
Dual-objective mixed integer linear program and memetic algorithm for an industrial group scheduling problem Group scheduling problems have attracted much attention owing to their many practical applications. This work proposes a new bi-objective serial-batch group scheduling problem considering the constraints of sequence-dependent setup time, release time, and due time. It is originated from an important industrial process, i.e., wire rod and bar rolling process in steel production systems. Two objecti...
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0.006897, 0, 0, 0, 0, 0, 0
Deployment of the Electric Vehicle Charging Station Considering Existing Competitors The problem of optimal planning of plug-in electric vehicle (PEV) charging stations is studied. Different from many other works, we assume the station investor to be a new private entrant into the market who intends to maximize its own profit in a competitive environment. A modified Huff gravity-based model is adopted to describe the probabilistic patronizing behaviors of PEV drivers. Accordingly, a bi-level optimization model is formulated to decide not only the optimal site and size of the new charging station, but also the retail charging prices in the future operation stage. Based on the specific characteristics of the problem, the operation level sub-problem is reformulated to a convex programming and an efficient solution algorithm is designed for the overall bi-level optimization. Numerical examples of different scales demonstrate the effectiveness of the proposed modeling and computation methods, as well as the importance of considering the competitive effects when planning the charging station.
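The Huff-type patronage probability underlying the planning model can be sketched as follows; the attractiveness measure and the distance-decay exponent are illustrative assumptions, and the paper's specific modification of the Huff model is not reproduced.

```python
import numpy as np

def huff_probabilities(attractiveness, distances, decay=2.0):
    """Probability that a PEV driver patronizes each station under a Huff-type model:
    utility = attractiveness / distance**decay, normalized over all stations."""
    util = np.asarray(attractiveness, float) / np.asarray(distances, float) ** decay
    return util / util.sum()

# Toy usage: a new entrant (index 0) competing with two existing stations.
attract = [8.0, 5.0, 6.0]      # e.g., number of chargers or a price-adjusted score (assumed)
dist_km = [1.2, 0.8, 2.5]
print(huff_probabilities(attract, dist_km))
```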
Computational difficulties of bilevel linear programming We show, using small examples, that two algorithms previously published for the Bilevel Linear Programming problem BLP may fail to find the optimal solution and thus must be considered to be heuris...
Competitive charging station pricing for plug-in electric vehicles This paper considers the problem of charging station pricing and station selection of plug-in electric vehicles (PEVs). Every PEV needs to select a charging station by considering the charging prices, waiting times, and travel distances. Each charging station optimizes its charging price based on the prediction of the PEVs' charging station selection decisions, in an attempt to maximize its profit. To obtain insights of such a highly coupled system, we consider a one-dimensional system with two charging stations and Poisson arriving PEVs. We propose a multi-leader-multi-follower Stackelberg game model, in which the charging stations (leaders) announce their charging prices in Stage I, and the PEVs (followers) make their charging station selections in Stage II. We show that there always exists a unique charging station selection equilibrium in Stage II, and such equilibrium depends on the price difference between the charging stations. We then characterize the sufficient conditions for the existence and uniqueness of the pricing equilibrium in Stage I. Unfortunately, it is hard to compute the pricing equilibrium in closed form. To overcome this challenge, we develop a low-complexity algorithm that efficiently computes the pricing equilibrium and the subgame perfect equilibrium of our Stackelberg game with no information exchange.
Placement of EV Charging Stations - Balancing Benefits Among Multiple Entities. This paper studies the problem of multistage placement of electric vehicle (EV) charging stations with incremental EV penetration rates. A nested logit model is employed to analyze the charging preference of the individual consumer (EV owner) and predict the aggregated charging demand at the charging stations. The EV charging industry is modeled as an oligopoly where the entire market is dominated...
An Analysis of Price Competition in Heterogeneous Electric Vehicle Charging Stations In this paper, we investigate the price competition among electric vehicle charging stations (EVCSs) with renewable power generators (RPGs). Both a large-sized EVCS (L-EVCS) and small-sized EVCSs (S-EVCSs), which have different capacities, are considered. Moreover, the price elasticity of electric vehicles (EVs), the effect of the distance between an EV and the EVCSs, and the impact of the number ...
PEV Fast-Charging Station Sizing and Placement in Coupled Transportation-Distribution Networks Considering Power Line Conditioning Capability The locations and sizes of plug-in electric vehicle fast-charging stations (PEVFCS) can affect traffic flow in the urban transportation network (TN) and operation indices in the electrical energy distribution network (DN). PEVFCSs are supplied by the DNs, generally using power electronic devices. Thus, PEVFCSs could be used as power line conditioners, especially as active filters (for mitigating harmonic pollutions) and reactive power compensators. Accordingly, this paper proposes a mixed-integer linear programming model taking into account the traffic impacts and power line conditioning capabilities of PEVFCSs to determine optimal locations and sizes of PEVFCSs. Various load profile patterns and the variation of charging demand during the planning horizon are included in this model to consider different operation states of DN and TN. The proposed model is implemented in GAMS and applied to two standard test systems. Numerical results are provided for the case studies and various scenarios. The results confirm the ability and efficiency of the proposed model and its superiority to the existing models.
Completely derandomized self-adaptation in evolution strategies. This paper puts forward two useful methods for self-adaptation of the mutation distribution - the concepts of derandomization and cumulation. Principal shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary, normal mutation distributions is equivalent to applying a general, linear problem encoding. The underlying objective of mutative strategy parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigorously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation - utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions usually a speed-up factor of several orders of magnitude is observed. On moderately mis-scaled functions a speed-up factor of three to ten can be expected.
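As a rough, much-reduced sketch of the idea, the following Python code performs weighted recombination and a rank-mu covariance update only; it deliberately omits the cumulation (evolution path) and step-size control that the paper argues are essential, so it should be read as an illustration of covariance adaptation, not as CMA-ES itself. The learning rate c_mu and the population settings are assumptions.

```python
import numpy as np

def simplified_cma(f, x0, sigma=0.5, iters=200, lam=12, seed=0):
    """Reduced covariance-adaptation sketch: sample, select, recombine, rank-mu update."""
    rng = np.random.default_rng(seed)
    n, mu = len(x0), lam // 2
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1)); w /= w.sum()
    mean, C = np.array(x0, float), np.eye(n)
    c_mu = 0.3                                        # assumed learning rate
    for _ in range(iters):
        A = np.linalg.cholesky(C + 1e-12 * np.eye(n))
        z = rng.standard_normal((lam, n))
        X = mean + sigma * z @ A.T                    # sample the population from N(mean, sigma^2 C)
        order = np.argsort([f(x) for x in X])[:mu]    # select the mu best
        y = (X[order] - mean) / sigma
        mean = mean + sigma * (w @ y)                 # weighted recombination
        C = (1 - c_mu) * C + c_mu * (y.T * w) @ y     # rank-mu covariance update
    return mean

# Toy usage on a badly scaled quadratic, the kind of function discussed above.
elli = lambda x: sum((10 ** (3 * i / (len(x) - 1)) * xi) ** 2 for i, xi in enumerate(x))
print(simplified_cma(elli, np.ones(5)))
```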
An online mechanism for multi-unit demand and its application to plug-in hybrid electric vehicle charging We develop an online mechanism for the allocation of an expiring resource to a dynamic agent population. Each agent has a non-increasing marginal valuation function for the resource, and an upper limit on the number of units that can be allocated in any period. We propose two versions of a truthful allocation mechanism. Each modifies the decisions of a greedy online assignment algorithm by sometimes cancelling an allocation of resources. One version makes this modification immediately upon an allocation decision while a second waits until the point at which an agent departs the market. Adopting a prior-free framework, we show that the second approach has better worst-case allocative efficiency and is more scalable. On the other hand, the first approach (with immediate cancellation) may be easier in practice because it does not need to reclaim units previously allocated. We consider an application to recharging plug-in hybrid electric vehicles (PHEVs). Using data from a real-world trial of PHEVs in the UK, we demonstrate higher system performance than a fixed price system, performance comparable with a standard, but non-truthful scheduling heuristic, and the ability to support 50% more vehicles at the same fuel cost than a simple randomized policy.
Blockchain Meets IoT: An Architecture for Scalable Access Management in IoT. The Internet of Things (IoT) is stepping out of its infancy into full maturity and establishing itself as a part of the future Internet. One of the technical challenges of having billions of devices deployed worldwide is the ability to manage them. Although access management technologies exist in IoT, they are based on centralized models which introduce a new variety of technical limitations to ma...
Multi-column Deep Neural Networks for Image Classification Traditional methods of computer vision and machine learning cannot match human performance on tasks such as the recognition of handwritten digits or traffic signs. Our biologically plausible deep artificial neural network architectures can. Small (often minimal) receptive fields of convolutional winner-take-all neurons yield large network depth, resulting in roughly as many sparsely connected neural layers as found in mammals between retina and visual cortex. Only winner neurons are trained. Several deep neural columns become experts on inputs preprocessed in different ways; their predictions are averaged. Graphics cards allow for fast training. On the very competitive MNIST handwriting benchmark, our method is the first to achieve near-human performance. On a traffic sign recognition benchmark it outperforms humans by a factor of two. We also improve the state-of-the-art on a plethora of common image classification benchmarks.
Comment on "On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes" Comparison of generative and discriminative classifiers is an ever-lasting topic. As an important contribution to this topic, based on their theoretical and empirical comparisons between the naïve Bayes classifier and linear logistic regression, Ng and Jordan (NIPS 841---848, 2001) claimed that there exist two distinct regimes of performance between the generative and discriminative classifiers with regard to the training-set size. In this paper, our empirical and simulation studies, as a complement of their work, however, suggest that the existence of the two distinct regimes may not be so reliable. In addition, for real world datasets, so far there is no theoretically correct, general criterion for choosing between the discriminative and the generative approaches to classification of an observation x into a class y; the choice depends on the relative confidence we have in the correctness of the specification of either p(y|x) or p(x, y) for the data. This can be to some extent a demonstration of why Efron (J Am Stat Assoc 70(352):892---898, 1975) and O'Neill (J Am Stat Assoc 75(369):154---160, 1980) prefer normal-based linear discriminant analysis (LDA) when no model mis-specification occurs but other empirical studies may prefer linear logistic regression instead. Furthermore, we suggest that pairing of either LDA assuming a common diagonal covariance matrix (LDA-驴) or the naïve Bayes classifier and linear logistic regression may not be perfect, and hence it may not be reliable for any claim that was derived from the comparison between LDA-驴 or the naïve Bayes classifier and linear logistic regression to be generalised to all generative and discriminative classifiers.
Adaptive dynamic programming and optimal control of nonlinear nonaffine systems. In this paper, a novel optimal control design scheme is proposed for continuous-time nonaffine nonlinear dynamic systems with unknown dynamics by adaptive dynamic programming (ADP). The proposed methodology iteratively updates the control policy online by using the state and input information without identifying the system dynamics. An ADP algorithm is developed, and can be applied to a general class of nonlinear control design problems. The convergence analysis for the designed control scheme is presented, along with rigorous stability analysis for the closed-loop system. The effectiveness of this new algorithm is illustrated by two simulation examples.
Adaptive Fuzzy Control With Prescribed Performance for Block-Triangular-Structured Nonlinear Systems. In this paper, an adaptive fuzzy control method with prescribed performance is proposed for multi-input and multioutput block-triangular-structured nonlinear systems with immeasurable states. Fuzzy logic systems are adopted to identify the unknown nonlinear system functions. Adaptive fuzzy state observers are designed to solve the problem of unmeasured states, and a new observer-based output-feedb...
Intention-detection strategies for upper limb exosuits: model-based myoelectric vs dynamic-based control The cognitive human-robot interaction between an exosuit and its wearer plays a key role in determining both the biomechanical effects of the device on movements and its perceived effectiveness. There is a lack of evidence, however, on the comparative performance of different control methods, implemented on the same device. Here, we compare two different control approaches on the same robotic suit: a model-based myoelectric control (myoprocessor), which estimates the joint torque from the activation of target muscles, and a dynamic-based control that provides support against gravity using an inverse dynamic model. Tested on a cohort of four healthy participants, assistance from the exosuit results in a marked reduction in the effort of muscles working against gravity with both control approaches (peak reduction of 68.6±18.8% for the dynamic arm model and 62.4±25.1% for the myoprocessor), when compared to an unpowered condition. Neither of the two controllers had an effect on the performance of their users in a joint-angle tracking task (peak errors of 15.4° and 16.4° for the dynamic arm model and myoprocessor, respectively, compared to 13.1° in the unpowered condition). However, our results highlight the remarkable ability of the myoprocessor to seamlessly adapt to changing external dynamics.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0, 0, 0, 0, 0, 0, 0, 0
Parallel planning: a new motion planning framework for autonomous driving Motion planning is one of the most significant technologies for autonomous driving. To make motion planning models able to learn from the environment and to deal with emergency situations, a new motion planning framework called as “parallel planning” is proposed in this paper. In order to generate sufficient and various training samples, artificial traffic scenes are firstly constructed based on t...
Multiobjective Optimization Models for Locating Vehicle Inspection Stations Subject to Stochastic Demand, Varying Velocity and Regional Constraints Deciding the optimal location of a transportation facility and automotive service enterprise is an interesting and important issue in the area of facility location allocation (FLA). In practice, some factors, i.e., customer demands, allocations, and locations of customers and facilities, are changing, and thus the problem features uncertainty. To account for this uncertainty, some researchers have addressed the stochastic time and cost issues of FLA. A new FLA research issue arises when decision makers want to minimize the transportation time of customers and their transportation cost while ensuring that customers arrive at their desired destination within some specific time and cost. By taking the vehicle inspection station as a typical automotive service enterprise example, this paper presents a novel stochastic multiobjective optimization approach to address it. This work builds two practical stochastic multiobjective programs subject to stochastic demand, varying velocity, and regional constraints. A hybrid intelligent algorithm integrating stochastic simulation and a multiobjective teaching-learning-based optimization algorithm is proposed to solve the proposed programs. This approach is applied to a real-world location problem of a vehicle inspection station in Fushun, China. The results show that it is able to produce satisfactory Pareto solutions for an actual vehicle inspection station location problem.
Intrinsic dimension estimation: Advances and open problems. •The paper reviews state-of-the-art of the methods of Intrinsic Dimension (ID) Estimation.•The paper defines the properties that an ideal ID estimator should have.•The paper reviews, under the above mentioned framework, the major ID estimation methods underlining their advances and the open problems.
Alignment-Supervised Bidimensional Attention-Based Recursive Autoencoders for Bilingual Phrase Representation. Exploiting semantic interactions between the source and target linguistic items at different levels of granularity is crucial for generating compact vector representations for bilingual phrases. To achieve this, we propose alignment-supervised bidimensional attention-based recursive autoencoders (ABattRAE) in this paper. ABattRAE first individually employs two recursive autoencoders to recover hierarchical tree structures of bilingual phrase, and treats the subphrase covered by each node on the tree as a linguistic item. Unlike previous methods, ABattRAE introduces a bidimensional attention network to measure the semantic matching degree between linguistic items of different languages, which enables our model to integrate information from all nodes by dynamically assigning varying weights to their corresponding embeddings. To ensure the accuracy of the generated attention weights in the attention network, ABattRAE incorporates word alignments as supervision signals to guide the learning procedure. Using the general stochastic gradient descent algorithm, we train our model in an end-to-end fashion, where the semantic similarity of translation equivalents is maximized while the semantic similarity of nontranslation pairs is minimized. Finally, we incorporate a semantic feature based on the learned bilingual phrase representations into a machine translation system for better translation selection. Experimental results on NIST Chinese–English and WMT English–German test sets show that our model achieves substantial improvements of up to 2.86 and 1.09 BLEU points over the baseline, respectively. Extensive in-depth analyses demonstrate the superiority of our model in learning bilingual phrase embeddings.
MOELS: Multiobjective Evolutionary List Scheduling for Cloud Workflows Cloud computing has nowadays become a dominant technology to reduce the computation cost by elastically providing resources to users on a pay-per-use basis. More and more scientific and business applications represented by workflows have been moved or are in active transition to cloud platforms. Therefore, efficient cloud workflow scheduling methods are in high demand. This paper investigates how to simultaneously optimize makespan and economical cost for workflow scheduling in clouds and proposes a multiobjective evolutionary list scheduling (MOELS) algorithm to address it. It embeds the classic list scheduling into a powerful multiobjective evolutionary algorithm (MOEA): a genome is represented by a scheduling sequence and a preference weight and is interpreted to a scheduling solution via a specifically designed list scheduling heuristic, and the genomes in the population are evolved through tailored genetic operators. The simulation experiments with the real-world data show that MOELS outperforms some state-of-the-art methods as it can always achieve a higher hypervolume (HV) value. Note to Practitioners: This paper describes a novel method called MOELS for minimizing both costs and makespan when deploying a workflow into a cloud datacenter. MOELS seamlessly combines a list scheduling heuristic and an evolutionary algorithm to have complementary advantages. It is compared with two state-of-the-art algorithms MOHEFT (multiobjective heterogeneous earliest finish time) and EMS-C (evolutionary multiobjective scheduling for cloud) in the simulation experiments. The results show that the average hypervolume value from MOELS is 3.42% higher than that of MOHEFT, and 2.27% higher than that of EMS-C. The runtime that MOELS requires rises moderately as a workflow size increases.
Data-Driven Evolutionary Optimization: An Overview and Case Studies Most evolutionary optimization algorithms assume that the evaluation of the objective and constraint functions is straightforward. In solving many real-world optimization problems, however, such objective functions may not exist. Instead, computationally expensive numerical simulations or costly physical experiments must be performed for fitness evaluations. In more extreme cases, only historical data are available for performing optimization and no new data can be generated during optimization. Solving evolutionary optimization problems driven by data collected in simulations, physical experiments, production processes, or daily life are termed data-driven evolutionary optimization. In this paper, we provide a taxonomy of different data driven evolutionary optimization problems, discuss main challenges in data-driven evolutionary optimization with respect to the nature and amount of data, and the availability of new data during optimization. Real-world application examples are given to illustrate different model management strategies for different categories of data-driven optimization problems.
A multi-fidelity surrogate-model-assisted evolutionary algorithm for computationally expensive optimization problems. Integrating data-driven surrogate models and simulation models of different accuracies (or fidelities) in a single algorithm to address computationally expensive global optimization problems has recently attracted considerable attention. However, handling discrepancies between simulation models with multiple fidelities in global optimization is a major challenge. To address it, the two major contributions of this paper include: (1) development of a new multi-fidelity surrogate-model-based optimization framework, which substantially improves reliability and efficiency of optimization compared to many existing methods, and (2) development of a data mining method to address the discrepancy between the low- and high-fidelity simulation models. A new efficient global optimization method is then proposed, referred to as multi-fidelity Gaussian process and radial basis function-model-assisted memetic differential evolution. Its advantages are verified by mathematical benchmark problems and a real-world antenna design automation problem.
Surrogate-Assisted Cooperative Swarm Optimization of High-Dimensional Expensive Problems. Surrogate models have shown to be effective in assisting metaheuristic algorithms for solving computationally expensive complex optimization problems. The effectiveness of existing surrogate-assisted metaheuristic algorithms, however, has only been verified on low-dimensional optimization problems. In this paper, a surrogate-assisted cooperative swarm optimization algorithm is proposed, in which a...
New approach using ant colony optimization with ant set partition for fuzzy control design applied to the ball and beam system. In this paper we describe the design of a fuzzy logic controller for the ball and beam system using a modified Ant Colony Optimization (ACO) method for optimizing the type of membership functions, the parameters of the membership functions and the fuzzy rules. This is achieved by applying a systematic and hierarchical optimization approach modifying the conventional ACO algorithm using an ant set partition strategy. The simulation results show that the proposed algorithm achieves better results than the classical ACO algorithm for the design of the fuzzy controller.
Social navigation support in a course recommendation system The volume of course-related information available to students is rapidly increasing. This abundance of information has created the need to help students find, organize, and use resources that match their individual goals, interests, and current knowledge. Our system, CourseAgent, presented in this paper, is an adaptive community-based hypermedia system, which provides social navigation course recommendations based on students’ assessment of course relevance to their career goals. CourseAgent obtains students’ explicit feedback as part of their natural interactivity with the system. This work presents our approach to eliciting explicit student feedback and then evaluates this approach.
A Certificateless Authenticated Key Agreement Protocol for Digital Rights Management System.
Device self-calibration in location systems using signal strength histograms Received signal strength (RSS) fingerprinting is an attractive solution for indoor positioning using Wireless Local Area Network (WLAN) due to the wide availability of WLAN access points and the ease of monitoring RSS measurements on WLAN-enabled mobile devices. Fingerprinting systems rely on a radiomap collected using a reference device inside the localisation area; however, a major limitation is that the quality of the location information can be degraded if the user carries a different device. This is because diverse devices tend to report the RSS values very differently for a variety of reasons. To ensure compatibility with the existing radiomap, we propose a self-calibration method that attains a good mapping between the reference and user devices using RSS histograms. We do so by relating the RSS histogram of the reference device, which is deduced from the radiomap, and the RSS histogram of the user device, which is updated concurrently with positioning. Unlike other approaches, our calibration method does not require any user intervention, e.g. collecting calibration data using the new device prior to positioning. Experimental results with five smartphones in a real indoor environment demonstrate the effectiveness of the proposed method and indicate that it is more robust to device diversity compared with other calibration methods in the literature.
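The histogram-based mapping idea can be illustrated with plain quantile matching between the two RSS distributions; the grid, the quantile levels, and the synthetic device model below are assumptions, and the paper's actual fitting procedure is not reproduced.

```python
import numpy as np

def fit_rss_mapping(ref_rss, user_rss, grid=np.arange(-100, -20)):
    """Map a user-device RSS value to the reference-device scale by matching the
    two empirical CDFs (quantile matching) derived from their RSS histograms."""
    q = np.linspace(0.01, 0.99, 99)
    ref_q = np.quantile(ref_rss, q)
    usr_q = np.quantile(user_rss, q)
    # For each RSS value on the grid: find its quantile on the user device, then
    # look up the reference-device RSS at the same quantile.
    return {int(v): float(np.interp(np.interp(v, usr_q, q), q, ref_q)) for v in grid}

# Toy usage: the user device reports systematically weaker, compressed RSS values.
rng = np.random.default_rng(1)
ref = rng.normal(-60, 8, 5000)
usr = 0.8 * ref - 15 + rng.normal(0, 1, 5000)
mapping = fit_rss_mapping(ref, usr)
print(mapping[-63])   # maps back to roughly -60 on the reference scale
```

In the paper's setting the user-device histogram is accumulated concurrently with positioning, so a mapping of this kind could be refreshed as new RSS samples arrive.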
Substituting Motion Effects with Vibrotactile Effects for 4D Experiences. In this paper, we present two methods to substitute motion effects using vibrotactile effects in order to improve the 4D experiences of viewers. This work was motivated by the needs of more affordable 4D systems for individual users. Our sensory substitution algorithms convert motion commands to vibrotactile commands to a grid display that uses multiple actuators. While one method is based on the fundamental principle of vestibular feedback, the other method makes use of intuitive visually-based mapping from motion to vibrotactile stimulation. We carried out a user study and could confirm the effectiveness of our substitution methods in improving 4D experiences. To our knowledge, this is the first study that investigated the feasibility of replacing motion effects using much simpler and less expensive vibrotactile effects.
Learning Feature Recovery Transformer for Occluded Person Re-Identification One major issue that challenges person re-identification (Re-ID) is the ubiquitous occlusion over the captured persons. There are two main challenges for the occluded person Re-ID problem, i.e., the interference of noise during feature matching and the loss of pedestrian information brought by the occlusions. In this paper, we propose a new approach called Feature Recovery Transformer (FRT) to address the two challenges simultaneously, which mainly consists of visibility graph matching and feature recovery transformer. To reduce the interference of the noise during feature matching, we mainly focus on visible regions that appear in both images and develop a visibility graph to calculate the similarity. In terms of the second challenge, based on the developed graph similarity, for each query image, we propose a recovery transformer that exploits the feature sets of its k-nearest neighbors in the gallery to recover the complete features. Extensive experiments across different person Re-ID datasets, including occluded, partial and holistic datasets, demonstrate the effectiveness of FRT. Specifically, FRT significantly outperforms state-of-the-art results by at least 6.2% Rank-1 accuracy and 7.2% mAP scores on the challenging Occluded-Duke dataset.
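The k-nearest-neighbor recovery step can be sketched independently of the transformer details: the occluded query descriptor is completed by a similarity-weighted combination of its k nearest gallery descriptors. The cosine similarity and softmax weights below stand in for the paper's learned visibility-graph similarity and are assumptions.

```python
import numpy as np

def recover_features(query_feat, gallery_feats, k=4, temperature=0.1):
    """Complete an occluded query descriptor from its k nearest gallery descriptors."""
    q = query_feat / np.linalg.norm(query_feat)
    G = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = G @ q                                  # cosine similarity to every gallery item
    idx = np.argsort(-sims)[:k]                   # k nearest neighbors
    w = np.exp(sims[idx] / temperature); w /= w.sum()
    return w @ gallery_feats[idx]                 # similarity-weighted recovered feature

# Toy usage: "occlude" a gallery feature by zeroing random dimensions, then recover it.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(100, 256))
query = gallery[7] * (rng.random(256) > 0.3)
print(np.linalg.norm(recover_features(query, gallery) - gallery[7]))
```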
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0.033333, 0, 0, 0, 0, 0, 0
Heuristic data dissemination for mobile sink networks Mobile sinks have advantages such as solving the hotspot problem, which occurs because some sensor nodes deplete energy faster than others, and enhancing energy balance among the sensor nodes. However, sink mobility raises two challenges: frequent location updates and packet delivery delay. To tackle these two problems, this work proposes a protocol that involves two mechanisms. First, to reduce the frequent location updates of the mobile sink to all sensor nodes, we propose an accessible virtual structure (Double Ring) that acts as an intermediate structure between the sink and the sensor nodes when exchanging metadata and query packets. Second, to accelerate packet delivery, we propose a heuristic data dissemination protocol, called HDD, in which the heuristic function is based on four values: direction, transmission distance, perpendicular distance, and residual energy. The experimental results showed that our proposed protocol outperforms state-of-the-art protocols in terms of energy consumption, delivery rate, latency, and network lifetime.
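One possible reading of the four-valued heuristic is a weighted score over candidate next hops, sketched below for a planar deployment; the weights, the normalizations, and the requirement that residual energy is already scaled to [0, 1] are assumptions, not the HDD specification.

```python
import numpy as np

def heuristic_score(cur, cand, sink, energy, radio_range, weights=(0.3, 0.2, 0.2, 0.3)):
    """Score one candidate next hop from the four ingredients named above
    (2-D coordinates; `energy` is the candidate's residual energy in [0, 1])."""
    cur, cand, sink = (np.asarray(p, float) for p in (cur, cand, sink))
    to_sink, step = sink - cur, cand - cur
    # 1) direction: cosine between the hop and the direction toward the sink
    direction = step @ to_sink / (np.linalg.norm(step) * np.linalg.norm(to_sink) + 1e-9)
    # 2) transmission distance: prefer longer (but in-range) hops
    trans = min(np.linalg.norm(step), radio_range) / radio_range
    # 3) perpendicular distance of the candidate from the cur->sink line (smaller is better)
    cross = to_sink[0] * step[1] - to_sink[1] * step[0]
    perp = abs(cross) / (np.linalg.norm(to_sink) + 1e-9)
    perp_term = 1.0 - min(perp / radio_range, 1.0)
    w1, w2, w3, w4 = weights
    return w1 * direction + w2 * trans + w3 * perp_term + w4 * energy

def pick_next_hop(cur, neighbors, sink, energies, radio_range=30.0):
    """Greedy forwarding choice made by the relay currently holding the packet."""
    return max(range(len(neighbors)),
               key=lambda i: heuristic_score(cur, neighbors[i], sink, energies[i], radio_range))

# Toy usage: the second neighbor is better aligned with the sink and has more energy.
print(pick_next_hop((0, 0), [(10, 25), (22, 8)], sink=(80, 20), energies=[0.4, 0.9]))
```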
IoT Elements, Layered Architectures and Security Issues: A Comprehensive Survey. The use of the Internet is growing in this day and age, so another area has developed to use the Internet, called Internet of Things (IoT). It facilitates the machines and objects to communicate, compute and coordinate with each other. It is an enabler for the intelligence affixed to several essential features of the modern world, such as homes, hospitals, buildings, transports and cities. The security and privacy are some of the critical issues related to the wide application of IoT. Therefore, these issues prevent the wide adoption of the IoT. In this paper, we are presenting an overview about different layered architectures of IoT and attacks regarding security from the perspective of layers. In addition, a review of mechanisms that provide solutions to these issues is presented with their limitations. Furthermore, we have suggested a new secure layered architecture of IoT to overcome these issues.
A Multicharger Cooperative Energy Provision Algorithm Based On Density Clustering In The Industrial Internet Of Things Wireless sensor networks (WSNs) are an important core of the Industrial Internet of Things (IIoT). Wireless rechargeable sensor networks (WRSNs) are sensor networks that are charged by mobile chargers (MCs), and can achieve self-sufficiency. Therefore, the development of WRSNs has begun to attract widespread attention in recent years. Most of the existing energy replenishment algorithms for MCs use one or more MCs to serve the whole network in WRSNs. However, a single MC is not suitable for large-scale network environments, and multiple MCs make the network cost too high. Thus, this paper proposes a collaborative charging algorithm based on network density clustering (CCA-NDC) in WRSNs. This algorithm uses the mean-shift algorithm based on density to cluster, and then the mother wireless charger vehicle (MWCV) carries multiple sub wireless charger vehicles (SWCVs) to charge the nodes in each cluster by using a gradient descent optimization algorithm. The experimental results confirm that the proposed algorithm can effectively replenish the energy of the network and make the network more stable.
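The density-based clustering step can be illustrated with a flat-kernel mean-shift sketch in NumPy; the bandwidth, iteration count, and merge tolerance are assumptions, and the subsequent gradient-descent charging optimization is not shown.

```python
import numpy as np

def mean_shift(points, bandwidth=10.0, iters=50, merge_tol=1.0):
    """Flat-kernel mean-shift: shift each point to the mean of its neighbors within
    `bandwidth`, then merge nearby modes. Returns (cluster centers, point labels)."""
    pts = np.asarray(points, float)
    shifted = pts.copy()
    for _ in range(iters):
        for i, p in enumerate(shifted):
            neigh = pts[np.linalg.norm(pts - p, axis=1) <= bandwidth]
            shifted[i] = neigh.mean(axis=0)
    centers, labels = [], np.zeros(len(pts), dtype=int)
    for i, p in enumerate(shifted):
        for j, c in enumerate(centers):
            if np.linalg.norm(p - c) < merge_tol:
                labels[i] = j
                break
        else:
            centers.append(p)
            labels[i] = len(centers) - 1
    return np.array(centers), labels

# Toy usage: two groups of sensor nodes, each would be served as one charging cluster.
rng = np.random.default_rng(0)
nodes = np.vstack([rng.normal([20, 20], 3, (30, 2)), rng.normal([70, 60], 3, (30, 2))])
centers, labels = mean_shift(nodes, bandwidth=15.0)
print(len(centers), "clusters")
```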
Dynamic Charging Scheme Problem With Actor–Critic Reinforcement Learning The energy problem is one of the most important challenges in the application of sensor networks. With the development of wireless charging technology and intelligent mobile chargers (MCs), the energy problem can be solved by a wireless charging strategy. In the practical application of wireless rechargeable sensor networks (WRSNs), the energy consumption rate of nodes changes dynamically due to many uncertainties, such as the death of sensor nodes and their different transmission tasks. However, existing works focus on on-demand schemes, which do not fully consider real-time global charging scheduling. In this article, a novel dynamic charging scheme (DCS) in WRSNs based on the actor-critic reinforcement learning (ACRL) algorithm is proposed. In the ACRL, we introduce gated recurrent units (GRUs) to capture the relationships of charging actions in time sequence. Using the actor network with one GRU layer, we can pick an optimal or near-optimal sensor node from the candidates as the next charging target more quickly and speed up the training of the model. Meanwhile, we take the tour length and the number of dead nodes as the reward signal. The actor and critic networks are updated by the error criterion function of R and V. Compared with current on-demand charging scheduling algorithms, extensive simulations show that the proposed ACRL algorithm surpasses heuristic algorithms such as Greedy, DP, nearest-job-next with preemption, and TSCA in average lifetime and tour length, especially as the size and complexity of WRSNs increase.
Adaptive Wireless Power Transfer in Mobile Ad Hoc Networks. We investigate the interesting impact of mobility on the problem of efficient wireless power transfer in ad hoc networks. We consider a set of mobile agents (consuming energy to perform certain sensing and communication tasks), and a single static charger (with finite energy) which can recharge the agents when they get in its range. In particular, we focus on the problem of efficiently computing the appropriate range of the charger with the goal of prolonging the network lifetime. We first demonstrate (under the realistic assumption of fixed energy supplies) the limitations of any fixed charging range and, therefore, the need for (and power of) a dynamic selection of the charging range, by adapting to the behavior of the mobile agents which is revealed in an online manner. We investigate the complexity of optimizing the selection of such an adaptive charging range, by showing that two simplified offline optimization problems (closely related to the online one) are NP-hard. To effectively address the involved performance trade-offs, we finally present a variety of adaptive heuristics, assuming different levels of agent information regarding their mobility and energy.
Multi-MC Charging Schedule Algorithm With Time Windows in Wireless Rechargeable Sensor Networks The limited lifespan of traditional Wireless Sensor Networks (WSNs) has always restricted their broad application and development. Current studies have shown that wireless power transmission technology can effectively prolong the lifetime of WSNs. In most present studies on charging schedules, the sensor nodes are charged as soon as they consume any energy, which causes higher cost and lower network utility. It is assumed in this paper that the sensor nodes in Wireless Rechargeable Sensor Networks (WRSNs) will be charged only after their energy falls below a certain value. Each node has a charging time window and is charged within its respective time window. In large-scale wireless sensor networks, a single mobile charger (MC) can hardly ensure that all sensor nodes work properly. Therefore, this paper proposes using multiple MCs to replenish energy for the sensor nodes. When the average energy of all the sensor nodes falls below the upper energy threshold, each MC begins to charge the sensor nodes. The genetic algorithm has a great advantage in solving optimization problems; however, it can easily lead to an inadequate search. Therefore, the genetic algorithm is improved with a 2-opt strategy, and a multi-MC charging schedule algorithm with time windows based on the improved genetic algorithm is proposed and simulated. The simulation results show that the algorithm designed in this paper can replenish energy for each sensor node in a timely manner and minimize the total charging cost.
Evaluating the On-Demand Mobile Charging in Wireless Sensor Networks Recently, adopting mobile energy chargers to replenish the energy supply of sensor nodes in wireless sensor networks has gained increasing attention from the research community. Different from energy harvesting systems, the utilization of mobile energy chargers is able to provide more reliable energy supply than the dynamic energy harvested from the surrounding environment. While pioneering works on the mobile recharging problem mainly focus on the optimal offline path planning for the mobile chargers, in this work, we aim to lay the theoretical foundation for the on-demand mobile charging problem, where individual sensor nodes request charging from the mobile charger when their energy runs low. Specifically, in this work we analyze the On-Demand Mobile Charging (DMC) problem using a simple but efficient Nearest-Job-Next with Preemption (NJNP) discipline for the mobile charger, and provide analytical results on the system throughput and charging latency from the perspectives of the mobile charger and individual sensor nodes, respectively. To demonstrate how the actual system design can benefit from our analytical results, we present two examples on determining the essential system parameters such as the optimal remaining energy level for individual sensor nodes to send out their recharging requests and the minimal energy capacity required for the mobile charger. Through extensive simulation with real-world system settings, we verify that our analytical results match the simulation results well and the system designs based on our analysis are effective.
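The NJNP discipline analyzed here reduces to a simple selection rule, sketched below; the request data layout is an assumption made purely for illustration.

```python
import math

def njnp_next_target(charger_xy, pending_requests):
    """Nearest-Job-Next selection: among all pending charging requests, pick the
    spatially closest one. Invoked whenever a new request arrives (which may preempt
    the current target) or the current charging job finishes."""
    if not pending_requests:
        return None
    return min(pending_requests, key=lambda r: math.dist(charger_xy, r["xy"]))

# Toy usage: a newly arrived nearby request preempts the farther target chosen earlier.
pending = [{"node": "A", "xy": (40.0, 10.0)}, {"node": "B", "xy": (5.0, 4.0)}]
print(njnp_next_target((0.0, 0.0), pending)["node"])   # -> "B"
```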
Recognizing daily activities with RFID-based sensors We explore a dense sensing approach that uses RFID sensor network technology to recognize human activities. In our setting, everyday objects are instrumented with UHF RFID tags called WISPs that are equipped with accelerometers. RFID readers detect when the objects are used by examining this sensor data, and daily activities are then inferred from the traces of object use via a Hidden Markov Model. In a study of 10 participants performing 14 activities in a model apartment, our approach yielded recognition rates with precision and recall both in the 90% range. This compares well to recognition with a more intrusive short-range RFID bracelet that detects objects in the proximity of the user; this approach saw roughly 95% precision and 60% recall in the same study. We conclude that RFID sensor networks are a promising approach for indoor activity monitoring.
Digital games in the classroom? A contextual approach to teachers' adoption intention of digital games in formal education Interest in using digital games for formal education has steadily increased in the past decades. When it comes to actual use, however, the uptake of games in the classroom remains limited. Using a contextual approach, the possible influence of factors on a school (N=60) and teacher (N=409) level are analyzed. Findings indicate that there is no effect of factors on the school level whereas on a teacher level, a model is tested, explaining 68% of the variance in behavioral intention, in which curriculum-relatedness and previous experience function as crucial determinants of the adoption intention. These findings add to previous research on adoption determinants related to digital games in formal education. Furthermore, they provide insight into the relations between different adoption determinants and their association with behavioral intention.
Are we ready for autonomous driving? The KITTI vision benchmark suite Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.
Piecewise linear mapping functions for image registration A new approach to determination of mapping functions for registration of digital images is presented. Given the coordinates of corresponding control points in two images of the same scene, first the images are divided into triangular regions by triangulating the control points. Then a linear mapping function is obtained by registering each pair of corresponding triangular regions in the images. The overall mapping function is then obtained by piecing together the linear mapping functions.
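A minimal sketch of the described construction, assuming SciPy is available for the Delaunay triangulation; the toy control points are made up for illustration.

```python
import numpy as np
from scipy.spatial import Delaunay

def piecewise_linear_map(src_pts, dst_pts):
    """Triangulate the source control points and fit one affine transform per triangle.
    Returns a function mapping a source coordinate into the destination image
    (None outside the convex hull of the control points)."""
    src_pts, dst_pts = np.asarray(src_pts, float), np.asarray(dst_pts, float)
    tri = Delaunay(src_pts)
    affines = []
    for simplex in tri.simplices:
        A = np.column_stack([src_pts[simplex], np.ones(3)])   # rows [x, y, 1] per vertex
        affines.append(np.linalg.solve(A, dst_pts[simplex]))  # 3x2 affine for this triangle
    def transform(p):
        t = int(tri.find_simplex(np.asarray(p, float)[None, :])[0])
        if t < 0:
            return None                                       # outside the triangulation
        return np.append(p, 1.0) @ affines[t]
    return transform

# Toy usage: four control points related by a scale of 2 plus a translation.
src = [(0, 0), (10, 0), (0, 10), (10, 10)]
dst = [(2, 1), (22, 1), (2, 21), (22, 21)]
f = piecewise_linear_map(src, dst)
print(f((5, 5)))   # approximately [12., 11.]
```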
RECIFE-MILP: An Effective MILP-Based Heuristic for the Real-Time Railway Traffic Management Problem The real-time railway traffic management problem consists of selecting appropriate train routes and schedules for minimizing the propagation of delay in case of traffic perturbation. In this paper, we tackle this problem by introducing RECIFE-MILP, a heuristic algorithm based on a mixed-integer linear programming model. RECIFE-MILP uses a model that extends one we previously proposed by including additional elements characterizing railway reality. In addition, it implements performance boosting methods selected among several ones through an algorithm configuration tool. We present a thorough experimental analysis that shows that the performances of RECIFE-MILP are better than the ones of the currently implemented traffic management strategy. RECIFE-MILP often finds the optimal solution to instances within the short computation time available in real-time applications. Moreover, RECIFE-MILP is robust to its configuration if an appropriate selection of the combination of boosting methods is performed.
Flymap: Interacting With Maps Projected From A Drone Interactive maps have become ubiquitous in our daily lives, helping us reach destinations and discover our surroundings. Yet, designing map interactions is not straightforward and depends on the device being used. As mobile devices evolve and become independent from users, such as with robots and drones, how will we interact with the maps they provide? We propose FlyMap as a novel user experience for drone-based interactive maps. We designed and developed three interaction techniques for FlyMap's usage scenarios. In a comprehensive indoor study (N = 16), we show the strengths and weaknesses of two techniques on users' cognition, task load, and satisfaction. FlyMap was then pilot tested with the third technique outdoors in real world conditions with four groups of participants (N = 13). We show that FlyMap's interactivity is exciting to users and opens the space for more direct interactions with drones.
Design and Validation of a Cable-Driven Asymmetric Back Exosuit Lumbar spine injuries caused by repetitive lifting rank as the most prevalent workplace injury in the United States. While these injuries are caused by both symmetric and asymmetric lifting, asymmetric is often more damaging. Many back devices do not address asymmetry, so we present a new system called the Asymmetric Back Exosuit (ABX). The ABX addresses this important gap through unique design geometry and active cable-driven actuation. The suit allows the user to move in a wide range of lumbar trajectories while the “X” pattern cable routing allows variable assistance application for these trajectories. We also conducted a biomechanical analysis in OpenSim to map assistive cable force to effective lumbar torque assistance for a given trajectory, allowing for intuitive controller design in the lumbar joint space over the complex kinematic chain for varying lifting techniques. Human subject experiments illustrated that the ABX reduced lumbar erector spinae muscle activation during symmetric and asymmetric lifting by an average of 37.8% and 16.0%, respectively, compared to lifting without the exosuit. This result indicates the potential for our device to reduce lumbar injury risk.
Scores (score_0–score_13): 1.24, 0.24, 0.24, 0.24, 0.24, 0.12, 0.01386, 0.00025, 0, 0, 0, 0, 0, 0
Analysis of Network Coverage Optimization Based on Feedback K-Means Clustering and Artificial Fish Swarm Algorithm. There is a certain energy loss in the process of wireless sensor network information collection. Moreover, the current network protocols and network coverage methods are not sufficient to effectively reduce system energy consumption. In order to improve the operating efficiency and service life of wireless sensor networks, this study analyzes the classic LEACH protocol, summarizes the advantages and disadvantages, and proposes a targeted clustering method based on the K-means algorithm. At the same time, in order to maximize the network coverage and minimize the energy consumption on the basis of ensuring the quality of service, a wireless sensor network coverage optimization method based on an improved artificial fish swarm algorithm was proposed. In addition, a controlled experiment is designed to analyze the effectiveness and practical effects of the proposed algorithm. The experimental results show that the method proposed in this paper has certain advantages over traditional methods and can provide theoretical references for subsequent related research.
Artificial fish swarm algorithm: a survey of the state-of-the-art, hybridization, combinatorial and indicative applications AFSA (artificial fish swarm algorithm) is one of the best optimization methods among the swarm intelligence algorithms. The algorithm is inspired by the collective movement of fish and their various social behaviors. Based on a series of instinctive behaviors, fish always try to maintain their colonies and accordingly demonstrate intelligent behavior. Searching for food, migration and dealing with dangers all happen in a social form, and the interactions between all fish in a group result in an intelligent social behavior. The algorithm has many advantages, including high convergence speed, flexibility, fault tolerance and high accuracy. This paper is a review of AFSA that describes the evolution of the algorithm along with its improvements, its combination with various methods, as well as its applications. There are many optimization methods that have an affinity with this algorithm, and such combinations improve its performance. Its disadvantages include high time complexity, lack of balance between global and local search, and failure to benefit from the experiences of group members for the next movements.
A dynamic N threshold prolong lifetime method for wireless sensor nodes. Ubiquitous computing is a technology that makes many computers available throughout the physical environment, at any place and at any time, while remaining effectively invisible to users in everyday life. Ubiquitous computing uses sensors extensively to provide important information so that applications can adjust their behavior. A Wireless Sensor Network (WSN) has been applied to implement such an architecture. To ensure continuous service, a dynamic N threshold power saving method for WSNs is developed. A threshold N is derived to obtain minimum power consumption for the sensor node while considering each different data arrival rate. We propose a theoretical analysis of the probability variation for each state under different arrival rates, service rates and collision probabilities. Several experiments have been conducted to demonstrate the effectiveness of our research. Our method can be applied to prolong the service time of a ubiquitous computing network and cope with the network disconnection issue.
Fuzzy Mathematical Programming and Self-Adaptive Artificial Fish Swarm Algorithm for Just-in-Time Energy-Aware Flow Shop Scheduling Problem With Outsourcing Option Flow shop scheduling (FSS) problem constitutes a major part of production planning in every manufacturing organization. It aims at determining the optimal sequence of processing jobs on available machines within a given customer order. In this article, a novel biobjective mixed-integer linear programming (MILP) model is proposed for FSS with an outsourcing option and just-in-time delivery in order to simultaneously minimize the total cost of the production system and total energy consumption. Each job is considered to be either scheduled in-house or to be outsourced to one of the possible subcontractors. To efficiently solve the problem, a hybrid technique is proposed based on an interactive fuzzy solution technique and a self-adaptive artificial fish swarm algorithm (SAAFSA). The proposed model is treated as a single objective MILP using a multiobjective fuzzy mathematical programming technique based on the ε-constraint, and SAAFSA is then applied to provide Pareto optimal solutions. The obtained results demonstrate the usefulness of the suggested methodology and high efficiency of the algorithm in comparison with CPLEX solver in different problem instances. Finally, a sensitivity analysis is implemented on the main parameters to study the behavior of the objectives according to the real-world conditions.
Energy-Efficient Relay-Selection-Based Dynamic Routing Algorithm for IoT-Oriented Software-Defined WSNs In this article, a dynamic routing algorithm based on energy-efficient relay selection (RS), referred to as DRA-EERS, is proposed to adapt to the higher dynamics in time-varying software-defined wireless sensor networks (SDWSNs) for the Internet-of-Things (IoT) applications. First, the time-varying features of SDWSNs are investigated from which the state-transition probability (STP) of the node is calculated based on a Markov chain. Second, a dynamic link weight is designed for DRA-EERS by incorporating both the link reward and the link cost, where the link reward is related to the link energy efficiency (EE) and the node STP, while the link cost is affected by the locations of nodes. Moreover, one adjustable coefficient is used to balance the link reward and the link cost. Finally, the energy-efficient routing problem can be formulated as an optimization problem, and DRA-EERS is performed to find the best relay according to the energy-efficient RS criteria derived from the designed link weight. The simulation results demonstrate that the path EE obtained by DRA-EERS through an available coefficient adjustment outperforms that by Dijkstra's shortest path algorithm. Again, a tradeoff between the EE and the throughput can be achieved by adjusting the coefficient of the link weight, i.e., increasing the impact of the link reward to improve the EE, and otherwise, to improve the throughput.
Energy-Efficient Optimization for Wireless Information and Power Transfer in Large-Scale MIMO Systems Employing Energy Beamforming In this letter, we consider a large-scale multiple-input multiple-output (MIMO) system where the receiver should harvest energy from the transmitter by wireless power transfer to support its wireless information transmission. The energy beamforming in the large-scale MIMO system is utilized to address the challenging problem of long-distance wireless power transfer. Furthermore, considering the limitation of the power in such a system, this letter focuses on the maximization of the energy efficiency of information transmission (bit per Joule) while satisfying the quality-of-service (QoS) requirement, i.e. delay constraint, by jointly optimizing transfer duration and transmit power. By solving the optimization problem, we derive an energy-efficient resource allocation scheme. Numerical results validate the effectiveness of the proposed scheme.
Accurate Self-Localization in RFID Tag Information Grids Using FIR Filtering Grid navigation spaces nested with the radio-frequency identification (RFID) tags are promising for industrial and other needs, because each tag can deliver information about a local two-dimensional or three-dimensional surrounding. The approach, however, requires high accuracy in vehicle self-localization. Otherwise, errors may lead to collisions, possibly even fatal ones. We propose a new extended finite impulse response (EFIR) filtering algorithm and show that it meets this need. The EFIR filter requires an optimal averaging interval, but does not involve the noise statistics, which are often not well known to the engineer. It is more accurate than the extended Kalman filter (EKF) under real operation conditions and its iterative algorithm has the Kalman form. Better performance of the proposed EFIR filter is demonstrated based on extensive simulations in a comparison to EKF, which is widely used in RFID tag grids. We also show that errors in noise covariances may provoke divergence in EKF, whereas the EFIR filter remains stable and is thus more robust.
Evolutionary computation: comments on the history and current state Evolutionary computation has started to receive significant attention during the last decade, although the origins can be traced back to the late 1950's. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) (with links to genetic programming (GP) and classifier systems (CS)), evolution strategies (ES), and evolutionary programming (EP) by analysis and comparison of their most important constituents (i.e. representations, variation operators, reproduction, and selection mechanism). Finally, we give a brief overview on the manifold of application domains, although this necessarily must remain incomplete
Supporting social navigation on the World Wide Web This paper discusses a navigation behavior on Internet information services, in particular the World Wide Web, which is characterized by pointing out of information using various communication tools. We call this behavior social navigation as it is based on communication and interaction with other users, be that through email, or any other means of communication. Social navigation phenomena are quite common although most current tools (like Web browsers or email clients) offer very little support for it. We describe why social navigation is useful and how it can be better supported in future systems. We further describe two prototype systems that, although originally not designed explicitly as tools for social navigation, provide features that are typical for social navigation systems. One of these systems, the Juggler system, is a combination of a textual virtual environment and a Web client. The other system is a prototype of a Web hotlist organizer, called Vortex. We use both systems to describe fundamental principles of social navigation systems.
Proofs of Storage from Homomorphic Identification Protocols Proofs of storage (PoS) are interactive protocols allowing a client to verify that a server faithfully stores a file. Previous work has shown that proofs of storage can be constructed from any homomorphic linear authenticator (HLA). The latter, roughly speaking, are signature/message authentication schemes where `tags' on multiple messages can be homomorphically combined to yield a `tag' on any linear combination of these messages. We provide a framework for building public-key HLAs from any identification protocol satisfying certain homomorphic properties. We then show how to turn any public-key HLA into a publicly-verifiable PoS with communication complexity independent of the file length and supporting an unbounded number of verifications. We illustrate the use of our transformations by applying them to a variant of an identification protocol by Shoup, thus obtaining the first unbounded-use PoS based on factoring (in the random oracle model).
Design, Implementation, and Experimental Results of a Quaternion-Based Kalman Filter for Human Body Motion Tracking Real-time tracking of human body motion is an important technology in synthetic environments, robotics, and other human-computer interaction applications. This paper presents an extended Kalman filter designed for real-time estimation of the orientation of human limb segments. The filter processes data from small inertial/magnetic sensor modules containing triaxial angular rate sensors, accelerometers, and magnetometers. The filter represents rotation using quaternions rather than Euler angles or axis/angle pairs. Preprocessing of the acceleration and magnetometer measurements using the Quest algorithm produces a computed quaternion input for the filter. This preprocessing reduces the dimension of the state vector and makes the measurement equations linear. Real-time implementation and testing results of the quaternion-based Kalman filter are presented. Experimental results validate the filter design, and show the feasibility of using inertial/magnetic sensor modules for real-time human body motion tracking
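As a small, hedged illustration of the quaternion representation such a filter relies on (this is only the standard kinematic prediction step, not the paper's full Kalman filter, and the gyro rate is made up), the sketch below propagates a unit quaternion from angular-rate measurements and renormalizes it each step.

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of two quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def propagate(q, omega, dt):
    """One Euler step of q_dot = 0.5 * q (x) [0, omega], then renormalize."""
    q_dot = 0.5 * quat_mult(q, np.concatenate(([0.0], omega)))
    q_new = q + q_dot * dt
    return q_new / np.linalg.norm(q_new)

q = np.array([1.0, 0.0, 0.0, 0.0])            # identity orientation
omega = np.array([0.0, 0.0, np.deg2rad(90)])  # 90 deg/s yaw rate (made up)
for _ in range(100):                          # integrate 1 s at 100 Hz
    q = propagate(q, omega, 0.01)
print(q)  # approximately [cos(45 deg), 0, 0, sin(45 deg)]
```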
Reinforcement Q-learning for optimal tracking control of linear discrete-time systems with unknown dynamics. In this paper, a novel approach based on the Q-learning algorithm is proposed to solve the infinite-horizon linear quadratic tracker (LQT) for unknown discrete-time systems in a causal manner. It is assumed that the reference trajectory is generated by a linear command generator system. An augmented system composed of the original system and the command generator is constructed and it is shown that the value function for the LQT is quadratic in terms of the state of the augmented system. Using the quadratic structure of the value function, a Bellman equation and an augmented algebraic Riccati equation (ARE) for solving the LQT are derived. In contrast to the standard solution of the LQT, which requires the solution of an ARE and a noncausal difference equation simultaneously, in the proposed method the optimal control input is obtained by only solving an augmented ARE. A Q-learning algorithm is developed to solve online the augmented ARE without any knowledge about the system dynamics or the command generator. Convergence to the optimal solution is shown. A simulation example is used to verify the effectiveness of the proposed control scheme.
Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network. In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (including data from 63 patients with 34,281 events) and testing (including data from 19 patients with 8571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F-score of 0.99 were attained with the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
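A hedged PyTorch sketch of the general architecture shape described above (Conv1d → ReLU → MaxPool → Dropout blocks followed by a classifier). The number of blocks, channel counts, kernel size and segment length here are assumptions for illustration, not the authors' exact six-layer design.

```python
import torch
import torch.nn as nn

class OsaCnn(nn.Module):
    """Toy 1D CNN for per-segment apnea/normal classification (2 classes)."""
    def __init__(self, in_channels=1, n_classes=2):
        super().__init__()
        layers = []
        channels = [in_channels, 16, 32, 64]          # assumed channel counts
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [nn.Conv1d(c_in, c_out, kernel_size=5, padding=2),
                       nn.ReLU(),
                       nn.MaxPool1d(2),
                       nn.Dropout(0.2)]
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(nn.AdaptiveAvgPool1d(1),
                                        nn.Flatten(),
                                        nn.Linear(channels[-1], n_classes))

    def forward(self, x):                 # x: (batch, 1, samples)
        return self.classifier(self.features(x))

model = OsaCnn()
segment = torch.randn(8, 1, 6000)        # e.g. 1-minute ECG at 100 Hz (made up)
print(model(segment).shape)              # torch.Size([8, 2])
```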
Energy harvesting algorithm considering max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with low residual energy degrade the achievable data rate, i.e., the maximum flow. These nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we assume an energy harvesting WSN environment to investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy harvesting WSN environments, sensor nodes can obtain energy replenishment from MCs or collect energy from nature by themselves. In our research, we use MCs to replenish the energy of the sensor nodes by performing multiple rounds of unified scheduling, with the final purpose of increasing the max flow at the sinks. Firstly, we model this problem as a Linear Program (LP) to search for the max flow in a round of charging scheduling and prove that the problem is NP-hard. In order to solve the problem, we propose a heuristic approach: deploying MCs in units of paths, giving priority to the lowest-energy node. To reduce the energy consumption of the MCs and increase the charging efficiency, we also take the optimization of the MCs' moving distance into consideration. Finally, we extend the method to multiple rounds of scheduling, called BottleNeck. Simulation results show that BottleNeck performs well at increasing the max flow.
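Since the abstract frames charging scheduling around maximum flow, here is a minimal illustration (not the paper's LP formulation) of how the max-flow value of a small, made-up sensor topology can be computed; raising the capacity of the bottleneck node, which is what charging effectively does, raises the max flow.

```python
import networkx as nx

# Tiny invented topology: sensors forward data toward the sink; edge capacities
# stand in for the data rate each (energy-limited) node can sustain.
G = nx.DiGraph()
G.add_edge("source", "s1", capacity=4.0)
G.add_edge("source", "s2", capacity=3.0)
G.add_edge("s1", "s3", capacity=2.0)   # s3 is the low-energy bottleneck node
G.add_edge("s2", "s3", capacity=2.0)
G.add_edge("s3", "sink", capacity=3.0)
G.add_edge("s2", "sink", capacity=1.0)

flow_value, flow_dict = nx.maximum_flow(G, "source", "sink")
print(flow_value)  # 4.0 here; increasing s3's capacity (i.e. charging it) increases the max flow
```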
Scores (score_0–score_13): 1.2, 0.2, 0.2, 0.2, 0.2, 0.04, 0, 0, 0, 0, 0, 0, 0, 0
An Effective Multi-node Charging Scheme for Wireless Rechargeable Sensor Networks With the maturation of wireless charging technology, Wireless Rechargeable Sensor Networks (WRSNs) have become a promising solution for prolonging network lifetimes. Recent studies propose to employ a mobile charger (MC) to simultaneously charge multiple sensors within the same charging range, so that the charging performance can be improved. In this paper, we aim to jointly optimize the number of dead sensors and the energy usage effectiveness in such multi-node charging scenarios. We achieve this by introducing a partial charging mechanism: instead of following the conventional way in which each sensor gets fully charged in one time step, our work allows the MC to fully charge a sensor over multiple visits. We show that under the partial charging mechanism, minimizing the number of dead sensors and maximizing the energy usage effectiveness conflict with each other. We formulate this problem and develop a multi-node temporal spatial partial-charging algorithm (MTSPC) to solve it. The optimality of MTSPC is proved, and extensive simulations are carried out to demonstrate the effectiveness of MTSPC.
Self-sustainable Sensor Networks with Multi-source Energy Harvesting and Wireless Charging Energy supply remains to be a major bottleneck in Wireless Sensor Networks (WSNs). A self-sustainable network operates without battery replacement. Recent efforts employ multi-source energy harvesting to power sensors with ambient energy. Meanwhile, wireless charging is considered in WSNs as a reliable energy source. It motivates us to integrate both fields of research to build a self-sustainable network and guarantee operation under any weather condition. We propose a three-step solution to optimize this new framework. We first solve the Sensor Composition Problem (SCP) to derive the percentage of different types of sensors. Then we enable self-sustainability by bringing energy harvesting storage to the field for charging the Mobile Charger (MC). Next, we propose a 3-factor approximation algorithm to schedule sensor charging and energy replenishment of MC. Our extensive simulation results demonstrate significant improvement of network lifetime and reduction of network cost. The network lifetime can be extended at least three times compared with traditional approaches and the charging capability of MC increases at least 100%.
RFID-based techniques for human-activity detection The iBracelet and the Wireless Identification and Sensing Platform promise the ability to infer human activity directly from sensor readings.
Surviving wireless energy interference in RF-harvesting sensor networks: An empirical study Energy transfer through radio frequency (RF) waves enables battery-free operation for wireless sensor networks, while adversely impacting data communication. Thus, extending the lifetime for RF powered sensors comes at a cost of interference and reduced data throughput. This paper undertakes a systematic experimental study for both indoor and outdoor environments to quantify these tradeoffs. We demonstrate how separating the energy and data transfer frequencies gives rise to black (high loss), gray (moderate loss), and white (low loss) regions with respect to packet errors. We also measure the effect of the physical location of energy transmitters (ETs) and the impact of the spatial distribution of received interference power from the ETs, when these high power transmitters also charge the network. Our findings suggest leveraging the level of energy interference detected at the sensor node as a guiding metric to decide how best to separate the charging and communication functions in the frequency domain, as well as separating multiple ETs with slightly different center frequencies.
Approximation Algorithms for the Team Orienteering Problem In this paper we study a team orienteering problem, which is to find service paths for multiple vehicles in a network such that the profit sum of serving the nodes in the paths is maximized, subject to the cost budget of each vehicle. This problem has many potential applications in IoT and smart cities, such as dispatching energy-constrained mobile chargers to charge as many energy-critical sensors as possible to prolong the network lifetime. In this paper, we first formulate the team orienteering problem, where different vehicles are of different types, each node can be served by multiple vehicles, and the profit of serving the node is a submodular function of the number of vehicles serving it. We then propose a novel $\left(1 - (1/e)^{\frac{1}{2+\varepsilon}}\right)$ approximation algorithm for the problem, where $\varepsilon$ is a given constant with $0 < \varepsilon \le 1$ and $e$ is the base of the natural logarithm. In particular, the approximation ratio is no less than 0.32 when $\varepsilon = 0.5$. In addition, for a special team orienteering problem with the same type of vehicles and the profits of serving a node once and multiple times being the same, we devise an improved approximation algorithm. Finally, we evaluate the proposed algorithms with simulation experiments, and the results of which are very promising. Precisely, the profit sums delivered by the proposed algorithms are approximately 12.5% to 17.5% higher than those by existing algorithms.
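As a quick numerical check of the quoted bound (a worked instance of the stated formula, not new material), evaluating the approximation ratio at $\varepsilon = 0.5$ gives:

```latex
\[
1 - \left(\tfrac{1}{e}\right)^{\frac{1}{2+\varepsilon}}\Big|_{\varepsilon = 0.5}
  = 1 - e^{-1/2.5}
  = 1 - e^{-0.4}
  \approx 1 - 0.6703
  = 0.3297 \;>\; 0.32 .
\]
```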
Robust Scheduling for Wireless Charger Networks In this paper, we deal with the problem of Robust schedUling for wireLess charger nEtworks (RULE), i.e., given a number of rechargeable devices, each of which may drift within a certain range, and a number of directional chargers with fixed positions and adjustable orientations distributed on a 2D plane, determining the orientations of the wireless chargers to maximize the overall expected charging utility while taking the charging power jittering into consideration. To address the problem, we first model the charging power as a random variable, and apply area discretization technique to divide the charging area into several subareas to approximate the charging power as the same random variable in each subarea and bound the approximation error. Then, we discretize the orientations of chargers to deal with the unlimited searching space of orientations with performance bound. Finally, by proving the submodularity of the problem after the above transformations, we propose an algorithm that achieves a $\left(\frac{1}{2}-\epsilon\right)$-approximation ratio. We conduct both simulation and field experiments, and the results show that our algorithm can perform better than other comparison algorithms by 103.25% on average.
Maximizing Sensor Lifetime with the Minimal Service Cost of a Mobile Charger in Wireless Sensor Networks. Wireless energy transfer technology based on magnetic resonant coupling has emerged as a promising technology for wireless sensor networks, by providing controllable yet continual energy to sensors. In this paper, we study the use of a mobile charger to wirelessly charge sensors in a rechargeable sensor network so that the sum of sensor lifetimes is maximized while the travel distance of the mobil...
P2S: A Primary and Passer-By Scheduling Algorithm for On-Demand Charging Architecture in Wireless Rechargeable Sensor Networks. As the interdiscipline of wireless communication and control engineering, the cooperative charging issue in wireless rechargeable sensor networks (WRSNs) is a popular research problem. With the help of wireless power transfer technology, electrical energy can be transferred from wireless charging vehicles to sensors, providing a new paradigm to prolong the network lifetime. However, existing techn...
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
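A compact numpy sketch of ADMM applied to the lasso, one of the example problems named in the abstract. This follows the standard scaled-form updates (quadratic x-subproblem, soft-thresholding z-step, dual ascent); the penalty parameters and random data are arbitrary, and this is only an illustration, not the review's reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
lam, rho = 0.5, 1.0            # l1 weight and ADMM penalty (arbitrary)

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

# ADMM for  min_x 0.5*||Ax - b||^2 + lam*||x||_1
x = z = u = np.zeros(A.shape[1])
AtA, Atb = A.T @ A, A.T @ b
L = np.linalg.cholesky(AtA + rho * np.eye(A.shape[1]))  # factor once, reuse each iteration
for _ in range(200):
    rhs = Atb + rho * (z - u)
    x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))   # quadratic subproblem
    z = soft_threshold(x + u, lam / rho)                # proximal step for the l1 term
    u = u + x - z                                       # scaled dual update

print(np.round(z, 3))   # sparse estimate of x
```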
Cooperative Cleaners: A Study in Ant Robotics In the world of living creatures, simple-minded animals often cooperate to achieve common goals with amazing performance. One can consider this idea in the context of robotics, and suggest models for programming goal-oriented behavior into the members of a group of simple robots lacking global supervision. This can be done by controlling the local interactions between the robot agents, to have them jointly carry out a given mission. As a test case we analyze the problem of many simple robots cooperating to clean the dirty floor of a non-convex region in Z2, using the dirt on the floor as the main means of inter-robot communication.
A dual neural network for redundancy resolution of kinematically redundant manipulators subject to joint limits and joint velocity limits. In this paper, a recurrent neural network called the dual neural network is proposed for online redundancy resolution of kinematically redundant manipulators. Physical constraints such as joint limits and joint velocity limits, together with the drift-free criterion as a secondary task, are incorporated into the problem formulation of redundancy resolution. Compared to other recurrent neural networks, the dual neural network is piecewise linear and has much simpler architecture with only one layer of neurons. The dual neural network is shown to be globally (exponentially) convergent to optimal solutions. The dual neural network is simulated to control the PA10 robot manipulator with effectiveness demonstrated.
Let me tell you! investigating the effects of robot communication strategies in advice-giving situations based on robot appearance, interaction modality and distance Recent proposals for how robots should talk to people when they give advice suggest that the same strategies humans employ with other humans are effective for robots as well. However, the evidence is exclusively based on people's observation of robot giving advice to other humans. Hence, it is not clear whether the results still apply when people actually participate in real interactions with robots. We address this shortcoming in a novel systematic mixed-methods study where we employ both survey-based subjective and brain-based objective measures (using functional near infrared spectroscopy). The results show that previous results from observation conditions do not transfer automatically to interaction conditions, and that robot appearance and interaction distance are important modulators of human perceptions of robot behavior in advice-giving contexts.
Survey of Fog Computing: Fundamental, Network Applications, and Research Challenges. Fog computing is an emerging paradigm that extends computation, communication, and storage facilities toward the edge of a network. Compared to traditional cloud computing, fog computing can support delay-sensitive service requests from end-users (EUs) with reduced energy consumption and low traffic congestion. Basically, fog networks are viewed as offloading to core computation and storage. Fog n...
Learning Feature Recovery Transformer for Occluded Person Re-Identification One major issue that challenges person re-identification (Re-ID) is the ubiquitous occlusion over the captured persons. There are two main challenges for the occluded person Re-ID problem, i.e., the interference of noise during feature matching and the loss of pedestrian information brought by the occlusions. In this paper, we propose a new approach called Feature Recovery Transformer (FRT) to address the two challenges simultaneously, which mainly consists of visibility graph matching and feature recovery transformer. To reduce the interference of the noise during feature matching, we mainly focus on visible regions that appear in both images and develop a visibility graph to calculate the similarity. In terms of the second challenge, based on the developed graph similarity, for each query image, we propose a recovery transformer that exploits the feature sets of its k-nearest neighbors in the gallery to recover the complete features. Extensive experiments across different person Re-ID datasets, including occluded, partial and holistic datasets, demonstrate the effectiveness of FRT. Specifically, FRT significantly outperforms state-of-the-art results by at least 6.2% Rank-1 accuracy and 7.2% mAP scores on the challenging Occluded-Duke dataset.
Scores (score_0–score_13): 1.0525, 0.05, 0.05, 0.05, 0.05, 0.025, 0.015625, 0.000312, 0, 0, 0, 0, 0, 0
Evaluating Docker for Lightweight Virtualization of Distributed and Time-Sensitive Applications in Industrial Automation A trend, accompanying the change of automation systems and their architectures, is the virtualization of software components. Virtualization strengthens platform-independent development and the provision of secure and isolated applications. Virtualization introduces well-defined interfaces to strengthen modularity, which facilitates the scalability of applications. However, virtualization includes additional software components and layers and, thus, additional computing costs. This additional effort can conflict with the real-time requirements of automation processes. Current research lacks the investigation of the time behavior of container-based virtualizations concerning their use in real-time systems. An assessment concerning real-time applications is required to prepare it for use in industrial automation. This article examines the effects of virtualization on the time delays of a software component based on Docker containers by providing measurements on a hardware testbed in a realistic use case. The experiments indicate that Docker virtualization can meet soft real-time requirements and can be used in industrial automation.
Infrastructure as a Service and Cloud Technologies To choose the most appropriate cloud-computing model for your organization, you must analyze your IT infrastructure, usage, and needs. To help with this, this article describes cloud computing's current status.
Dynamic Management of Virtual Infrastructures Cloud infrastructures are becoming an appropriate solution to address the computational needs of scientific applications. However, the use of public or on-premises Infrastructure as a Service (IaaS) clouds requires users to have non-trivial system administration skills. Resource provisioning systems provide facilities to choose the most suitable Virtual Machine Images (VMI) and basic configuration of multiple instances and subnetworks. Other tasks such as the configuration of cluster services, computational frameworks or specific applications are not trivial on the cloud, and normally users have to manually select the VMI that best fits, including undesired additional services and software packages. This paper presents a set of components that ease the access and the usability of IaaS clouds by automating the VMI selection, deployment, configuration, software installation, monitoring and update of Virtual Appliances. It supports APIs from a large number of virtual platforms, making user applications cloud-agnostic. In addition it integrates a contextualization system to enable the installation and configuration of all the user required applications, providing the user with a fully functional infrastructure. Therefore, golden VMIs and configuration recipes can be easily reused across different deployments. Moreover, the contextualization agent included in the framework supports horizontal (increase/decrease the number of resources) and vertical (increase/decrease resources within a running Virtual Machine) elasticity by properly reconfiguring the software installed, considering the configuration of the multiple resources running. This paves the way for automatic virtual infrastructure deployment, customization and elastic modification at runtime for IaaS clouds.
Virtual machine placement quality estimation in cloud infrastructures using integer linear programming This paper is devoted to the quality estimation of virtual machine (VM) placement in cloud infrastructures, i.e., to choose the best hosts for a given set of VMs. We focus on test generation and monitoring techniques for comparing the placement result of a given implementation with an optimal solution with respect to given criteria. We show how Integer Linear Programming problems can be formulated and utilized for deriving test suites and optimal solutions to provide verdicts concerning the quality of VM placement implementations; the quality is calculated as the distance from an optimal placement for a given criterion (or a set of criteria). The presented approach is generic and showcased on resource utilization, energy consumption, and resource over-commitment cost. Experiments performed with different VM placement algorithms (including the VM placement algorithms implemented in widely used platforms, such as OpenStack) exhibit the competence of such algorithms with respect to different criteria.
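To illustrate the kind of Integer Linear Programming formulation the abstract refers to, here is a small, hedged PuLP sketch of a bin-packing-style VM placement that minimizes the number of active hosts under a CPU capacity constraint. The VM demands and host capacity are invented, and this is not the paper's exact model (which also covers energy and over-commitment criteria).

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

vm_cpu = {"vm1": 2, "vm2": 4, "vm3": 1, "vm4": 3}   # invented CPU demands
hosts = ["h1", "h2", "h3"]
host_cpu = 6                                        # identical host capacity (assumed)

prob = LpProblem("vm_placement", LpMinimize)
x = {(v, h): LpVariable(f"x_{v}_{h}", cat=LpBinary) for v in vm_cpu for h in hosts}
y = {h: LpVariable(f"y_{h}", cat=LpBinary) for h in hosts}   # is host switched on?

prob += lpSum(y[h] for h in hosts)                  # objective: number of active hosts
for v in vm_cpu:                                    # every VM placed exactly once
    prob += lpSum(x[v, h] for h in hosts) == 1
for h in hosts:                                     # capacity of each used host
    prob += lpSum(vm_cpu[v] * x[v, h] for v in vm_cpu) <= host_cpu * y[h]

prob.solve()
placement = {v: h for (v, h), var in x.items() if var.value() == 1}
print(placement)   # two hosts suffice for a total demand of 10 CPUs
```

Comparing an implementation's placement against the optimum of such a model is, in spirit, how the quality "distance" described above can be measured.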
Integrating security and privacy in software development As a consequence of factors such as the progress made by attackers, the release of new technologies and the use of increasingly complex systems, threats to application security have been continuously evolving. Security of code and privacy of data must be implemented in both design and programming practice to face such scenarios. In such a context, this paper proposes a software development approach, Privacy Oriented Software Development (POSD), that complements traditional development processes by integrating the activities needed for addressing security and privacy management in software systems. The approach is based on 5 key elements (Privacy by Design, Privacy Design Strategies, Privacy Pattern, Vulnerabilities, Context). The approach can be applied in two directions, forward and backward, for developing new software systems or re-engineering an existing one. This paper presents the POSD approach in the backward mode together with an application in the context of an industrial project. Results show that POSD is able to discover software vulnerabilities, identify the remediation patterns needed for addressing them in the source code, and design the target architecture to be used for guiding privacy-oriented system re-engineering.
Image quality assessment: from error visibility to structural similarity. Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
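A single-window numpy sketch of the structural similarity index described above, using global image statistics rather than the sliding local windows used in practice; the constants follow the common C1 = (0.01 L)^2, C2 = (0.03 L)^2 convention and the test images are synthetic. It is meant as a readable approximation of the idea, not a drop-in replacement for a full SSIM implementation.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Structural similarity from global statistics (simplified, single window)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(ref + rng.normal(0, 10, size=ref.shape), 0, 255)
print(ssim_global(ref, ref))    # 1.0 for identical images
print(ssim_global(ref, noisy))  # < 1.0, dropping as distortion grows
```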
Vision meets robotics: The KITTI dataset We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS/IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.
A tutorial on support vector regression In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective.
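A short scikit-learn usage sketch of epsilon-insensitive SVR with an RBF kernel on synthetic 1-D data; the hyperparameters and data are illustrative only, not values recommended by the tutorial.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, size=(80, 1)), axis=0)
y = np.sin(X).ravel() + rng.normal(0, 0.1, size=80)   # noisy 1-D target

# epsilon sets the width of the insensitive tube, C the penalty on violations.
model = SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma="scale")
model.fit(X, y)

X_test = np.linspace(0, 5, 5).reshape(-1, 1)
print(np.round(model.predict(X_test), 2))   # should roughly follow sin(x)
print(len(model.support_))                  # points outside the tube become support vectors
```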
GameFlow: a model for evaluating player enjoyment in games Although player enjoyment is central to computer games, there is currently no accepted model of player enjoyment in games. There are many heuristics in the literature, based on elements such as the game interface, mechanics, gameplay, and narrative. However, there is a need to integrate these heuristics into a validated model that can be used to design, evaluate, and understand enjoyment in games. We have drawn together the various heuristics into a concise model of enjoyment in games that is structured by flow. Flow, a widely accepted model of enjoyment, includes eight elements that, we found, encompass the various heuristics from the literature. Our new model, GameFlow, consists of eight elements -- concentration, challenge, skills, control, clear goals, feedback, immersion, and social interaction. Each element includes a set of criteria for achieving enjoyment in games. An initial investigation and validation of the GameFlow model was carried out by conducting expert reviews of two real-time strategy games, one high-rating and one low-rating, using the GameFlow criteria. The result was a deeper understanding of enjoyment in real-time strategy games and the identification of the strengths and weaknesses of the GameFlow model as an evaluation tool. The GameFlow criteria were able to successfully distinguish between the high-rated and low-rated games and identify why one succeeded and the other failed. We concluded that the GameFlow model can be used in its current form to review games; further work will provide tools for designing and evaluating enjoyment in games.
Adapting visual category models to new domains Domain adaptation is an important emerging topic in computer vision. In this paper, we present one of the first studies of domain shift in the context of object recognition. We introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. While we focus our evaluation on object recognition tasks, the transform-based adaptation technique we develop is general and could be applied to nonimage data. Another contribution is a new multi-domain object database, freely available for download. We experimentally demonstrate the ability of our method to improve recognition on categories with few or no target domain labels and moderate to large changes in the imaging conditions.
A Web-Based Tool For Control Engineering Teaching In this article a new tool for control engineering teaching is presented. The tool was implemented using Java applets and is freely accessible through the Web. It allows the analysis and simulation of linear control systems and was created to complement the theoretical lectures in basic control engineering courses. The article is not only centered on the description of the tool but also on the methodology to use it and its evaluation in an electrical engineering degree. Two practical problems are included in the manuscript to illustrate the use of the main functions implemented. The developed web-based tool can be accessed through the link http://www.controlweb.cyc.ull.es.
Adaptive Consensus Control for a Class of Nonlinear Multiagent Time-Delay Systems Using Neural Networks Because of the complexity of consensus control of nonlinear multiagent systems with state time-delay, most previous works focused only on linear systems with input time-delay. An adaptive neural network (NN) consensus control method for a class of nonlinear multiagent systems with state time-delay is proposed in this paper. The approximation property of radial basis function neural networks (RBFNNs) is used to neutralize the uncertain nonlinear dynamics in agents. An appropriate Lyapunov-Krasovskii functional, which is obtained from the derivative of an appropriate Lyapunov function, is used to compensate for the uncertainties of unknown time delays. It is proved that our proposed approach guarantees convergence on the basis of Lyapunov stability theory. The simulation results of a nonlinear multiagent time-delay system and a multiple collaborative manipulators system show the effectiveness of the proposed consensus control algorithm.
5G Virtualized Multi-access Edge Computing Platform for IoT Applications. The next-generation fifth generation (5G) network, implemented using Virtualized Multi-access Edge Computing (vMEC), Network Function Virtualization (NFV) and Software Defined Networking (SDN) technologies, is a flexible and resilient network that supports various Internet of Things (IoT) devices. While NFV provides flexibility by allowing network functions to be dynamically deployed and inter-connected, vMEC provides intelligence at the edge of the mobile network, reducing latency and increasing the available capacity. With the diverse development of networking applications, the proposed use of Container-based Virtualization Technology (CVT) in vMEC as a gateway to IoT devices, with flow control mechanisms for scheduling and analysis, will effectively increase application Quality of Service (QoS). In this work, the proposed IoT gateway is analyzed. The combined effect of simultaneously deploying Virtual Network Functions (VNFs) and vMEC applications on a single network infrastructure exhibits low latency, high bandwidth and the agility needed to connect a large scale of devices. The proposed platform efficiently exploits resources from edge computing and cloud computing, and takes IoT applications that adapt to network conditions, reducing end-to-end network latency by an average of 30%.
Energy harvesting algorithm considering max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with low residual energy degrade the achievable data rate, i.e., the maximum flow. These nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we assume an energy harvesting WSN environment to investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy harvesting WSN environments, sensor nodes can obtain energy replenishment from MCs or collect energy from nature by themselves. In our research, we use MCs to replenish the energy of the sensor nodes by performing multiple rounds of unified scheduling, with the final purpose of increasing the max flow at the sinks. Firstly, we model this problem as a Linear Program (LP) to search for the max flow in a round of charging scheduling and prove that the problem is NP-hard. In order to solve the problem, we propose a heuristic approach: deploying MCs in units of paths, giving priority to the lowest-energy node. To reduce the energy consumption of the MCs and increase the charging efficiency, we also take the optimization of the MCs' moving distance into consideration. Finally, we extend the method to multiple rounds of scheduling, called BottleNeck. Simulation results show that BottleNeck performs well at increasing the max flow.
Scores (score_0–score_13): 1.2, 0.2, 0.2, 0.2, 0.2, 0, 0, 0, 0, 0, 0, 0, 0, 0
Learning deep features for source color laser printer identification based on cascaded learning. Color laser printers have fast printing speed and high resolution, and forgeries produced with color laser printers can cause significant harm to society. A source printer identification technique can be employed as a countermeasure to those forgeries. This paper presents a color laser printer identification method based on cascaded learning of deep neural networks. First, the refiner network is trained by adversarial training to refine the synthetic dataset for halftone color decomposition. Then, the halftone color decomposing ConvNet is trained with the refined dataset. The trained knowledge about halftone color decomposition is transferred to the printer identifying ConvNet to enhance the identification accuracy. Training of the printer identifying ConvNet is carried out with real halftone images printed from candidate source printers. Robustness to rotation and scaling is considered in the training process, which is not the case in existing methods. Experiments are performed on eight color laser printers, and the performance is compared with several existing methods. The experimental results clearly show that the proposed method outperforms existing source color laser printer identification methods.
The redundant discrete wavelet transform and additive noise The behavior under additive noise of the redundant discrete wavelet transform (RDWT), which is a frame expansion that is essentially an undecimated discrete wavelet transform, is studied. Known prior results in the form of inequalities bound distortion energy in the original signal domain from additive noise in frame-expansion coefficients. In this letter, a precise relationship between RDWT-domai...
Deep learning for source camera identification on mobile devices. •The design of an efficient CNN architecture for the SCI problem on mobile devices.•The evaluation of different CNN configurations.•The usage of a unique dataset (MICHE-I) of images taken from several mobile devices.•98.1% accuracy on model detection.•91.1% accuracy on sensor detection.
Smooth filtering identification based on convolutional neural networks The increasing prevalence of digital technology brings great convenience to human life, while also presenting problems and challenges. Relying on easy-to-use image editing tools, malicious manipulations such as image forgery have already threatened the authenticity of information, especially electronic evidence in crimes. As a result, digital forensics attracts more and more attention from researchers. Since some general post-operations, like the widely used smooth filtering, can affect the reliability of forensic methods in various ways, it is also significant to detect them. Furthermore, the determination of detailed filtering parameters assists in recovering the tampering history of an image. To deal with this problem, we propose a new approach based on convolutional neural networks (CNNs). Through adding a transform layer, the obtained distinguishable frequency-domain features are put into a conventional CNN model, to identify the template parameters of various types of spatial smooth filtering operations, such as average, Gaussian and median filtering. Experimental results on a composite database show that putting the images directly into the conventional CNN model without transformation does not work well, and our method achieves better performance than other applicable related methods, especially in the scenarios of small size and JPEG compression.
Real-time detecting one specific tampering operation in multiple operator chains Currently, many forensic techniques have been developed to determine the processing history of given multimedia contents. However, because of the interaction among tampering operations, there are still fundamental limits on the determination of tampering order and type. Up to now, only a few works have considered cases where multiple operation types are involved. In these cases, we not only need to consider the interplay of operation order, but also should quantify the detectability of one specific operation. In this paper, we propose an efficient information theoretical framework to solve this problem. Specially, we analyze the operation detection problem from the perspective of set partitioning and detection theory. Then, under certain detectors, we present the information framework to contrast the detected hypotheses and true hypotheses. Some constraint criterions are designed to improve the detection performance of an operation. In addition, Maximum-Likelihood Estimation (MLE) is used to obtain the best detector. Finally, a multiple chain set is examined in this paper, where three efficient detection methods have been proposed and the effectiveness of our framework has been demonstrated by simulations.
Distinctive Image Features from Scale-Invariant Keypoints This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
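A minimal OpenCV sketch of the usual SIFT matching pipeline the abstract describes (keypoint detection, descriptor extraction, brute-force matching with Lowe's ratio test). The image paths are placeholders, cv2.SIFT_create requires a recent OpenCV build, and the geometric verification step (Hough clustering plus least-squares pose fit) is omitted here.

```python
import cv2

img1 = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching of the 128-D descriptors, then Lowe's ratio test to keep
# only matches whose best distance is clearly smaller than the second best.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
print(len(good), "putative correspondences")
```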
Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
A new approach for dynamic fuzzy logic parameter tuning in Ant Colony Optimization and its application in fuzzy control of a mobile robot The central idea is to avoid or slow down full convergence through the dynamic variation of parameters. The performance of different ACO variants was observed to choose one as the basis for the proposed approach. A convergence fuzzy controller with the objective of maintaining diversity to avoid premature convergence was created. Ant Colony Optimization is a population-based meta-heuristic that exploits a form of past performance memory that is inspired by the foraging behavior of real ants. The behavior of the Ant Colony Optimization algorithm is highly dependent on the values defined for its parameters. Adaptation and parameter control are recurring themes in the field of bio-inspired optimization algorithms. The present paper explores a new fuzzy approach for diversity control in Ant Colony Optimization. The main idea is to avoid or slow down full convergence through the dynamic variation of a particular parameter. The performance of different variants of the Ant Colony Optimization algorithm is analyzed to choose one as the basis to the proposed approach. A convergence fuzzy logic controller with the objective of maintaining diversity at some level to avoid premature convergence is created. Encouraging results on several traveling salesman problem instances and its application to the design of fuzzy controllers, in particular the optimization of membership functions for a unicycle mobile robot trajectory control, are presented with the proposed method.
Online Palmprint Identification Biometrics-based personal identification is regarded as an effective method for automatically recognizing, with a high confidence, a person's identity. This paper presents a new biometric approach to online personal identification using palmprint technology. In contrast to the existing methods, our online palmprint identification system employs low-resolution palmprint images to achieve effective personal identification. The system consists of two parts: a novel device for online palmprint image acquisition and an efficient algorithm for fast palmprint recognition. A robust image coordinate system is defined to facilitate image alignment for feature extraction. In addition, a 2D Gabor phase encoding scheme is proposed for palmprint feature extraction and representation. The experimental results demonstrate the feasibility of the proposed system.
Theory of Mind for a Humanoid Robot If we are to build human-like robots that can interact naturally with people, our robots must know not only about the properties of objects but also the properties of animate agents in the world. One of the fundamental social skills for humans is the attribution of beliefs, goals, and desires to other people. This set of skills has often been called a “theory of mind.” This paper presents the theories of Leslie (1994) and Baron-Cohen (1995) on the development of theory of mind in human children and discusses the potential application of both of these theories to building robots with similar capabilities. Initial implementation details and basic skills (such as finding faces and eyes and distinguishing animate from inanimate stimuli) are introduced. I further speculate on the usefulness of a robotic implementation in evaluating and comparing these two models.
Gravity-Balancing Leg Orthosis and Its Performance Evaluation In this paper, we propose a device to assist persons with hemiparesis to walk by reducing or eliminating the effects of gravity. The design of the device includes the following features: 1) it is passive, i.e., it does not include motors or actuators, but is only composed of links and springs; 2) it is safe and has a simple patient-machine interface to accommodate variability in geometry and inertia of the subjects. A number of methods have been proposed in the literature to gravity-balance a machine. Here, we use a hybrid method to achieve gravity balancing of a human leg over its range of motion. In the hybrid method, a mechanism is used to first locate the center of mass of the human limb and the orthosis. Springs are then added so that the system is gravity-balanced in every configuration. For a quantitative evaluation of the performance of the device, electromyographic (EMG) data of the key muscles, involved in the motion of the leg, were collected and analyzed. Further experiments involving leg-raising and walking tasks were performed, where data from encoders and force-torque sensors were used to compute joint torques. These experiments were performed on five healthy subjects and a stroke patient. The results showed that the EMG activity from the rectus femoris and hamstring muscles with the device was reduced by 75% during static hip and knee flexion, respectively. For leg-raising tasks, the average torque for static positioning was reduced by 66.8% at the hip joint and 47.3% at the knee joint; however, if we include the transient portion of the leg-raising task, the average torque at the hip was reduced by 61.3%, while at the knee it was increased by 2.7%. In the walking experiment, there was a positive impact on the range of movement at the hip and knee joints, especially for the stroke patient: the range of movement increased by 45% at the hip joint and by 85% at the knee joint. We believe that this orthosis can potentially be used to design rehabilitation protocols for patients with stroke.
Biologically-inspired soft exosuit. In this paper, we present the design and evaluation of a novel soft cable-driven exosuit that can apply forces to the body to assist walking. Unlike traditional exoskeletons which contain rigid framing elements, the soft exosuit is worn like clothing, yet can generate moments at the ankle and hip with magnitudes of 18% and 30% of those naturally generated by the body during walking, respectively. Our design uses geared motors to pull on Bowden cables connected to the suit near the ankle. The suit has the advantages over a traditional exoskeleton in that the wearer's joints are unconstrained by external rigid structures, and the worn part of the suit is extremely light, which minimizes the suit's unintentional interference with the body's natural biomechanics. However, a soft suit presents challenges related to actuation force transfer and control, since the body is compliant and cannot support large pressures comfortably. We discuss the design of the suit and actuation system, including principles by which soft suits can transfer force to the body effectively and the biological inspiration for the design. For a soft exosuit, an important design parameter is the combined effective stiffness of the suit and its interface to the wearer. We characterize the exosuit's effective stiffness, and present preliminary results from it generating assistive torques to a subject during walking. We envision such an exosuit having broad applicability for assisting healthy individuals as well as those with muscle weakness.
Inter-class sparsity based discriminative least square regression Least square regression is a very popular supervised classification method. However, two main issues greatly limit its performance. The first one is that it only focuses on fitting the input features to the corresponding output labels while ignoring the correlations among samples. The second one is that the used label matrix, i.e., zero–one label matrix is inappropriate for classification. To solve these problems and improve the performance, this paper presents a novel method, i.e., inter-class sparsity based discriminative least square regression (ICS_DLSR), for multi-class classification. Different from other methods, the proposed method pursues that the transformed samples have a common sparsity structure in each class. For this goal, an inter-class sparsity constraint is introduced to the least square regression model such that the margins of samples from the same class can be greatly reduced while those of samples from different classes can be enlarged. In addition, an error term with row-sparsity constraint is introduced to relax the strict zero–one label matrix, which allows the method to be more flexible in learning the discriminative transformation matrix. These factors encourage the method to learn a more compact and discriminative transformation for regression and thus has the potential to perform better than other methods. Extensive experimental results show that the proposed method achieves the best performance in comparison with other methods for multi-class classification.
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education is highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets, to compensate for robot’s limitations, however, this study applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. Robot’s behavioral style (social or neutral) hardly differed overall, however, seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
1.2, 0.2, 0.2, 0.2, 0.066667, 0, 0, 0, 0, 0, 0, 0, 0, 0
Development of a sEMG-based torque estimation control strategy for a soft elbow exoskeleton. Motor dysfunction has become a serious threat to the health of older people and patients with neuromuscular impairment. The application of exoskeletons to motion assistance has received increasing attention due to its promising prospects. The major contribution of this paper is to develop a joint torque estimation control strategy for a soft elbow exoskeleton to provide effective power assistance. The surface electromyography (sEMG) signal from the biceps is utilized to estimate the motion intention of the wearer and map it into the real-time elbow joint torque. Moreover, a control strategy fusing the estimated joint torque, the estimated joint angle from an inertial measurement unit, and the encoder feedback signal is proposed to improve motion assistance performance. Finally, further experimental investigations are carried out to compare the control effectiveness of the proposed intention-based control strategy to that of a proportional control strategy. The experimental results indicate that the proposed control strategy provides better performance in elbow assistance with different loads, and the average efficiency of assistance with a heavy load is about 42.66%.
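The abstract above does not spell out how the biceps sEMG signal is mapped to elbow torque, so the following is only a minimal sketch of a common pipeline (rectification, moving-average envelope, then a linear calibration). The gain, offset, and window length are hypothetical constants, not values from the paper.

```python
import numpy as np

def emg_to_torque(emg, fs=1000.0, window_s=0.2, gain=1.5, offset=0.0):
    """Toy sEMG-to-torque mapping: rectify, smooth, then apply a linear map.

    emg: raw biceps sEMG samples (1-D array); fs: sampling rate in Hz.
    `gain` and `offset` are hypothetical calibration constants; the paper's
    actual mapping (and any angle-dependent terms) is not reproduced here.
    """
    rectified = np.abs(emg - np.mean(emg))                   # remove DC offset, rectify
    n = max(1, int(window_s * fs))
    kernel = np.ones(n) / n
    envelope = np.convolve(rectified, kernel, mode="same")   # moving-average envelope
    return gain * envelope + offset                          # estimated joint torque
```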
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that cannot be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
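To make the idea concrete, here is a minimal single-reference sketch of a BLEU-style score: clipped (modified) n-gram precisions combined geometrically with a brevity penalty. The real metric is computed over a corpus with up to 4-grams and multiple references; the small smoothing constant here only keeps the toy example defined.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(candidate, reference, max_n=2):
    """Minimal BLEU: clipped n-gram precision (n = 1..max_n) with brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(1, sum(cand_counts.values()))
        precisions.append(max(overlap, 1e-9) / total)   # smooth to avoid log(0)
    bp = 1.0 if len(candidate) > len(reference) else math.exp(
        1 - len(reference) / max(1, len(candidate)))     # brevity penalty
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(sentence_bleu("the cat sat on the mat".split(),
                    "the cat is on the mat".split()))     # about 0.707
```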
Computational thinking My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
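The original work predates modern deep-learning toolkits, but the same structure is easy to sketch today. The following assumes PyTorch and uses illustrative TIMIT-like sizes (26 input features, 61 phoneme classes); it concatenates the forward and backward hidden states at every frame before a per-frame classifier.

```python
import torch
import torch.nn as nn

class BRNNClassifier(nn.Module):
    """Minimal bidirectional RNN: forward and backward passes over the sequence
    are concatenated at each time step before a per-frame classification layer."""
    def __init__(self, input_size=26, hidden_size=64, num_classes=61):
        super().__init__()
        self.rnn = nn.RNN(input_size, hidden_size, batch_first=True,
                          bidirectional=True)               # both time directions
        self.out = nn.Linear(2 * hidden_size, num_classes)  # concat of both directions

    def forward(self, x):                                   # x: (batch, time, features)
        h, _ = self.rnn(x)                                  # h: (batch, time, 2*hidden)
        return self.out(h)                                  # per-frame class scores

scores = BRNNClassifier()(torch.randn(8, 100, 26))          # e.g. 8 utterances, 100 frames
print(scores.shape)                                         # torch.Size([8, 100, 61])
```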
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidence intended for Bob, and non-repudiation of receipt evidence destined for Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with a trusted third party (TTP). For the latter, we discuss the evolution of the TTP's involvement and, among others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidence.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb for designing the GA operators and selecting GA parameters; instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probabilistically. Because no crossover rate or mutation rate needs to be selected, the proposed improved GA can be more easily applied to a problem than conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional GA and other methods.
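The abstract does not state which conditions replace the crossover and mutation probabilities, so the sketch below only illustrates the general idea with hypothetical rules: crossover is skipped for near-identical parents, and mutation fires only when a child duplicates its parent. These conditions are assumptions for illustration, not the paper's actual operators.

```python
import random

def hamming_similarity(a, b):
    """Fraction of positions at which two equal-length bit lists agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def conditional_crossover(p1, p2, similarity_limit=0.9):
    """One-point crossover applied only when the parents are not too similar
    (a hypothetical condition replacing a fixed crossover rate)."""
    if hamming_similarity(p1, p2) < similarity_limit:
        cut = random.randint(1, len(p1) - 1)
        return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
    return p1[:], p2[:]                       # skip crossover for near-clones

def conditional_mutation(child, parent):
    """Flip one bit only if the child is identical to its parent, instead of
    mutating with a fixed mutation rate (again a hypothetical condition)."""
    if child == parent:
        i = random.randrange(len(child))
        child = child[:i] + [1 - child[i]] + child[i + 1:]
    return child
```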
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
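For reference, the ordinary kriging estimator that such an approach builds on predicts the value at an unmeasured location $s_0$ as a weighted combination of nearby measurements; the paper's modification additionally accounts for temporal variability, which is not reproduced here.

$$\hat{z}(s_0) = \sum_{i=1}^{n} \lambda_i\, z(s_i), \qquad \sum_{i=1}^{n} \lambda_i = 1,$$

where the weights $\lambda_i$ solve the kriging system $\sum_{j} \lambda_j\, \gamma(s_i, s_j) + \mu = \gamma(s_i, s_0)$ for $i = 1, \dots, n$, with $\gamma$ the (semi)variogram and $\mu$ a Lagrange multiplier.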
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
SPAN: Spatial Pyramid Attention Network for Image Manipulation Localization We present a novel framework, Spatial Pyramid Attention Network (SPAN), for detection and localization of multiple types of image manipulations. The proposed architecture efficiently and effectively models the relationship between image patches at multiple scales by constructing a pyramid of local self-attention blocks. The design includes a novel position projection to encode the spatial positions of the patches. SPAN is trained on a generic, synthetic dataset but can also be fine-tuned for specific datasets. The proposed method shows significant gains in performance on standard datasets over previous state-of-the-art methods.
A deep learning approach to patch-based image inpainting forensics. Although image inpainting is now an effective image editing technique, limited work has been done on inpainting forensics. The main drawbacks of conventional inpainting forensics methods lie in the difficulty of inpainting feature extraction and the very high computational cost. In this paper, we propose a novel approach based on a convolutional neural network (CNN) to detect patch-based inpainting operations. Specifically, the CNN is built following the encoder–decoder network structure, which allows us to predict the inpainting probability for each pixel in an image. To guide the CNN to automatically learn the inpainting features, a label matrix is generated for the CNN training by assigning a class label to each pixel of an image, and the designed weighted cross-entropy serves as the loss function. These choices further help to strongly supervise the CNN to capture the manipulation information rather than the image content features. With the established CNN, inpainting forensics no longer needs separate feature extraction, classifier design, or post-processing as in conventional forensics methods; these stages are combined into a single framework and optimized simultaneously. Experimental results show that the proposed method achieves superior performance in terms of true positive rate, false positive rate, and running time, as compared with state-of-the-art methods for inpainting forensics, and is very robust against JPEG compression and scaling manipulations.
Deep Matching and Validation Network - An End-to-End Solution to Constrained Image Splicing Localization and Detection. Image splicing is a very common image manipulation technique that is sometimes used for malicious purposes. A splicing detection and localization algorithm usually takes an input image and produces a binary decision indicating whether the input image has been manipulated, and also a segmentation mask that corresponds to the spliced region. Most existing splicing detection and localization pipelines suffer from two main shortcomings: 1) they use handcrafted features that are not robust against subsequent processing (e.g., compression), and 2) each stage of the pipeline is usually optimized independently. In this paper we extend the formulation of the underlying splicing problem to consider two input images, a query image and a potential donor image. Here the task is to estimate the probability that the donor image has been used to splice the query image, and obtain the splicing masks for both the query and donor images. We introduce a novel deep convolutional neural network architecture, called Deep Matching and Validation Network (DMVN), which simultaneously localizes and detects image splicing. The proposed approach does not depend on handcrafted features and uses raw input images to create deep learned representations. Furthermore, the DMVN is end-to-end optimized to produce the probability estimates and the segmentation masks. Our extensive experiments demonstrate that this approach outperforms state-of-the-art splicing detection methods by a large margin in terms of both AUC score and speed.
View-Based Discriminative Probabilistic Modeling for 3D Object Retrieval and Recognition In view-based 3D object retrieval and recognition, each object is described by multiple views. A central problem is how to estimate the distance between two objects. Most conventional methods integrate the distances of view pairs across two objects as an estimation of their distance. In this paper, we propose a discriminative probabilistic object modeling approach. It builds probabilistic models for each object based on the distribution of its views, and the distance between two objects is defined as the upper bound of the Kullback–Leibler divergence of the corresponding probabilistic models. 3D object retrieval and recognition is accomplished based on the distance measures. We first learn models for each object by the adaptation from a set of global models with a maximum likelihood principle. A further adaption step is then performed to enhance the discriminative ability of the models. We conduct experiments on the ETH 3D object dataset, the National Taiwan University 3D model dataset, and the Princeton Shape Benchmark. We compare our approach with different methods, and experimental results demonstrate the superiority of our approach.
DADNet: Dilated-Attention-Deformable ConvNet for Crowd Counting Most existing CNN-based methods for crowd counting always suffer from large scale variation in objects of interest, leading to density maps of low quality. In this paper, we propose a novel deep model called Dilated-Attention-Deformable ConvNet (DADNet), which consists of two schemes: multi-scale dilated attention and deformable convolutional DME (Density Map Estimation). The proposed model explores a scale-aware attention fusion with various dilation rates to capture different visual granularities of crowd regions of interest, and utilizes deformable convolutions to generate a high-quality density map. There are two merits as follows: (1) varying dilation rates can effectively identify discriminative regions by enlarging the receptive fields of convolutional kernels upon surrounding region cues, and (2) deformable CNN operations promote the accuracy of object localization in the density map by augmenting the spatial object location sampling with adaptive offsets and scalars. DADNet not only excels at capturing rich spatial context of salient and tiny regions of interest simultaneously, but also keeps a robustness to background noises, such as partially occluded objects. Extensive experiments on benchmark datasets verify that DADNet achieves the state-of-the-art performance. Visualization results of the multi-scale attention maps further validate the remarkable interpretability achieved by our solution.
MSTA-Net: Forgery Detection by Generating Manipulation Trace Based on Multi-Scale Self-Texture Attention Many Deepfake videos are circulating on the Internet, which not only damages the personal rights of the forged individuals but also pollutes the web environment. Worse, such videos may sway public opinion and endanger national security, so fighting deep forgery is urgent. Most current forgery detection algorithms use convolutional neural networks to learn the feature differences between forged and real frames from big data. In this paper, from the perspective of image generation, we simulate the forgery process and explore the possible traces it leaves behind. We propose a multi-scale self-texture attention generative network (MSTA-Net) to track the potential texture trace in the image generation process and eliminate the interference of deep-forgery post-processing. First, a generator with an encoder-decoder structure disassembles images and generates the trace; the generated trace image is then merged with the original image and fed into a classifier with ResNet as the backbone. Second, the self-texture attention (STA) mechanism is proposed as the skip connection between the encoder and the decoder, which significantly enhances the texture characteristics in the image disassembly process and assists the generation of the texture trace. Finally, we propose a loss function, called Prob-tuple loss, restricted by the classification probability to directly refine the generation of the forgery trace. To verify the performance of MSTA-Net, we design different experiments to assess the feasibility and advancement of the method. Experimental results show that the proposed method performs well on deepfake databases represented by FaceForensics++, Celeb-DF, Deeperforensics, and DFDC, with some results reaching the state of the art.
RRU-Net: The Ringed Residual U-Net for Image Splicing Forgery Detection Detecting a spliced forgery image and then locating the forged regions is a challenging task. Some traditional feature extraction methods and convolutional neural network (CNN)-based detection methods have been proposed to accomplish this task by exploring the differences in image attributes between the untampered and tampered regions in an image. However, the performance of the existing detection methods is unsatisfactory. In this paper, we propose a ringed residual U-Net (RRU-Net) for image splicing forgery detection. The proposed RRU-Net is an end-to-end image essence attribute segmentation network that is independent of the human visual system and can accomplish forgery detection without any pre-processing or post-processing. The core idea of RRU-Net is to strengthen the way the CNN learns, inspired by the recall and consolidation mechanisms of the human brain and implemented through the propagation and feedback of residuals in the CNN. Residual propagation recalls the input feature information to solve the gradient degradation problem in deeper networks; residual feedback consolidates the input feature information to make the differences in image attributes between untampered and tampered regions more obvious. Experimental results show that the proposed detection method achieves promising results compared with state-of-the-art splicing forgery detection methods.
A survey on ear biometrics Recognizing people by their ear has recently received significant attention in the literature. Several reasons account for this trend: first, ear recognition does not suffer from some problems associated with other non-contact biometrics, such as face recognition; second, it is the most promising candidate for combination with the face in the context of multi-pose face recognition; and third, the ear can be used for human recognition in surveillance videos where the face may be occluded completely or in part. Further, the ear appears to degrade little with age. Even though current ear detection and recognition systems have reached a certain level of maturity, their success is limited to controlled indoor conditions. In addition to variation in illumination, other open research problems include hair occlusion, earprint forensics, ear symmetry, ear classification, and ear individuality. This article provides a detailed survey of research conducted in ear detection and recognition. It provides an up-to-date review of the existing literature revealing the current state-of-art for not only those who are working in this area but also for those who might exploit this new approach. Furthermore, it offers insights into some unsolved ear recognition problems as well as ear databases available for researchers.
A Private and Efficient Mechanism for Data Uploading in Smart Cyber-Physical Systems. To provide fine-grained access to different dimensions of the physical world, the data uploading in smart cyber-physical systems suffers novel challenges on both energy conservation and privacy preservation. It is always critical for participants to consume as little energy as possible for data uploading. However, simply pursuing energy efficiency may lead to extreme disclosure of private informat...
Interpolating view and scene motion by dynamic view morphing We introduce the problem of view interpolation for dynamic scenes. Our solution to this problem extends the concept of view morphing and retains the practical advantages of that method. We are specifically concerned with interpolating between two reference views captured at different times, so that there is a missing interval of time between when the views were taken. The synthetic interpolations produced by our algorithm portray one possible physically-valid version of what transpired in the scene during the missing time. It is assumed that each object in the original scene underwent a series of rigid translations. Dynamic view morphing can work with widely-spaced reference views, sparse point correspondences, and uncalibrated cameras. When the camera-to-camera transformation can be determined, the synthetic interpolation will portray scene objects moving along straight-line, constant-velocity trajectories in world space
J-RoC: A Joint Routing and Charging scheme to prolong sensor network lifetime The emerging wireless charging technology creates a controllable and perpetual energy source to provide wireless power over distance. Schemes have been proposed to make use of wireless charging to prolong the sensor network lifetime. Unfortunately, existing schemes only passively replenish sensors that are deficient in energy supply, and cannot fully leverage the strengths of this technology. To address the limitation, we propose J-RoC - a practical and efficient Joint Routing and Charging scheme. Through proactively guiding the routing activities in the network and delivering energy to where it is needed, J-RoC not only replenishes energy into the network but also effectively improves the network energy utilization, thus prolonging the network lifetime. To evaluate the performance of the J-RoC scheme, we conduct experiments in a small-scale testbed and simulations in large-scale networks. Evaluation results demonstrate that J-RoC significantly elongates the network lifetime compared to existing wireless charging based schemes.
A recent survey of reversible watermarking techniques. The art of secretly hiding and communicating information has gained immense importance in the last two decades due to the advances in generation, storage, and communication technology of digital content. Watermarking is one of the promising solutions for tamper detection and protection of digital content. However, watermarking can cause damage to the sensitive information present in the cover work. Therefore, at the receiving end, the exact recovery of cover work may not be possible. Additionally, there exist certain applications that may not tolerate even small distortions in cover work prior to the downstream processing. In such applications, reversible watermarking instead of conventional watermarking is employed. Reversible watermarking of digital content allows full extraction of the watermark along with the complete restoration of the cover work. For the last few years, reversible watermarking techniques are gaining popularity because of its increasing applications in some important and sensitive areas, i.e., military communication, healthcare, and law-enforcement. Due to the rapid evolution of reversible watermarking techniques, a latest review of recent research in this field is highly desirable. In this survey, the performances of different reversible watermarking schemes are discussed on the basis of various characteristics of watermarking. However, the major focus of this survey is on prediction-error expansion based reversible watermarking techniques, whereby the secret information is hidden in the prediction domain through error expansion. Comparison of the different reversible watermarking techniques is provided in tabular form, and an analysis is carried out. Additionally, experimental comparison of some of the recent reversible watermarking techniques, both in terms of watermarking properties and computational time, is provided on a dataset of 300 images. Future directions are also provided for this potentially important field of watermarking.
The ApolloScape Dataset for Autonomous Driving Scene parsing aims to assign a class (semantic) label for each pixel in an image. It is a comprehensive analysis of an image. Given the rise of autonomous driving, pixel-accurate environmental perception is expected to be a key enabling technical piece. However, providing a large scale dataset for the design and evaluation of scene parsing algorithms, in particular for outdoor scenes, has been difficult. The per-pixel labelling process is prohibitively expensive, limiting the scale of existing ones. In this paper, we present a large-scale open dataset, ApolloScape, that consists of RGB videos and corresponding dense 3D point clouds. Comparing with existing datasets, our dataset has the following unique properties. The first is its scale, our initial release contains over 140K images - each with its per-pixel semantic mask, up to 1M is scheduled. The second is its complexity. Captured in various traffic conditions, the number of moving objects averages from tens to over one hundred (Figure 1). And the third is the 3D attribute, each image is tagged with high-accuracy pose information at cm accuracy and the static background point cloud has mm relative accuracy. We are able to label these many images by an interactive and efficient labelling pipeline that utilizes the high-quality 3D point cloud. Moreover, our dataset also contains different lane markings based on the lane colors and styles. We expect our new dataset can deeply benefit various autonomous driving related applications that include but not limited to 2D/3D scene understanding, localization, transfer learning, and driving simulation.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.05, 0, 0, 0, 0, 0, 0, 0
A novel adaptive dynamic programming based on tracking error for nonlinear discrete-time systems In this paper, to eliminate the tracking error by using adaptive dynamic programming (ADP) algorithms, a novel formulation of the value function is presented for the optimal tracking problem (TP) of nonlinear discrete-time systems. Unlike existing ADP methods, this formulation introduces the control input into the tracking error, and ignores the quadratic form of the control input directly, which makes the boundedness and convergence of the value function independent of the discount factor. Based on the proposed value function, the optimal control policy can be deduced without considering the reference control input. Value iteration (VI) and policy iteration (PI) methods are applied to prove the optimality of the obtained control policy, and to derive the monotonicity property and convergence of the iterative value function. Simulation examples realized with neural networks and the actor–critic structure are provided to verify the effectiveness of the proposed ADP algorithm.
Novel weighting-delay-based stability criteria for recurrent neural networks with time-varying delay. In this paper, a weighting-delay-based method is developed for the study of the stability problem of a class of recurrent neural networks (RNNs) with time-varying delay. Different from previous results, the delay interval [0, d(t)] is divided into some variable subintervals by employing weighting delays. Thus, new delay-dependent stability criteria for RNNs with time-varying delay are derived by applying this weighting-delay method, which are less conservative than previous results. The proposed stability criteria depend on the positions of weighting delays in the interval [0, d(t)] , which can be denoted by the weighting-delay parameters. Different weighting-delay parameters lead to different stability margins for a given system. Thus, a solution based on optimization methods is further given to calculate the optimal weighting-delay parameters. Several examples are provided to verify the effectiveness of the proposed criteria.
Optimal control using adaptive resonance theory and Q-learning. Motivated by recent advances in neurocognitive brain modeling research, a multiple model-based Q-learning structure is proposed for the optimal tracking control problem of time-varying discrete-time systems. This is achieved by utilizing a multiple-model scheme combined with adaptive resonance theory (ART). The ART algorithm generates sub-models based on the match-based clustering method. A responsibility signal governs the likelihood of contribution of each sub-model to the Q-function. The Q-function is learned using the batch least-squares algorithm. Simulation results are included to show the performance and the effectiveness of the overall proposed control method.
Generalized Actor-Critic Learning Optimal Control in Smart Home Energy Management This article is concerned with a new generalized actor-critic learning (GACL) optimal control method. It aims at the optimal energy control and management for smart home systems, which is expected to minimize the consumption cost for home users. In the present GACL optimal control method, it is the first time that three iteration processes, which are global iteration, local iteration, and interior...
Adaptive Learning in Tracking Control Based on the Dual Critic Network Design. In this paper, we present a new adaptive dynamic programming approach by integrating a reference network that provides an internal goal representation to help the system's learning and optimization. Specifically, we build the reference network on top of the critic network to form a dual critic network design that contains the detailed internal goal representation to help approximate the value funct...
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
A fast and elitist multiobjective genetic algorithm: NSGA-II Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their O(MN^3) computational complexity (where M is the number of objectives and N is the population size); (2) their non-elitism approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all of the above three difficulties. Specifically, a fast non-dominated sorting approach with O(MN^2) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best N solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm - two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed.
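A minimal sketch of the fast non-dominated sorting step described above (for a minimization problem, pure Python, without the crowding-distance or selection logic):

```python
def fast_nondominated_sort(objectives):
    """Return Pareto fronts (lists of solution indices) for a minimization problem.

    objectives: list of tuples, one tuple of objective values per solution.
    """
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    n = len(objectives)
    dominated_by = [[] for _ in range(n)]   # solutions that each p dominates
    domination_count = [0] * n              # how many solutions dominate p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if dominates(objectives[p], objectives[q]):
                dominated_by[p].append(q)
            elif dominates(objectives[q], objectives[p]):
                domination_count[p] += 1
        if domination_count[p] == 0:
            fronts[0].append(p)             # non-dominated solutions form front 0
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in dominated_by[p]:
                domination_count[q] -= 1
                if domination_count[q] == 0:
                    nxt.append(q)           # q belongs to the next front
        fronts.append(nxt)
        i += 1
    return fronts[:-1]                      # drop the trailing empty front

print(fast_nondominated_sort([(1, 5), (2, 2), (3, 1), (4, 4)]))  # [[0, 1, 2], [3]]
```

Each solution stores the set it dominates and a domination counter; peeling off counter-zero solutions front by front is what brings the sort down to the O(MN^2) complexity mentioned in the abstract.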
Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev...
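As a concrete illustration of the kind of network the paper describes, here is a LeNet-5-style model written with PyTorch (a modern library, not the paper's original implementation); the layer sizes follow the commonly cited configuration for 32x32 single-channel digit images and are illustrative.

```python
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    """LeNet-5-style convolutional network for 32x32 single-channel digit images."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),   # 6 x 14 x 14
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),  # 16 x 5 x 5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):                          # x: (batch, 1, 32, 32)
        return self.classifier(self.features(x))

logits = LeNet5()(torch.randn(4, 1, 32, 32))
print(logits.shape)                                # torch.Size([4, 10])
```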
Latent dirichlet allocation We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
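In the standard notation, the generative process the abstract refers to draws, for each document $d$, topic proportions $\theta_d \sim \mathrm{Dir}(\alpha)$, and then for each word position $n$ a topic assignment $z_{dn} \sim \mathrm{Mult}(\theta_d)$ followed by a word $w_{dn} \sim \mathrm{Mult}(\beta_{z_{dn}})$, where each topic $\beta_k$ is a distribution over the vocabulary; the variational EM procedure mentioned above then fits $\alpha$ and $\beta$ by maximizing a lower bound on the marginal likelihood of the corpus.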
Knowledge harvesting in the big-data era The proliferation of knowledge-sharing communities such as Wikipedia and the progress in scalable information extraction from Web and text sources have enabled the automatic construction of very large knowledge bases. Endeavors of this kind include projects such as DBpedia, Freebase, KnowItAll, ReadTheWeb, and YAGO. These projects provide automatically constructed knowledge bases of facts about named entities, their semantic classes, and their mutual relationships. They contain millions of entities and hundreds of millions of facts about them. Such world knowledge in turn enables cognitive applications and knowledge-centric services like disambiguating natural-language text, semantic search for entities and relations in Web and enterprise data, and entity-oriented analytics over unstructured contents. Prominent examples of how knowledge bases can be harnessed include the Google Knowledge Graph and the IBM Watson question answering system. This tutorial presents state-of-the-art methods, recent advances, research opportunities, and open challenges along this avenue of knowledge harvesting and its applications. Particular emphasis will be on the twofold role of knowledge bases for big-data analytics: using scalable distributed algorithms for harvesting knowledge from Web and text sources, and leveraging entity-centric knowledge for deeper interpretation of and better intelligence with Big Data.
Labels and event processes in the Asbestos operating system Asbestos, a new operating system, provides novel labeling and isolation mechanisms that help contain the effects of exploitable software flaws. Applications can express a wide range of policies with Asbestos's kernel-enforced labels, including controls on interprocess communication and system-wide information flow. A new event process abstraction defines lightweight, isolated contexts within a single process, allowing one process to act on behalf of multiple users while preventing it from leaking any single user's data to others. A Web server demonstration application uses these primitives to isolate private user data. Since the untrusted workers that respond to client requests are constrained by labels, exploited workers cannot directly expose user data except as allowed by application policy. The server application requires 1.4 memory pages per user for up to 145,000 users and achieves connection rates similar to Apache, demonstrating that additional security can come at an acceptable cost.
GROPING: Geomagnetism and cROwdsensing Powered Indoor NaviGation Although a large number of WiFi fingerprinting based indoor localization systems have been proposed, our field experience with Google Maps Indoor (GMI), the only system available for public testing, shows that it is far from mature for indoor navigation. In this paper, we first report our field studies with GMI, as well as experiment results aiming to explain our unsatisfactory GMI experience. Then motivated by the obtained insights, we propose GROPING as a self-contained indoor navigation system independent of any infrastructural support. GROPING relies on geomagnetic fingerprints that are far more stable than WiFi fingerprints, and it exploits crowdsensing to construct floor maps rather than expecting individual venues to supply digitized maps. Based on our experiments with 20 participants in various floors of a big shopping mall, GROPING is able to deliver a sufficient accuracy for localization and thus provides smooth navigation experience.
5G Virtualized Multi-access Edge Computing Platform for IoT Applications. The fifth generation (5G) network, implemented using Virtualized Multi-access Edge Computing (vMEC), Network Function Virtualization (NFV), and Software Defined Networking (SDN) technologies, is a flexible and resilient network that supports various Internet of Things (IoT) devices. While NFV provides flexibility by allowing network functions to be dynamically deployed and inter-connected, vMEC provides intelligence at the edge of the mobile network, reducing latency and increasing the available capacity. With the diverse development of networking applications, the proposed vMEC, which uses Container-based Virtualization Technology (CVT) as a gateway to IoT devices for flow control in its scheduling and analysis methods, can effectively increase application Quality of Service (QoS). In this work, the proposed IoT gateway is analyzed, including the combined effect of simultaneously deploying Virtual Network Functions (VNFs) and vMEC applications on a single network infrastructure; the platform exhibits the low latency, high bandwidth, and agility needed to connect devices at large scale. The proposed platform efficiently exploits resources from edge and cloud computing, and with IoT applications that adapt to network conditions it reduces end-to-end network latency by an average of 30%.
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education is highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets, to compensate for robot’s limitations, however, this study applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. Robot’s behavioral style (social or neutral) hardly differed overall, however, seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
1.2, 0.2, 0.2, 0.2, 0.066667, 0, 0, 0, 0, 0, 0, 0, 0, 0
Wireless energy harvesting for the Internet of Things The Internet of Things (IoT) is an emerging computing concept that describes a structure in which everyday physical objects, each provided with unique identifiers, are connected to the Internet without requiring human interaction. Long-term and self-sustainable operation are key components for realization of such a complex network, and entail energy-aware devices that are potentially capable of harvesting their required energy from ambient sources. Among different energy harvesting methods, such as vibration, light, and thermal energy extraction, wireless energy harvesting (WEH) has proven to be one of the most promising solutions by virtue of its simplicity, ease of implementation, and availability. In this article, we present an overview of enabling technologies for efficient WEH, analyze the lifetime of WEH-enabled IoT devices, and briefly study the future trends in the design of efficient WEH systems and research challenges that lie ahead.
Mobility in wireless sensor networks - Survey and proposal. Targeting an increasing number of potential application domains, wireless sensor networks (WSN) have been the subject of intense research, in an attempt to optimize their performance while guaranteeing reliability in highly demanding scenarios. However, hardware constraints have limited their application, and real deployments have demonstrated that WSNs have difficulties in coping with complex communication tasks – such as mobility – in addition to application-related tasks. Mobility support in WSNs is crucial for a very high percentage of application scenarios and, most notably, for the Internet of Things. It is, thus, important to know the existing solutions for mobility in WSNs, identifying their main characteristics and limitations. With this in mind, we firstly present a survey of models for mobility support in WSNs. We then present the Network of Proxies (NoP) assisted mobility proposal, which relieves resource-constrained WSN nodes from the heavy procedures inherent to mobility management. The presented proposal was implemented and evaluated in a real platform, demonstrating not only its advantages over conventional solutions, but also its very good performance in the simultaneous handling of several mobile nodes, leading to high handoff success rate and low handoff time.
Tag-based cooperative data gathering and energy recharging in wide area RFID sensor networks The Wireless Identification and Sensing Platform (WISP) conjugates the identification potential of the RFID technology and the sensing and computing capability of the wireless sensors. Practical issues, such as the need of periodically recharging WISPs, challenge the effective deployment of large-scale RFID sensor networks (RSNs) consisting of RFID readers and WISP nodes. In this view, the paper proposes cooperative solutions to energize the WISP devices in a wide-area sensing network while reducing the data collection delay. The main novelty is the fact that both data transmissions and energy transfer are based on the RFID technology only: RFID mobile readers gather data from the WISP devices, wirelessly recharge them, and mutually cooperate to reduce the data delivery delay to the sink. Communication between mobile readers relies on two proposed solutions: a tag-based relay scheme, where RFID tags are exploited to temporarily store sensed data at pre-determined contact points between the readers; and a tag-based data channel scheme, where the WISPs are used as a virtual communication channel for real time data transfer between the readers. Both solutions require: (i) clustering the WISP nodes; (ii) dimensioning the number of required RFID mobile readers; (iii) planning the tour of the readers under the energy and time constraints of the nodes. A simulative analysis demonstrates the effectiveness of the proposed solutions when compared to non-cooperative approaches. Differently from classic schemes in the literature, the solutions proposed in this paper better cope with scalability issues, which is of utmost importance for wide area networks.
Improving charging capacity for wireless sensor networks by deploying one mobile vehicle with multiple removable chargers. Wireless energy transfer is a promising technology to prolong the lifetime of wireless sensor networks (WSNs), by employing charging vehicles to replenish energy to lifetime-critical sensors. Existing studies on sensor charging assumed that one or multiple charging vehicles are deployed. Such an assumption may have limitations for a real sensor network. On one hand, it is usually insufficient to employ just one vehicle to charge many sensors in a large-scale sensor network due to the limited charging capacity of the vehicle or energy expirations of some sensors prior to the arrival of the charging vehicle. On the other hand, although the employment of multiple vehicles can significantly improve the charging capability, it is too costly in terms of the initial investment and maintenance costs of these vehicles. In this paper, we propose a novel charging model in which a charging vehicle can carry multiple low-cost removable chargers and each charger is powered by a portable high-volume battery. When there are energy-critical sensors to be charged, the vehicle can carry the chargers to charge multiple sensors simultaneously, by placing one portable charger in the vicinity of one sensor. Under this novel charging model, we study the scheduling problem of the charging vehicle so that both the dead duration of sensors and the total travel distance of the mobile vehicle per tour are minimized. Since this problem is NP-hard, we instead propose a (3+ϵ)-approximation algorithm if the residual lifetime of each sensor can be ignored; otherwise, we devise a novel heuristic algorithm, where ϵ is a given constant with 0 < ϵ ≤ 1. Finally, we evaluate the performance of the proposed algorithms through experimental simulations. Experimental results show that the performance of the proposed algorithms is very promising.
Speed control of mobile chargers serving wireless rechargeable networks. Wireless rechargeable networks have attracted increasing research attention in recent years. For charging service, a mobile charger is often employed to move across the network and charge all network nodes. To reduce the charging completion time, most existing works have used the “move-then-charge” model where the charger first moves to specific spots and then starts charging nodes nearby. As a result, these works often aim to reduce the moving delay or charging delay at the spots. However, the charging opportunity on the move is largely overlooked because the charger can charge network nodes while moving, which, as we analyze in this paper, has the potential to greatly reduce the charging completion time. The major challenge in exploiting this charging opportunity is the setting of the moving speed of the charger. When the charger moves slowly, the charging delay will be reduced (more energy will be charged during the movement) but the moving delay will increase. To deal with this challenge, we formulate the problem of delay minimization as a Traveling Salesman Problem with Speed Variations (TSP-SV) which jointly considers both charging and moving delay. We further solve the problem using linear programming to generate (1) the moving path of the charger, (2) the moving speed variations on the path and (3) the stay time at each charging spot. We also discuss possible ways to reduce the calculation complexity. Extensive simulation experiments are conducted to study the delay performance under various scenarios. The results demonstrate that our proposed method achieves a much shorter completion time compared to the state-of-the-art work.
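A heavily reduced version of the linear program can be sketched as follows: with the visiting order fixed, choose per-segment travel times (bounded by the speed limits) and per-spot stay times so that every node receives its required energy while the total completion time is minimized. The charging-rate matrix, demands and segment lengths below are made-up numbers, and the full TSP-SV formulation (which also selects the path) is not reproduced.

```python
# Reduced LP sketch of "charge while moving + charge while parked" with a
# fixed visiting order; all numerical values are assumptions.
import numpy as np
from scipy.optimize import linprog

seg_len = np.array([30.0, 20.0, 25.0])       # metres of each path segment
v_min, v_max = 0.5, 2.0                      # charger speed limits (m/s)
# move_rate[i, j]: charging rate received by node i while the charger
# traverses segment j; stay_rate[i, i]: rate while parked at node i (watts).
move_rate = np.array([[0.6, 0.1, 0.0],
                      [0.1, 0.5, 0.2],
                      [0.0, 0.2, 0.7]])
stay_rate = np.diag([1.0, 1.0, 1.0])
demand = np.array([40.0, 30.0, 50.0])        # joules each node must receive

# variables x = [t_move_1..3, t_stay_1..3]; minimise total completion time
c = np.ones(6)
A_ub = -np.hstack([move_rate, stay_rate])    # energy >= demand  ->  -energy <= -demand
b_ub = -demand
bounds = [(l / v_max, l / v_min) for l in seg_len] + [(0, None)] * 3
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x[:3], res.x[3:], res.fun)         # travel times, stay times, total delay
```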
A Prediction-Based Charging Policy and Interference Mitigation Approach in the Wireless Powered Internet of Things The Internet of Things (IoT) technology has recently drawn more attention due to its ability to achieve the interconnection of massive physical devices. However, how to provide a reliable power supply to energy-constrained devices and improve the energy efficiency in the wireless powered IoT (WP-IoT) is a twofold challenge. In this paper, we develop a novel wireless power transmission (WPT) system, where an unmanned aerial vehicle (UAV) equipped with a radio frequency energy transmitter charges the IoT devices. A machine learning framework of echo state networks together with an improved k-means clustering algorithm is used to predict the energy consumption and cluster all the sensor nodes for the next period, thus automatically determining the charging strategy. The energy obtained from the UAV by WPT supports the IoT devices to communicate with each other. In order to improve the energy efficiency of the WP-IoT system, the interference mitigation problem is modeled as a mean field game, where an optimal power control policy is presented to adapt to and analyze the large number of sensor nodes randomly deployed in WP-IoT. The numerical results verify that our proposed dynamic charging policy effectively reduces the data packet loss rate, and that the optimal power control policy greatly mitigates the interference and improves the energy efficiency of the whole network.
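Only the clustering step is easy to sketch: the paper pairs an echo state network (to predict each node's next-period consumption) with an improved k-means variant, whereas the snippet below uses plain scikit-learn k-means on made-up position/consumption features, so it should be read as a rough illustration rather than the proposed algorithm.

```python
# Standard k-means on (x, y, predicted consumption) as a stand-in for the
# paper's ESN + improved k-means pipeline; all data are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
positions = rng.uniform(0, 100, size=(50, 2))            # node coordinates
predicted_energy = rng.uniform(0.1, 1.0, size=(50, 1))   # assumed ESN output
features = np.hstack([positions, 50 * predicted_energy]) # crude feature scaling

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
for c in range(4):
    members = np.flatnonzero(km.labels_ == c)
    # the UAV would visit cluster c and prioritise its hungriest members
    print(c, km.cluster_centers_[c][:2], members[:5])
```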
Design of Self-sustainable Wireless Sensor Networks with Energy Harvesting and Wireless Charging Energy provisioning plays a key role in the sustainable operations of Wireless Sensor Networks (WSNs). Recent efforts deploy multi-source energy harvesting sensors to utilize ambient energy. Meanwhile, wireless charging is a reliable energy source not affected by spatial-temporal ambient dynamics. This article integrates multiple energy provisioning strategies and adaptive adjustment to accomplish self-sustainability under complex weather conditions. We design and optimize a three-tier framework with the first two tiers focusing on the planning problems of sensors with various types and distributed energy storage powered by environmental energy. Then we schedule the Mobile Chargers (MC) between different charging activities and propose an efficient 4-factor approximation algorithm. Finally, we adaptively adjust the algorithms to capture real-time energy profiles and jointly optimize those correlated modules. Our extensive simulations demonstrate a significant improvement of network lifetime, an increase of harvested energy (15%), a reduction of network cost (30%), and an improvement of the charging capability of the MC by 100%.
Energy Harvesting and Wireless Transfer in Sensor Network Applications: Concepts and Experiences. Advances in micro-electronics and miniaturized mechanical systems are redefining the scope and extent of the energy constraints found in battery-operated wireless sensor networks (WSNs). On one hand, ambient energy harvesting may prolong the systems’ lifetime or possibly enable perpetual operation. On the other hand, wireless energy transfer allows systems to decouple the energy sources from the sensing locations, enabling deployments previously unfeasible. As a result of applying these technologies to WSNs, the assumption of a finite energy budget is replaced with that of potentially infinite, yet intermittent, energy supply, profoundly impacting the design, implementation, and operation of WSNs. This article discusses these aspects by surveying paradigmatic examples of existing solutions in both fields and by reporting on real-world experiences found in the literature. The discussion is instrumental in providing a foundation for selecting the most appropriate energy harvesting or wireless transfer technology based on the application at hand. We conclude by outlining research directions originating from the fundamental change of perspective that energy harvesting and wireless transfer bring about.
NETWRAP: An NDN Based Real-Time Wireless Recharging Framework for Wireless Sensor Networks Using vehicles equipped with wireless energy transmission technology to recharge sensor nodes over the air is a game-changer for traditional wireless sensor networks. The recharging policy regarding when to recharge which sensor nodes critically impacts the network performance. So far only a few works have studied such recharging policy for the case of using a single vehicle. In this paper, we propose NETWRAP, an NDN-based Real-Time Wireless Recharging Protocol for dynamic wireless recharging in sensor networks. The real-time recharging framework supports single or multiple mobile vehicles. Employing multiple mobile vehicles provides more scalability and robustness. To efficiently deliver sensor energy status information to vehicles in real-time, we leverage concepts and mechanisms from named data networking (NDN) and design energy monitoring and reporting protocols. We derive theoretical results on the energy neutral condition and the minimum number of mobile vehicles required for perpetual network operations. Then we study how to minimize the total traveling cost of vehicles while guaranteeing all the sensor nodes can be recharged before their batteries deplete. We formulate the recharge optimization problem into a Multiple Traveling Salesman Problem with Deadlines (m-TSP with Deadlines), which is NP-hard. To accommodate the dynamic nature of node energy conditions with low overhead, we present an algorithm that selects the node with the minimum weighted sum of traveling time and residual lifetime. Our scheme not only improves network scalability but also ensures the perpetual operation of networks. Extensive simulation results demonstrate the effectiveness and efficiency of the proposed design. The results also validate the correctness of the theoretical analysis and show significant improvements that cut the number of nonfunctional nodes by half compared to the static scheme while maintaining the network overhead at the same level.
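The low-overhead selection rule mentioned above (pick the node with the minimum weighted sum of traveling time and residual lifetime) is simple enough to sketch directly; the weight alpha and the data layout are assumptions, not values taken from the paper.

```python
# Sketch of the node-selection rule: minimum weighted sum of travelling
# time and residual lifetime among the reported nodes.
import math

def next_node(vehicle_pos, speed, nodes, alpha=0.5):
    """nodes: list of dicts {'id': ..., 'pos': (x, y), 'residual_lifetime': s}."""
    def score(n):
        travel_time = math.dist(vehicle_pos, n['pos']) / speed
        return alpha * travel_time + (1 - alpha) * n['residual_lifetime']
    return min(nodes, key=score)

nodes = [{'id': 1, 'pos': (10, 0), 'residual_lifetime': 300},
         {'id': 2, 'pos': (50, 20), 'residual_lifetime': 60}]
print(next_node((0, 0), speed=2.0, nodes=nodes)['id'])
```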
A fast and elitist multiobjective genetic algorithm: NSGA-II Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their O(MN^3) computational complexity (where M is the number of objectives and N is the population size); (2) their non-elitism approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all of the above three difficulties. Specifically, a fast non-dominated sorting approach with O(MN^2) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best N solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm - two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed.
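For reference, the fast non-dominated sorting step (the O(MN^2) procedure summarized above, for minimization) can be sketched compactly in Python; crowding-distance computation and the elitist selection operator are omitted.

```python
# Fast non-dominated sorting (minimisation): compute, for each solution,
# the set it dominates and its domination count, then peel off fronts.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(objs):
    n = len(objs)
    S = [[] for _ in range(n)]          # solutions dominated by i
    counts = [0] * n                    # how many solutions dominate i
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if dominates(objs[p], objs[q]):
                S[p].append(q)
            elif dominates(objs[q], objs[p]):
                counts[p] += 1
        if counts[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in S[p]:
                counts[q] -= 1
                if counts[q] == 0:
                    nxt.append(q)
        i += 1
        fronts.append(nxt)
    return fronts[:-1]                  # drop the trailing empty front

print(fast_non_dominated_sort([(1, 5), (2, 2), (3, 1), (4, 4), (2, 6)]))
# -> [[0, 1, 2], [3, 4]]
```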
New inspirations in swarm intelligence: a survey The growing complexity of real-world problems has motivated computer scientists to search for efficient problem-solving methods. Evolutionary computation and swarm intelligence meta-heuristics are outstanding examples that nature has been an unending source of inspiration. The behaviour of bees, bacteria, glow-worms, fireflies, slime moulds, cockroaches, mosquitoes and other organisms have inspired swarm intelligence researchers to devise new optimisation algorithms. This tutorial highlights the most recent nature-based inspirations as metaphors for swarm intelligence meta-heuristics. We describe the biological behaviours from which a number of computational algorithms were developed. Also, the most recent and important applications and the main features of such meta-heuristics are reported.
Robust ear based authentication using Local Principal Independent Components This paper presents ear based authentication using Local Principal Independent Components (LPIC), an extension of PCA. As PCA is a global approach dealing with all pixel intensities, it is difficult to get finer details from the ear image. The concept of information sets is introduced in this paper so as to have leverage over the local information. These sets are based on the granularization of the ear image in the form of windows. The features based on these sets allow us to change the local information which goes into LPIC as the input. Thus LPIC not only uses this local information but also helps to reduce the dimensionality of the derived features to far fewer dimensions than can be achieved with PCA. For the extraction of sparse information from the ear, features such as Effective Information (EI), Energy Feature (EF), Sigmoid Feature (SF), and Multi Quadratic Feature (MQD) are derived and then LPIC is applied to obtain the reduced number of features. An Inner Product Classifier (IPC) is developed for the classification of these features. The experiments carried out on constrained and unconstrained databases show that LPIC is effective not only under ideal conditions but also in unconstrained environments.
Distributed Adaptive Fuzzy Containment Control of Stochastic Pure-Feedback Nonlinear Multiagent Systems With Local Quantized Controller and Tracking Constraint This paper studies the distributed adaptive fuzzy containment tracking control for a class of high-order stochastic pure-feedback nonlinear multiagent systems with multiple dynamic leaders and performance constraint requirement. The control inputs are quantized by hysteresis quantizers. Mean value theorems are used to transfer the nonaffine systems into affine forms and a nonlinear decomposition is employed to solve the quantized input control problem. With a novel structure barrier Lyapunov function, the distributed control strategy is developed. It is strictly proved that the outputs of the followers converge to the convex hull spanned by the multiple dynamic leaders, the containment tracking errors satisfy the performance constraint requirement and the resulting leader-following multiagent system is stable in probability based on Lyapunov stability theory. At last, simulation is provided to show the validity and the advantages of the proposed techniques.
Active Suspension Control of Quarter-Car System With Experimental Validation A reliable, efficient, and simple control is presented and validated for a quarter-car active suspension system equipped with an electro-hydraulic actuator. Unlike the existing techniques, this control does not use any function approximation, e.g., neural networks (NNs) or fuzzy-logic systems (FLSs), while the unmodeled dynamics, including the hydraulic actuator behavior, can be accommodated effectively. Hence, the heavy computational costs and tedious parameter tuning phase can be remedied. Moreover, both the transient and steady-state suspension performance can be retained by incorporating prescribed performance functions (PPFs) into the control implementation. This guaranteed performance is particularly useful for guaranteeing the safe operation of suspension systems. Apart from theoretical studies, some practical considerations of control implementation and several parameter tuning guidelines are suggested. Experimental results based on a practical quarter-car active suspension test-rig demonstrate that this control can obtain a superior performance and has better computational efficiency over several other control methods.
1.105262
0.1
0.1
0.1
0.1
0.1
0.1
0.05
0.013958
0
0
0
0
0
Adaptive Neural Control of Uncertain MIMO Nonlinear Systems With State and Input Constraints. An adaptive neural control strategy for multiple input multiple output nonlinear systems with various constraints is presented in this paper. To deal with the nonsymmetric input nonlinearity and the constrained states, the proposed adaptive neural control is combined with the backstepping method, radial basis function neural network, barrier Lyapunov function (BLF), and disturbance observer. By en...
A vision-based formation control framework We describe a framework for cooperative control of a group of nonholonomic mobile robots that allows us to build complex systems from simple controllers and estimators. The resultant modular approach is attractive because of the potential for reusability. Our approach to composition also guarantees stability and convergence in a wide range of tasks. There are two key features in our approach: 1) a paradigm for switching between simple decentralized controllers that allows for changes in formation; 2) the use of information from a single type of sensor, an omnidirectional camera, for all our controllers. We describe estimators that abstract the sensory information at different levels, enabling both decentralized and centralized cooperative control. Our results include numerical simulations and experiments using a testbed consisting of three nonholonomic robots.
Distributed Control of Spatially Reversible Interconnected Systems with Boundary Conditions We present a class of spatially interconnected systems with boundary conditions that have close links with their spatially invariant extensions. In particular, well-posedness, stability, and performance of the extension imply the same characteristics for the actual, finite extent system. In turn, existing synthesis methods for control of spatially invariant systems can be extended to this class. The relation between the two kinds of systems is proved using ideas based on the "method of images" of partial differential equations theory and uses symmetry properties of the interconnection as a key tool.
Dynamic Learning From Neural Control for Strict-Feedback Systems With Guaranteed Predefined Performance. This paper focuses on dynamic learning from neural control for a class of nonlinear strict-feedback systems with predefined tracking performance attributes. To reduce the number of neural network (NN) approximators used and make the convergence of neural weights verified easily, state variables are introduced to transform the state-feedback control of the original strict-feedback systems into the ...
Adaptive Fuzzy Decentralized Control for a Class of Strong Interconnected Nonlinear Systems With Unmodeled Dynamics. The state-feedback decentralized stabilization problem is considered for interconnected nonlinear systems in the presence of unmodeled dynamics. The functional relationship in affine form between the strong interconnected functions and error signals is established, which makes backstepping-based fuzzy control successfully generalized to strong interconnected nonlinear systems. By combining adaptiv...
Adaptive Fuzzy Hierarchical Sliding-Mode Control for a Class of MIMO Nonlinear Time-Delay Systems With Input Saturation. In this paper, an adaptive fuzzy hierarchical sliding-mode control method for a class of multiinput multioutput unknown nonlinear time-delay systems with input saturation is proposed. The studied system is first transformed into an equivalent system. Subsequently, based on sliding-mode control technology and the concept of hierarchical design, a set of adaptive fuzzy hierarchical sliding-mode cont...
Adaptive Fuzzy Containment Control for Multiple Uncertain Euler–Lagrange Systems With an Event-Based Observer This paper considers the containment control problem for multiple Euler–Lagrange systems with unknown nonlinear dynamics, where the dynamics of the leaders are different from those of the followers. In addition, some followers cannot obtain information of the leaders owing to the limited communication range. We first adopt an event-based observer to estimate a trajectory inside the convex hull spanned by states of the leaders, in which continuous communication can be avoided as well. Then, we further utilize the fuzzy logic systems to approximate the unknown nonlinear dynamics and propose an adaptive control scheme. Under the proposed scheme, we can ensure that the states of the followers can converge to the convex hull formed by these states of the leaders. Finally, a simulation example is given to validate the effectiveness of the proposed control scheme.
On Stability and Stabilization of T–S Fuzzy Systems With Time-Varying Delays via Quadratic Fuzzy Lyapunov Matrix This article proposes improved stability and stabilization criteria for Takagi–Sugeno (T–S) fuzzy systems with time-varying delays. First, a novel augmented fuzzy Lyapunov–Krasovskii functional (LKF) including the quadratic fuzzy Lyapunov matrix is constructed, which can provide much information of T–S fuzzy systems and help to achieve the larger allowable delay upper bounds. Then, improved delay-dependent stability and stabilization criteria are derived for the studied systems. Compared with the traditional methods, since the third-order Bessel–Legendre inequality and the extended reciprocally convex matrix inequality are well employed in the derivative of the constructed LKF to give tighter bounds of the single integral terms, the conservatism of derived criteria is further reduced. In addition, the quadratic fuzzy Lyapunov matrix introduced in LKF, which contains the quadratic membership functions, is also an important reason for obtaining less conservative results. Finally, numerical examples demonstrate that the proposed method is less conservative than some existing ones and the studied system can be well controlled by the designed controller.
Fuzzy logic in control systems: fuzzy logic controller. I.
The Whale Optimization Algorithm. The Whale Optimization Algorithm inspired by humpback whales is proposed. The WOA algorithm is benchmarked on 29 well-known test functions. The results on the unimodal functions show the superior exploitation of WOA. The exploration ability of WOA is confirmed by the results on multimodal functions. The results on structural design problems confirm the performance of WOA in practice. This paper proposes a novel nature-inspired meta-heuristic optimization algorithm, called Whale Optimization Algorithm (WOA), which mimics the social behavior of humpback whales. The algorithm is inspired by the bubble-net hunting strategy. WOA is tested with 29 mathematical optimization problems and 6 structural design problems. Optimization results prove that the WOA algorithm is very competitive compared to the state-of-art meta-heuristic algorithms as well as conventional methods. The source codes of the WOA algorithm are publicly available at http://www.alimirjalili.com/WOA.html
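The position-update rules behind the bubble-net metaphor (encircling the best solution, a logarithmic spiral move, and random search for exploration) can be sketched as follows on a toy sphere function; the parameter choices are the commonly quoted defaults rather than tuned values, so this is an illustration rather than a reference implementation.

```python
# Minimal WOA sketch: encircle the best whale, spiral bubble-net update,
# and random search when the coefficient vector A is large.
import numpy as np

def woa(fitness, dim=5, n_whales=20, iters=200, lb=-10, ub=10, b=1.0):
    rng = np.random.default_rng(0)
    X = rng.uniform(lb, ub, (n_whales, dim))
    best = min(X, key=fitness).copy()
    for t in range(iters):
        a = 2 - 2 * t / iters                     # linearly decreases 2 -> 0
        for i in range(n_whales):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):         # exploit: encircle the best whale
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                             # explore: move around a random whale
                    rand = X[rng.integers(n_whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                                 # spiral bubble-net update
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
            if fitness(X[i]) < fitness(best):
                best = X[i].copy()
    return best

print(woa(lambda x: float(np.sum(x ** 2))))       # should approach the zero vector
```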
Competition in Service Industries We analyze a general market for an industry of competing service facilities. Firms differentiate themselves by their price levels and the waiting time their customers experience, as well as different attributes not determined directly through competition. Our model therefore assumes that the expected demand experienced by a given firm may depend on all of the industry's price levels as well as a (steady-state) waiting-time standard, which each of the firms announces and commits itself to by proper adjustment of its capacity level. We focus primarily on a separable specification, which in addition is linear in the prices. (Alternative nonseparable or nonlinear specifications are discussed in the concluding section.) We define a firm's service level as the difference between an upper-bound benchmark for the waiting-time standard (w̄) and the firm's actual waiting-time standard. Different types of competition and the resulting equilibrium behavior may arise, depending on the industry dynamics through which the firms select their strategic choices. In one case, firms may initially select their waiting-time standards, followed by a selection of their prices in a second stage (service-level first). Alternatively, the sequence of strategic choices may be reversed (price first) or, as a third alternative, the firms may make their choices simultaneously (simultaneous competition). We model each of the service facilities as a single-server M/M/1 queueing facility, which receives a given firm-specific price for each customer served. Each firm incurs a given cost per customer served as well as cost per unit of time proportional to its adopted capacity level.
Optimal Scheduling for Quality of Monitoring in Wireless Rechargeable Sensor Networks Wireless Rechargeable Sensor Network (WRSN) is an emerging technology to address the energy constraint in sensor networks. The protocol design in WRSN is extremely challenging due to the complicated interactions between rechargeable sensor nodes and readers, capable of mobility and functioning as energy distributors and data collectors. In this paper, we for the first time investigate the optimal scheduling problem in WRSN for stochastic event capture, i.e., how to jointly mobilize the readers for energy distribution and schedule sensor nodes for optimal quality of monitoring (QoM). We analyze the QoM for three application scenarios: i) the reader travels at a fixed speed to recharge sensor nodes and sensor nodes consume the collected energy in an aggressive way, ii) the reader stops to recharge sensor nodes for a predefined time during its periodic traveling and sensor nodes deplete energy aggressively, iii) the reader stops to recharge sensor nodes but sensor nodes can adopt optimal duty cycle scheduling for maximal QoM. We provide analytical results for achieving the optimal QoM under arbitrary parameter settings. Extensive simulation results are offered to demonstrate the correctness and effectiveness of our results.
A Tutorial on UAVs for Wireless Networks: Applications, Challenges, and Open Problems. The use of flying platforms such as unmanned aerial vehicles (UAVs), popularly known as drones, is rapidly growing in a wide range of wireless networking applications. In particular, with their inherent attributes such as mobility, flexibility, and adaptive altitude, UAVs admit several key potential applications in wireless systems. On the one hand, UAVs can be used as aerial base stations to enhance coverage, capacity, reliability, and energy efficiency of wireless networks. For instance, UAVs can be deployed to complement existing cellular systems by providing additional capacity to hotspot areas as well as to provide network coverage in emergency and public safety situations. On the other hand, UAVs can operate as flying mobile terminals within the cellular networks. In this paper, a comprehensive tutorial on the potential benefits and applications of UAVs in wireless communications is presented. Moreover, the important challenges and the fundamental tradeoffs in UAV-enabled wireless networks are thoroughly investigated. In particular, the key UAV challenges such as three-dimensional deployment, performance analysis, air-to-ground channel modeling, and energy efficiency are explored along with representative results. Then, fundamental open problems and potential research directions pertaining to wireless communications and networking with UAVs are introduced. To cope with the open research problems, various analytical frameworks and mathematical tools such as optimization theory, machine learning, stochastic geometry, transport theory, and game theory are described. The use of such tools for addressing unique UAV problems is also presented. In a nutshell, this tutorial provides key guidelines on how to analyze, optimize, and design UAV-based wireless communication systems.
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education is highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets, to compensate for robot’s limitations, however, this study applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. Robot’s behavioral style (social or neutral) hardly differed overall, however, seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
1.059406
0.055125
0.055125
0.055125
0.032562
0.018375
0.005
0.002
0.000357
0
0
0
0
0
Licensed and Unlicensed Spectrum Management for Cognitive M2M: A Context-Aware Learning Approach Edge computing has emerged as a promising solution for relieving the tension between resource-limited machine type devices (MTDs) and computational-intensive tasks. To realize successful task offloading with limited spectrum, we focus on the cognitive machine-to-machine (CM2M) paradigm which enables a massive number of MTDs to either opportunistically use the licensed spectrum that is temporarily available, or to exploit the under-utilized unlicensed spectrum. We formulate the channel selection problem with both licensed and unlicensed spectrum as an adversarial multi-armed bandit (MAB) problem, and combine the exponential-weight algorithm for exploration and exploitation (EXP3) and Lyapunov optimization to develop a context-aware channel selection algorithm named C²-EXP3. C²-EXP3 can learn the long-term optimal channel selection strategy based on only local information, while dynamically achieving service reliability awareness, energy awareness, and backlog awareness. Specifically, we provide a rigorous theoretical analysis and prove that C²-EXP3 can achieve a bounded deviation from the optimal performance with global state information. Four existing algorithms are compared with C²-EXP3 to demonstrate its effectiveness and reliability under various simulation settings.
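The EXP3 building block named above is standard and easy to sketch; note that C²-EXP3 additionally couples it with Lyapunov optimization and context (reliability/energy/backlog) awareness, none of which is reproduced here, and the reward function below is a placeholder.

```python
# Plain EXP3 for adversarial channel selection: exponential weights with
# importance-weighted reward estimates and uniform exploration.
import math, random

def exp3(num_channels, rounds, reward, gamma=0.1):
    """reward(t, ch) must return a value in [0, 1]; adversarial rewards are fine."""
    w = [1.0] * num_channels
    choices = []
    for t in range(rounds):
        total = sum(w)
        probs = [(1 - gamma) * wi / total + gamma / num_channels for wi in w]
        ch = random.choices(range(num_channels), weights=probs)[0]
        x = reward(t, ch)
        w[ch] *= math.exp(gamma * (x / probs[ch]) / num_channels)  # importance-weighted update
        m = max(w)
        w = [wi / m for wi in w]                                   # keep weights bounded
        choices.append(ch)
    return choices

# toy example: channel 2 is usually best, with occasional adversarial dips
picks = exp3(4, 500, lambda t, ch: 0.9 if ch == 2 and t % 7 else random.random() * 0.5)
print(picks[-10:])
```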
IoT-U: Cellular Internet-of-Things Networks Over Unlicensed Spectrum. In this paper, we consider an uplink cellular Internet-of-Things (IoT) network, where a cellular user (CU) can serve as the mobile data aggregator for a cluster of IoT devices. To be specific, the IoT devices can either transmit the sensory data to the base station (BS) directly by cellular communications, or first aggregate the data to a CU through machine-to-machine (M2M) communications before t...
Cognitive Capacity Harvesting Networks: Architectural Evolution Toward Future Cognitive Radio Networks. Cognitive radio technologies enable users to opportunistically access unused licensed spectrum and are viewed as a promising way to deal with the current spectrum crisis. Over the last 15 years, cognitive radio technologies have been extensively studied from algorithmic design to practical implementation. One pressing and fundamental problem is how to integrate cognitive radios into current wirele...
Chinese Remainder Theorem-Based Sequence Design for Resource Block Assignment in Relay-Assisted Internet-of-Things Communications. Terminal relays are expected to play a key role in facilitating the communication between base stations and low-cost power-constrained cellular Internet of Things (IoT) devices. However, these mobile relays require a mechanism by which they can autonomously assign the available resource blocks (RBs) to their assisted IoT devices in the absence of channel state information (CSI) and with minimal as...
A Bio-Inspired Solution to Cluster-Based Distributed Spectrum Allocation in High-Density Cognitive Internet of Things With the emergence of the Internet of Things (IoT), where any device is able to connect to the Internet and monitor/control physical elements, several applications were made possible, such as smart cities, smart health care, and smart transportation. The wide range of the requirements of these applications drives traditional IoT to cognitive IoT (CIoT) that supports smart resource allocation, automatic network operation, and intelligent service provisioning. To enable CIoT, there is a need for flexible and reliable wireless communication. In this paper, we propose to combine cognitive radio (CR) with a biological mechanism called reaction–diffusion to provide efficient spectrum allocation for CIoT. We first formulate the quantization of the qualitative connectivity-flexibility tradeoff problem to determine the optimal cluster size (i.e., number of cluster members) that maximizes clustered throughput but minimizes communication delay. Then, we propose a bio-inspired algorithm which is used by CIoT devices to form clusters in a distributed manner. We compute the optimal values of the proposed algorithm's parameters (e.g., contention window) to increase the network's adaptation to different scenarios (e.g., spectrum homogeneity and heterogeneity) and to decrease convergence time, communication overhead, and computation complexity. We conduct a theoretical analysis to validate the correctness and effectiveness of the proposed bio-inspired algorithm. Simulation results show that the proposed algorithm can achieve excellent clustering performance in different scenarios.
Energy Minimization of Multi-cell Cognitive Capacity Harvesting Networks with Neighbor Resource Sharing In this paper, we investigate the energy minimization problem for a cognitive capacity harvesting network (CCHN), where secondary users (SUs) without cognitive radio (CR) capability communicate with CR routers via device-to-device (D2D) transmissions, and CR routers connect with base stations (BSs) via CR links. Different from traditional D2D networks that D2D transmissions share the resource of c...
Wireless sensor networks: a survey This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
ImageNet Classification with Deep Convolutional Neural Networks. We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
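For orientation, the described architecture (five convolutional layers with max pooling, three fully connected layers, dropout, and a 1000-way output) can be sketched in PyTorch. The channel counts follow the widely used single-GPU torchvision variant, which differs slightly from the original two-GPU layout, so this is an approximation rather than a reproduction of the paper's model.

```python
# AlexNet-style sketch: five conv layers + max pooling, then three fully
# connected layers with dropout and a 1000-way output.
import torch
import torch.nn as nn

features = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
)
classifier = nn.Sequential(
    nn.Flatten(),
    nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
    nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 1000),                       # logits for 1000 ImageNet classes
)
model = nn.Sequential(features, classifier)
print(model(torch.randn(1, 3, 224, 224)).shape)  # -> torch.Size([1, 1000])
```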
The Whale Optimization Algorithm. The Whale Optimization Algorithm inspired by humpback whales is proposed. The WOA algorithm is benchmarked on 29 well-known test functions. The results on the unimodal functions show the superior exploitation of WOA. The exploration ability of WOA is confirmed by the results on multimodal functions. The results on structural design problems confirm the performance of WOA in practice. This paper proposes a novel nature-inspired meta-heuristic optimization algorithm, called Whale Optimization Algorithm (WOA), which mimics the social behavior of humpback whales. The algorithm is inspired by the bubble-net hunting strategy. WOA is tested with 29 mathematical optimization problems and 6 structural design problems. Optimization results prove that the WOA algorithm is very competitive compared to the state-of-art meta-heuristic algorithms as well as conventional methods. The source codes of the WOA algorithm are publicly available at http://www.alimirjalili.com/WOA.html
Collaborative privacy management The landscape of the World Wide Web with all its versatile services heavily relies on the disclosure of private user information. Unfortunately, the growing amount of personal data collected by service providers poses a significant privacy threat for Internet users. Targeting growing privacy concerns of users, privacy-enhancing technologies emerged. One goal of these technologies is the provision of tools that facilitate a more informative decision about personal data disclosures. A famous PET representative is the PRIME project that aims for a holistic privacy-enhancing identity management system. However, approaches like the PRIME privacy architecture require service providers to change their server infrastructure and add specific privacy-enhancing components. In the near future, service providers are not expected to alter internal processes. Addressing the dependency on service providers, this paper introduces a user-centric privacy architecture that enables the provider-independent protection of personal data. A central component of the proposed privacy infrastructure is an online privacy community, which facilitates the open exchange of privacy-related information about service providers. We characterize the benefits and the potentials of our proposed solution and evaluate a prototypical implementation.
Data-Driven Intelligent Transportation Systems: A Survey For the last two decades, intelligent transportation systems (ITS) have emerged as an efficient way of improving the performance of transportation systems, enhancing travel security, and providing more choices to travelers. A significant change in ITS in recent years is that much more data are collected from a variety of sources and can be processed into various forms for different stakeholders. The availability of a large amount of data can potentially lead to a revolution in ITS development, changing an ITS from a conventional technology-driven system into a more powerful multifunctional data-driven intelligent transportation system (D2ITS) : a system that is vision, multisource, and learning algorithm driven to optimize its performance. Furthermore, D2ITS is trending to become a privacy-aware people-centric more intelligent system. In this paper, we provide a survey on the development of D2ITS, discussing the functionality of its key components and some deployment issues associated with D2ITS Future research directions for the development of D2ITS is also presented.
Online Prediction of Driver Distraction Based on Brain Activity Patterns This paper presents a new computational framework for early detection of driver distractions (map viewing) using brain activity measured by electroencephalographic (EEG) signals. Compared with most studies in the literature, which are mainly focused on the classification of distracted and nondistracted periods, this study proposes a new framework to prospectively predict the start and end of a distraction period, defined by map viewing. The proposed prediction algorithm was tested on a data set of continuous EEG signals recorded from 24 subjects. During the EEG recordings, the subjects were asked to drive from an initial position to a destination using a city map in a simulated driving environment. The overall accuracy values for the prediction of the start and the end of map viewing were 81% and 70%, respectively. The experimental results demonstrated that the proposed algorithm can predict the start and end of map viewing with relatively high accuracy and can be generalized to individual subjects. The outcome of this study has a high potential to improve the design of future intelligent navigation systems. Prediction of the start of map viewing can be used to provide route information based on a driver's needs and consequently avoid map-viewing activities. Prediction of the end of map viewing can be used to provide warnings for potential long map-viewing durations. Further development of the proposed framework and its applications in driver-distraction predictions are also discussed.
Adaptive Fuzzy Control With Prescribed Performance for Block-Triangular-Structured Nonlinear Systems. In this paper, an adaptive fuzzy control method with prescribed performance is proposed for multi-input and multioutput block-triangular-structured nonlinear systems with immeasurable states. Fuzzy logic systems are adopted to identify the unknown nonlinear system functions. Adaptive fuzzy state observers are designed to solve the problem of unmeasured states, and a new observer-based output-feedb...
Learning Feature Recovery Transformer for Occluded Person Re-Identification One major issue that challenges person re-identification (Re-ID) is the ubiquitous occlusion over the captured persons. There are two main challenges for the occluded person Re-ID problem, i.e., the interference of noise during feature matching and the loss of pedestrian information brought by the occlusions. In this paper, we propose a new approach called Feature Recovery Transformer (FRT) to address the two challenges simultaneously, which mainly consists of visibility graph matching and feature recovery transformer. To reduce the interference of the noise during feature matching, we mainly focus on visible regions that appear in both images and develop a visibility graph to calculate the similarity. In terms of the second challenge, based on the developed graph similarity, for each query image, we propose a recovery transformer that exploits the feature sets of its k-nearest neighbors in the gallery to recover the complete features. Extensive experiments across different person Re-ID datasets, including occluded, partial and holistic datasets, demonstrate the effectiveness of FRT. Specifically, FRT significantly outperforms state-of-the-art results by at least 6.2% Rank-1 accuracy and 7.2% mAP scores on the challenging Occluded-Duke dataset.
1.2
0.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
Exploitation whale optimization based optimal offloading approach and topology optimization in a mobile ad hoc cloud environment With the widespread availability of network technologies, mobile user requests increase day by day. The high energy consumption and resource demands of cloud computing make it troublesome to reach the maximum capacity of exploration and exploitation. In this paper, we propose the formation of a topology based on mobile user behavior and its optimization. During the offloading process, minimizing the response time and the energy consumption is the major goal of this paper. The topology nodes are formed via an improved text rank algorithm (ITRA) and neural network (NN) classifiers with Euclidean distance. We introduce an effective optimization algorithm, the exploitation whale optimization algorithm (EWOA), which is a combination of differential evolution (DE) and the whale optimization algorithm (WOA). The offloading process of the proposed EWOA produces an optimal outcome of minimized energy consumption and response time. The implementation of the proposed EWOA is carried out on the VMware platform. The performance of the proposed method is evaluated using puzzles of different sizes, face detection applications, and state-of-the-art methods. Ultimately, our proposed method produces optimal accuracy and convergence speed with a minimized offloading process.
On the History of the Minimum Spanning Tree Problem It is standard practice among authors discussing the minimum spanning tree problem to refer to the work of Kruskal(1956) and Prim (1957) as the sources of the problem and its first efficient solutions, despite the citation by both of Boruvka (1926) as a predecessor. In fact, there are several apparently independent sources and algorithmic solutions of the problem. They have appeared in Czechoslovakia, France, and Poland, going back to the beginning of this century. We shall explore and compare these works and their motivations, and relate them to the most recent advances on the minimum spanning tree problem.
Smart home energy management system using IEEE 802.15.4 and zigbee Wireless personal area networks and wireless sensor networks are rapidly gaining popularity, and the IEEE 802.15 Wireless Personal Area Network Working Group has defined a number of different standards so as to cater to the requirements of different applications. The ubiquitous home network has gained widespread attention due to its seamless integration into everyday life. This innovative system transparently unifies various home appliances, smart sensors and energy technologies. The smart energy market requires two types of ZigBee networks for device control and energy management. Today, organizations use IEEE 802.15.4 and ZigBee to effectively deliver solutions for a variety of areas including consumer electronic device control, energy management and efficiency, home and commercial building automation as well as industrial plant management. We present the design of a multi-sensing, heating and air-conditioning system and an actuation application for home users: a sensor network-based smart light control system for smart home and energy control production. This paper designs smart home device descriptions and standard practices for demand response and load management "Smart Energy" applications needed in a smart energy based residential or light commercial environment. The control application domains included in this initial version are sensing device control, pricing and demand response and load control applications. This paper introduces smart home interfaces and device definitions to allow interoperability among ZigBee devices produced by various manufacturers of electrical equipment, meters, and smart energy enabling products. We introduce the proposed home energy control system design that provides intelligent services for users and demonstrate its implementation using a real testbed.
Bee life-based multi constraints multicast routing optimization for vehicular ad hoc networks. A vehicular ad hoc network (VANET) is a subclass of mobile ad hoc networks, considered as one of the most important approaches of intelligent transportation systems (ITS). It allows inter-vehicle communication in which vehicle movement is restricted by a VANET mobility model and supported by some roadside base stations as fixed infrastructure. Multicasting provides different traffic information to a limited number of vehicle drivers by a parallel transmission. However, it represents a very important challenge in the application of vehicular ad hoc networks, especially in the case of network scalability. In the applications of this sensitive field, it is essential to transmit correct data anywhere and at any time. Consequently, the VANET routing protocols should be adapted appropriately and effectively meet the quality of service (QoS) requirements in an optimized multicast routing. In this paper, we propose a novel bee colony optimization algorithm called the bees life algorithm (BLA), applied to solve the quality of service multicast routing problem (QoS-MRP) for vehicular ad hoc networks as an NP-complete problem with multiple constraints. It is considered a swarm-based algorithm that closely imitates the life of the colony. It follows the two important behaviors of bees in nature, namely reproduction and food foraging. BLA is applied to solve QoS-MRP with four objectives, which are cost, delay, jitter, and bandwidth. It is also subject to three constraints, which are maximum allowed delay, maximum allowed jitter and minimum requested bandwidth. In order to evaluate the performance and effectiveness of this proposal, implemented in C++ and integrated at the routing protocol level, a simulation study has been performed using the network simulator (NS2) based on a mobility model of VANET. Comparisons of the experimental results show that the proposed algorithm efficiently outperforms the genetic algorithm (GA), the bees algorithm (BA) and the marriage in honey bees optimization (MBO) algorithm, which are state-of-the-art conventional metaheuristics applied to the QoS-MRP problem with the same simulation parameters.
On the Spatiotemporal Traffic Variation in Vehicle Mobility Modeling Several studies have shown the importance of realistic micromobility and macromobility modeling in vehicular ad hoc networks (VANETs). At the macroscopic level, most researchers focus on a detailed and accurate description of road topology. However, a key factor often overlooked is a spatiotemporal configuration of vehicular traffic. This factor greatly influences network topology and topology variations. Indeed, vehicle distribution has high spatial and temporal diversity that depends on the time of the day and place attraction. This diversity impacts the quality of radio links and, thus, network topology. In this paper, we propose a new mobility model for vehicular networks in urban and suburban environments. To reproduce realistic network topology and topological changes, the model uses real static and dynamic data on the environment. The data concern particularly the topographic and socioeconomic characteristics of infrastructures and the spatiotemporal population distribution. We validate our model by comparing the simulation results with real data derived from individual displacement survey. We also present statistics on network topology, which show the interest of taking into account the spatiotemporal mobility variation.
A bio-inspired clustering in mobile adhoc networks for internet of things based on honey bee and genetic algorithm In mobile ad hoc networks for the Internet of Things, the size of the routing table can be reduced with the help of a clustering structure. The dynamic nature of MANETs and their complexity make them a type of network with frequent topology changes. To reduce the topology maintenance overhead, a cluster-based structure may be used. Hence, it is highly desirable to design an algorithm that adapts quickly to topology dynamics and forms balanced and stable clusters. In this article, the formulation of the clustering problem is carried out initially. Later, an algorithm based on the honey bee algorithm, genetic algorithm and tabu search (GBTC) for the Internet of Things is proposed. In this algorithm, the individual (bee) represents a possible clustering structure and its fitness is evaluated on the basis of its stability and load balancing. A method is presented that merges the properties of the honey bee and genetic algorithms to help the population cope with topology dynamics and produce top-quality solutions that are closely related to each other. The simulation results conducted for validation show that the proposed work forms balanced and stable clusters. The simulation results are compared with algorithms that do not consider the dynamic optimization requirements. GBTC outperforms existing algorithms in terms of network lifetime, clustering overhead, etc.
An enhanced QoS CBT multicast routing protocol based on Genetic Algorithm in a hybrid HAP-Satellite system A QoS multicast routing scheme based on a Genetic Algorithm (GA) heuristic is presented in this paper. Our proposal, called Constrained Cost–Bandwidth–Delay Genetic Algorithm (CCBD-GA), is applied to a multilayer hybrid platform that includes High Altitude Platforms (HAPs) and a Satellite platform. This GA scheme has been compared with another GA that is well known in the literature, called the Multi-Objective Genetic Algorithm (MOGA), in order to demonstrate the quality of the proposed algorithm. In order to test the efficiency of GA schemes on a multicast routing protocol, these GA schemes are inserted into an enhanced version of the Core-Based Tree (CBT) protocol with QoS support. CBT and GA schemes are tested in a multilayer hybrid HAP and Satellite architecture and interesting results have been discovered. The joint bandwidth–delay metrics can be very useful in hybrid platforms such as that considered, because it is possible to take advantage of the individual characteristics of the Satellite and HAP segments. The HAP segment offers low propagation delay, permitting QoS constraints based on maximum end-to-end delay to be met. The Satellite segment, instead, offers high bandwidth capacity with higher propagation delay. The joint bandwidth–delay metric permits the balancing of the traffic load while respecting both QoS constraints. Simulation results have been evaluated in terms of HAP and Satellite utilization, bandwidth, end-to-end delay, fitness function and cost of the GA schemes.
On the ratio of optimal integral and fractional covers It is shown that the ratio of optimal integral and fractional covers of a hypergraph does not exceed 1 + log d , where d is the maximum degree. This theorem may replace probabilistic methods in certain circumstances. Several applications are shown.
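Stated symbolically (with τ(H) the optimal integral cover, τ*(H) the optimal fractional cover, and d the maximum degree of the hypergraph H), the bound in the abstract reads:

```latex
% Integral vs. fractional cover bound, as summarised in the abstract.
\[
  \frac{\tau(H)}{\tau^{*}(H)} \;\le\; 1 + \log d .
\]
```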
Task Offloading in Vehicular Edge Computing Networks: A Load-Balancing Solution Recently, the rapid advance of vehicular networks has led to the emergence of diverse delay-sensitive vehicular applications such as automatic driving, auto navigation. Note that existing resource-constrained vehicles cannot adequately meet these demands on low / ultra-low latency. By offloading parts of the vehicles’ compute-intensive tasks to the edge servers in proximity, mobile edge computing is envisioned as a promising paradigm, giving rise to the vehicular edge computing networks (VECNs). However, most existing works on task offloading in VECNs did not take the load balancing of the computation resources at the edge servers into account. To address these issues and given the high dynamics of vehicular networks, we introduce fiber-wireless (FiWi) technology to enhance VECNs, due to its advantages on centralized network management and supporting multiple communication techniques. Aiming to minimize the processing delay of the vehicles’ computation tasks, we propose a software-defined networking (SDN) based load-balancing task offloading scheme in FiWi enhanced VECNs, where SDN is introduced to provide supports for the centralized network and vehicle information management. Extensive analysis and numerical results corroborate that our proposed load-balancing scheme can achieve superior performance on processing delay reduction by utilizing the edge servers’ computation resources more efficiently.
A hierarchical type-2 fuzzy logic control architecture for autonomous mobile robots Autonomous mobile robots navigating in changing and dynamic unstructured environments like the outdoor environments need to cope with large amounts of uncertainties that are inherent of natural environments. The traditional type-1 fuzzy logic controller (FLC) using precise type-1 fuzzy sets cannot fully handle such uncertainties. A type-2 FLC using type-2 fuzzy sets can handle such uncertainties to produce a better performance. In this paper, we present a novel reactive control architecture for autonomous mobile robots that is based on type-2 FLC to implement the basic navigation behaviors and the coordination between these behaviors to produce a type-2 hierarchical FLC. In our experiments, we implemented this type-2 architecture in different types of mobile robots navigating in indoor and outdoor unstructured and challenging environments. The type-2-based control system dealt with the uncertainties facing mobile robots in unstructured environments and resulted in a very good performance that outperformed the type-1-based control system while achieving a significant rule reduction compared to the type-1 system.
Multi-stage genetic programming: A new strategy to nonlinear system modeling This paper presents a new multi-stage genetic programming (MSGP) strategy for modeling nonlinear systems. The proposed strategy is based on incorporating the individual effect of predictor variables and the interactions among them to provide more accurate simulations. According to the MSGP strategy, an efficient formulation for a problem comprises different terms. In the first stage of the MSGP-based analysis, the output variable is formulated in terms of an influencing variable. Thereafter, the error between the actual and the predicted value is formulated in terms of a new variable. Finally, the interaction term is derived by formulating the difference between the actual values and the values predicted by the individually developed terms. The capabilities of MSGP are illustrated by applying it to the formulation of different complex engineering problems. The problems analyzed herein include the following: (i) simulation of pH neutralization process, (ii) prediction of surface roughness in end milling, and (iii) classification of soil liquefaction conditions. The validity of the proposed strategy is confirmed by applying the derived models to the parts of the experimental results that were not included in the analyses. Further, the external validation of the models is verified using several statistical criteria recommended by other researchers. The MSGP-based solutions are capable of effectively simulating the nonlinear behavior of the investigated systems. The results of MSGP are found to be more accurate than those of standard GP and artificial neural network-based models.
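The staged idea is easy to illustrate if ordinary polynomial fits stand in for the genetic-programming runs (the paper evolves each stage's expression with GP; polyfit is used below only to keep the illustration short, and the synthetic data are assumptions):

```python
# Staged residual modelling: stage 1 fits the output from one predictor,
# stage 2 fits the remaining error from another predictor, and stage 3
# fits the interaction term from the product of predictors.
import numpy as np

rng = np.random.default_rng(1)
x1, x2 = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
y = 2.0 * x1 + 0.5 * x2 ** 2 + 1.5 * x1 * x2 + rng.normal(0, 0.05, 200)

# Stage 1: model y from the first predictor alone.
f1 = np.poly1d(np.polyfit(x1, y, deg=2))
r1 = y - f1(x1)
# Stage 2: model the remaining error from the second predictor.
f2 = np.poly1d(np.polyfit(x2, r1, deg=2))
r2 = r1 - f2(x2)
# Stage 3: model the interaction term from the product of predictors.
f3 = np.poly1d(np.polyfit(x1 * x2, r2, deg=1))

pred = f1(x1) + f2(x2) + f3(x1 * x2)
print("residual std after each stage:", np.std(r1), np.std(r2), np.std(y - pred))
```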
Placing Virtual Machines to Optimize Cloud Gaming Experience Optimizing cloud gaming experience is no easy task due to the complex tradeoff between gamer quality of experience (QoE) and provider net profit. We tackle the challenge and study an optimization problem to maximize the cloud gaming provider's total profit while achieving just-good-enough QoE. We conduct measurement studies to derive the QoE and performance models. We formulate and optimally solve the problem. Because the optimal solution has exponential running time, we also develop an efficient heuristic algorithm. We also present an alternative formulation and algorithms for closed cloud gaming services with dedicated infrastructures, where profit is not a concern and overall gaming QoE needs to be maximized. We present a prototype system and testbed using off-the-shelf virtualization software to demonstrate the practicality and efficiency of our algorithms. Our experience in realizing the testbed sheds some light on how cloud gaming providers may build up their own profitable services. Last, we conduct extensive trace-driven simulations to evaluate our proposed algorithms. The simulation results show that the proposed heuristic algorithms: (i) produce close-to-optimal solutions, (ii) scale to large cloud gaming services with 20,000 servers and 40,000 gamers, and (iii) outperform the state-of-the-art placement heuristic, e.g., by up to 3.5 times in terms of net profits.
Adversarial Example Generation with Syntactically Controlled Paraphrase Networks. We propose syntactically controlled paraphrase networks (SCPNs) and use them to generate adversarial examples. Given a sentence and a target syntactic form (e.g., a constituency parse), SCPNs are trained to produce a paraphrase of the sentence with the desired syntax. We show it is possible to create training data for this task by first doing backtranslation at a very large scale, and then using a parser to label the syntactic transformations that naturally occur during this process. Such data allows us to train a neural encoder-decoder model with extra inputs to specify the target syntax. A combination of automated and human evaluations show that SCPNs generate paraphrases that follow their target specifications without decreasing paraphrase quality when compared to baseline (uncontrolled) paraphrase systems. Furthermore, they are more capable of generating syntactically adversarial examples that both (1) fool pretrained models and (2) improve the robustness of these models to syntactic variation when used to augment their training data.
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices with good potential in walking rehabilitation and augmentation. While a few studies have focused on the structural design and assistance force optimization of soft LLEs, little work has been conducted on the design of their hardware circuits. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and introduce its hardware circuits design. A soft LLE for hip flexion assistance and a scalable hardware circuits system were proposed. To assess the efficacy of the soft LLE, experimental tests evaluating sensor data acquisition, force tracking performance, lower limb muscle activity, and metabolic cost were conducted. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0
Fusion of PHOG and LDP local descriptors for kernel-based ear biometric recognition Achieving higher recognition performance in uncontrolled scenarios is a key issue for ear biometric systems. It is difficult to generate all discriminative features using a single feature extraction method. This paper presents an efficient method that combines two of the most successful local feature descriptors, Pyramid Histogram of Oriented Gradients (PHOG) and Local Directional Patterns (LDP), to represent ear images. PHOG represents spatial shape information and LDP efficiently encodes local texture information. As the feature sets suffer from the curse of high dimensionality, principal component analysis (PCA) is used to reduce the dimension prior to normalization and fusion. Then, the two normalized heterogeneous feature sets are combined to produce a single feature vector. Finally, the Kernel Discriminant Analysis (KDA) method is employed to extract nonlinear discriminant features for efficient recognition using a nearest neighbor (NN) classifier. Experiments on three standard datasets, IIT Delhi versions I and II and University of Notre Dame collection E, reveal that the proposed method can achieve promising recognition performance in comparison with other existing successful methods.
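To make the described pipeline concrete, here is a minimal sketch of the fusion stage under stated assumptions: phog_feats and ldp_feats are precomputed descriptor matrices, and the paper's Kernel Discriminant Analysis step is approximated by a kernel PCA projection before the nearest-neighbor classifier, since the exact KDA variant is not reproduced here.

```python
# Minimal sketch of a PHOG + LDP fusion pipeline (assumptions: phog_feats /
# ldp_feats are precomputed feature matrices of shape (n_samples, d); the
# paper's Kernel Discriminant Analysis step is approximated by KernelPCA).
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

def fuse_and_classify(phog_feats, ldp_feats, labels, test_phog, test_ldp):
    # 1) Reduce each descriptor separately with PCA.
    pca_p, pca_l = PCA(n_components=50), PCA(n_components=50)
    p_tr, l_tr = pca_p.fit_transform(phog_feats), pca_l.fit_transform(ldp_feats)
    p_te, l_te = pca_p.transform(test_phog), pca_l.transform(test_ldp)

    # 2) Normalize each reduced set, then concatenate into a single vector.
    sc_p, sc_l = StandardScaler(), StandardScaler()
    fused_tr = np.hstack([sc_p.fit_transform(p_tr), sc_l.fit_transform(l_tr)])
    fused_te = np.hstack([sc_p.transform(p_te), sc_l.transform(l_te)])

    # 3) Nonlinear discriminant projection (stand-in for KDA) + NN classifier.
    kpca = KernelPCA(n_components=30, kernel="rbf")
    clf = KNeighborsClassifier(n_neighbors=1)
    clf.fit(kpca.fit_transform(fused_tr), labels)
    return clf.predict(kpca.transform(fused_te))
```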
Joint discriminative dimensionality reduction and dictionary learning for face recognition In linear representation based face recognition (FR), it is expected that a discriminative dictionary can be learned from the training samples so that the query sample can be better represented for classification. On the other hand, dimensionality reduction is also an important issue for FR. It can not only significantly reduce the storage space of face images, but also enhance the discrimination of face features. Existing methods mostly perform dimensionality reduction and dictionary learning separately, which may not fully exploit the discriminative information in the training samples. In this paper, we propose to jointly learn the projection matrix for dimensionality reduction and the discriminative dictionary for face representation. The joint learning makes the learned projection and dictionary better fit with each other so that a more effective face classification can be obtained. The proposed algorithm is evaluated on benchmark face databases in comparison with existing linear representation based methods, and the results show that the joint learning improves the FR rate, particularly when the number of training samples per class is small.
Sparse Representation Based Fisher Discrimination Dictionary Learning for Image Classification The employed dictionary plays an important role in sparse representation or sparse coding based image reconstruction and classification, while learning dictionaries from the training data has led to state-of-the-art results in image classification tasks. However, many dictionary learning models exploit only the discriminative information in either the representation coefficients or the representation residual, which limits their performance. In this paper we present a novel dictionary learning method based on the Fisher discrimination criterion. A structured dictionary, whose atoms have correspondences to the subject class labels, is learned, with which not only the representation residual can be used to distinguish different classes, but also the representation coefficients have small within-class scatter and big between-class scatter. The classification scheme associated with the proposed Fisher discrimination dictionary learning (FDDL) model is consequently presented by exploiting the discriminative information in both the representation residual and the representation coefficients. The proposed FDDL model is extensively evaluated on various image datasets, and it shows superior performance to many state-of-the-art dictionary learning methods in a variety of classification tasks.
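The FDDL objective is commonly written in the following schematic form; the notation (A_i for class-i training samples, D for the structured dictionary, X for the coding coefficients, S_W and S_B for the within- and between-class scatter of the coefficients) is assumed standard usage rather than quoted from the paper.

```latex
% Schematic FDDL objective (notation assumed, not quoted from the paper):
% r(.) is the class-specific fidelity term, the l1 term enforces sparsity,
% and the last term is the Fisher discriminant on the coding coefficients.
\begin{equation*}
J_{(D,X)} = \arg\min_{D,X} \Big\{ \sum_i r(A_i, D, X_i)
  + \lambda_1 \|X\|_1
  + \lambda_2 \big( \operatorname{tr}(S_W(X)) - \operatorname{tr}(S_B(X)) + \eta \|X\|_F^2 \big) \Big\}
\end{equation*}
```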
Feature and Rank Level Fusion for Privacy Preserved Multi-Biometric System Privacy protection in biometric systems is a newly emerging biometric technology that can provide protection against various attacks by intruders. In this paper, the authors present a multi-level random projection method based on face and ear biometric traits. Privacy preserved templates are used in the proposed system. The main idea behind the privacy preserving computation is the random projection algorithm. Multiple random projection matrices are used to generate multiple templates for biometric authentication. A newly introduced random fusion method is used in the proposed system; therefore, the proposed method can provide better template security, privacy, and feature quality. Multiple randomly fused templates are used for recognition purposes, and finally decision fusion is applied to generate the final classification result. The proposed method works in a way similar to human cognition in face recognition; furthermore, it preserves the privacy and multimodality of the system.
Ear recognition using local binary patterns: A comparative experimental study. •A comparative study of ear recognition using local binary patterns variants is done.•A new texture operator is proposed and used as an ear feature descriptor.•Detailed analysis on Identification and verification is conducted separately.•An approximated recognition rate of 99% is achieved by some texture descriptors.•The study has significant insights and can benefit researchers in future works.
Weighted sparse representation for human ear recognition based on local descriptor. A two-stage ear recognition framework is presented in which two local descriptors and a sparse representation algorithm are combined. In the first stage, the algorithm deduces a subset of the training neighbors closest to the test ear sample. The selection is based on the K-nearest neighbors classifier in the pattern of oriented edge magnitude feature space. In the second stage, the co-occurrence of adjacent local binary pattern features is extracted from the preselected subset and combined to form a dictionary. Afterward, a sparse representation classifier is employed on the developed dictionary in order to infer the closest element to the test sample. Thus, by splitting the ear image into a number of segments and applying the described recognition routine to each of them, the algorithm assigns a final class label based on majority voting over the individual labels pointed out by each segment. Experimental results demonstrate the effectiveness as well as the robustness of the proposed scheme over leading state-of-the-art methods. In particular, when the ear image is occluded, the proposed algorithm exhibits great robustness and reaches the recognition performance outlined in the state of the art.
Non-negative dictionary based sparse representation classification for ear recognition with occlusion. By introducing an identity occlusion dictionary to encode the occluded part of the source image, sparse representation based classification has shown good performance for ear recognition under partial occlusion. However, the large number of atoms in the conventional occlusion dictionary imposes an expensive computational load when solving the SRC model. In this paper, we propose a non-negative dictionary based sparse representation and classification scheme for ear recognition. The non-negative dictionary includes a Gabor feature dictionary extracted from the ear images and a non-negative occlusion dictionary learned from the identity occlusion dictionary. A test sample with occlusion can be sparsely represented over the Gabor feature dictionary and the occlusion dictionary. The resulting sparse coding coefficients are non-negative and much sparser, and the non-negative dictionary shows increased discrimination ability. Experimental results on the USTB ear database show that the proposed method performs better than existing SRC-based ear recognition methods under partial occlusion.
A Skin-Color and Template Based Technique for Automatic Ear Detection This paper proposes an efficient skin-color and template based technique for automatic ear detection in a side face image. The technique first separates skin regions from non-skin regions and then searches for the ear within the skin regions. The ear detection process involves three major steps: first, skin segmentation to eliminate all non-skin pixels from the image; second, ear localization to perform ear detection using a template matching approach; and third, ear verification to validate the detection using the Zernike moments based shape descriptor. To handle the detection of ears of various shapes and sizes, an ear template is created considering ears of various shapes (triangular, round, oval, and rectangular) and resized automatically to a size suitable for detection. The proposed technique is tested on the IIT Kanpur ear database consisting of 150 side face images and gives 94% accuracy.
Factual and Counterfactual Explanations for Black Box Decision Making. The rise of sophisticated machine learning models has brought accurate but obscure decision systems, which hide their logic, thus undermining transparency, trust, and the adoption of artificial intelligence (AI) in socially sensitive and safety-critical contexts. We introduce a local rule-based explanation method, providing faithful explanations of the decision made by a black box classifier on a ...
A vector-perturbation technique for near-capacity multiantenna multiuser communication-part I: channel inversion and regularization Recent theoretical results describing the sum capacity when using multiple antennas to communicate with multiple users in a known rich scattering environment have not yet been followed with practical transmission schemes that achieve this capacity. We introduce a simple encoding algorithm that achieves near-capacity at sum rates of tens of bits/channel use. The algorithm is a variation on channel inversion that regularizes the inverse and uses a "sphere encoder" to perturb the data to reduce the power of the transmitted signal. This work is comprised of two parts. In this first part, we show that while the sum capacity grows linearly with the minimum of the number of antennas and users, the sum rate of channel inversion does not. This poor performance is due to the large spread in the singular values of the channel matrix. We introduce regularization to improve the condition of the inverse and maximize the signal-to-interference-plus-noise ratio at the receivers. Regularization enables linear growth and works especially well at low signal-to-noise ratios (SNRs), but as we show in the second part, an additional step is needed to achieve near-capacity performance at all SNRs.
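A minimal numerical sketch of the regularized inversion step described above; the channel model, the regularization constant alpha = K/SNR, and the omission of the sphere-encoder perturbation are all assumptions made for illustration.

```python
# Regularized channel inversion (sketch). Assumptions: H is a K x M complex
# channel matrix (K users, M transmit antennas), u is the K x 1 symbol vector,
# and alpha = K / snr is the regularization constant (a common choice, assumed
# here). The sphere-encoder perturbation step from the paper is omitted.
import numpy as np

def regularized_inversion_precoder(H, u, snr):
    K = H.shape[0]
    alpha = K / snr
    # x = H^H (H H^H + alpha I)^{-1} u, then normalize transmit power.
    G = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
    x = G @ u
    return x / np.linalg.norm(x)

# Example: 4 antennas serving 4 single-antenna users at an SNR of 10.
rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
u = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=4)
x = regularized_inversion_precoder(H, u, snr=10.0)
```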
A Nonconservative LMI Condition for Stability of Switched Systems With Guaranteed Dwell Time. Ensuring stability of switched linear systems with a guaranteed dwell time is an important problem in control systems. Several methods have been proposed in the literature to address this problem, but unfortunately they provide sufficient conditions only. This technical note proposes the use of homogeneous polynomial Lyapunov functions in the non-restrictive case where all the subsystems are Hurwitz, showing that a sufficient condition can be provided in terms of an LMI feasibility test by exploiting a key representation of polynomials. Several properties are proved for this condition, in particular that it is also necessary for a sufficiently large degree of these functions. As a result, the proposed condition provides a sequence of upper bounds of the minimum dwell time that approximate it arbitrarily well. Some examples illustrate the proposed approach.
Stable fuzzy logic control of a general class of chaotic systems This paper proposes a new approach to the stable design of fuzzy logic control systems that deal with a general class of chaotic processes. The stable design is carried out on the basis of a stability analysis theorem, which employs Lyapunov's direct method and the separate stability analysis of each rule in the fuzzy logic controller (FLC). The stability analysis theorem offers sufficient conditions for the stability of a general class of chaotic processes controlled by Takagi-Sugeno-Kang FLCs. The approach suggested in this paper is advantageous because inserting a new rule requires the fulfillment of only one of the conditions of the stability analysis theorem. Two case studies concerning the fuzzy logic control of representative chaotic systems that belong to the general class of chaotic systems are included in order to illustrate our stable design approach. A set of simulation results is given to validate the theoretical results.
A blind medical image watermarking: DWT-SVD based robust and secure approach for telemedicine applications. In this paper, a blind image watermarking scheme based on discrete wavelet transform (DWT) and singular value decomposition (SVD) is proposed. In this scheme, DWT is applied to the ROI (region of interest) of the medical image to obtain the frequency subbands of its wavelet decomposition. On the low frequency subband LL of the ROI, block-SVD is applied to obtain the singular matrices. A pair of elements with similar values is identified from the left singular value matrix of the selected blocks. The values of these pairs are modified using a certain threshold to embed a bit of watermark content. An appropriate threshold is chosen to achieve imperceptibility of the medical image and robustness of the watermark contents. For authentication and identification of the original medical image, one watermark image (logo) and one text watermark have been used. The watermark image provides authentication, whereas the text data represents the electronic patient record (EPR) for identification. At the receiving end, blind recovery of both watermark contents is performed by a comparison scheme similar to the one used during the embedding process. The proposed algorithm is applied to various groups of medical images such as X-ray, CT scan, and mammography. This scheme offers better visibility of the watermarked image and recovery of the watermark content due to the DWT-SVD combination. Moreover, the use of a Hamming error correcting code (ECC) on the EPR text bits reduces the BER and thus provides better recovery of the EPR. The performance of the proposed algorithm with EPR data encoded by the Hamming code is compared with that using the BCH error correcting code, and the latter is found to perform better. A results analysis shows that the imperceptibility of the watermarked image is good, as the PSNR is above 43 dB and the WPSNR is above 52 dB for all sets of images. In addition, the robustness of the scheme is better than that of an existing scheme for a similar set of medical images in terms of normalized correlation coefficient (NCC) and bit error rate (BER). An analysis is also carried out to verify the performance of the proposed scheme for different sizes of watermark contents (image and EPR data). It is observed from the analysis that the proposed scheme is also appropriate for watermarking of color images. Using the proposed scheme, watermark contents are extracted successfully under various attacks such as JPEG compression, filtering, Gaussian noise, salt and pepper noise, cropping, and rotation. Performance comparison with existing schemes shows that the proposed scheme has better robustness against different types of attacks. Moreover, the proposed scheme is also robust under the set of benchmark attacks known as checkmark attacks.
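A simplified sketch of DWT-SVD embedding in the spirit of the scheme above; it quantizes the largest singular value of each LL block rather than reproducing the paper's exact pair-comparison rule, and the block size, wavelet, and quantization step are assumptions.

```python
# Simplified DWT-SVD watermark embedding sketch (not the paper's exact rule).
# Assumptions: `roi` is a 2-D grayscale array whose sides are multiples of 8,
# `bits` is the watermark bit sequence, and embedding quantizes the largest
# singular value of each 4x4 block of the LL subband with step `q`.
import numpy as np
import pywt

def embed_bits(roi, bits, q=20.0):
    LL, (LH, HL, HH) = pywt.dwt2(roi.astype(float), "haar")
    out = LL.copy()
    b = 0
    for i in range(0, LL.shape[0] - 3, 4):
        for j in range(0, LL.shape[1] - 3, 4):
            if b >= len(bits):
                break
            U, S, Vt = np.linalg.svd(out[i:i+4, j:j+4])
            # Quantize the largest singular value to encode one bit.
            S[0] = (np.floor(S[0] / q) + (0.75 if bits[b] else 0.25)) * q
            out[i:i+4, j:j+4] = U @ np.diag(S) @ Vt
            b += 1
    return pywt.idwt2((out, (LH, HL, HH)), "haar")
```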
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores: 1.11, 0.1, 0.1, 0.1, 0.1, 0.1, 0.042222, 0.001667, 0, 0, 0, 0, 0, 0
Simultaneous wireless information and power transfer in modern communication systems Energy harvesting for wireless communication networks is a new paradigm that allows terminals to recharge their batteries from external energy sources in the surrounding environment. A promising energy harvesting technology is wireless power transfer where terminals harvest energy from electromagnetic radiation. Thereby, the energy may be harvested opportunistically from ambient electromagnetic sources or from sources that intentionally transmit electromagnetic energy for energy harvesting purposes. A particularly interesting and challenging scenario arises when sources perform simultaneous wireless information and power transfer (SWIPT), as strong signals not only increase power transfer but also interference. This article provides an overview of SWIPT systems with a particular focus on the hardware realization of rectenna circuits and practical techniques that achieve SWIPT in the domains of time, power, antennas, and space. The article also discusses the benefits of a potential integration of SWIPT technologies in modern communication networks in the context of resource allocation and cooperative cognitive radio networks.
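A generic power-splitting formulation is often used to express the SWIPT trade-off sketched above; the symbols (splitting ratio rho, conversion efficiency eta, channel gain h, transmit power P, noise power sigma^2) are standard textbook notation and are not taken from the article.

```latex
% Power-splitting SWIPT trade-off (generic textbook form, symbols assumed):
% a fraction rho of the received power is harvested, the rest decodes information.
\begin{align*}
E_{\mathrm{harv}} &= \eta \,\rho\, P\, |h|^2, \\
R_{\mathrm{info}} &= \log_2\!\left(1 + \frac{(1-\rho)\, P\, |h|^2}{\sigma^2}\right),
\qquad 0 \le \rho \le 1 .
\end{align*}
```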
Wireless energy harvesting for the Internet of Things The Internet of Things (IoT) is an emerging computing concept that describes a structure in which everyday physical objects, each provided with unique identifiers, are connected to the Internet without requiring human interaction. Long-term and self-sustainable operation are key components for realization of such a complex network, and entail energy-aware devices that are potentially capable of harvesting their required energy from ambient sources. Among different energy harvesting methods, such as vibration, light, and thermal energy extraction, wireless energy harvesting (WEH) has proven to be one of the most promising solutions by virtue of its simplicity, ease of implementation, and availability. In this article, we present an overview of enabling technologies for efficient WEH, analyze the lifetime of WEH-enabled IoT devices, and briefly study the future trends in the design of efficient WEH systems and research challenges that lie ahead.
Mobility in wireless sensor networks - Survey and proposal. Targeting an increasing number of potential application domains, wireless sensor networks (WSN) have been the subject of intense research, in an attempt to optimize their performance while guaranteeing reliability in highly demanding scenarios. However, hardware constraints have limited their application, and real deployments have demonstrated that WSNs have difficulties in coping with complex communication tasks – such as mobility – in addition to application-related tasks. Mobility support in WSNs is crucial for a very high percentage of application scenarios and, most notably, for the Internet of Things. It is, thus, important to know the existing solutions for mobility in WSNs, identifying their main characteristics and limitations. With this in mind, we firstly present a survey of models for mobility support in WSNs. We then present the Network of Proxies (NoP) assisted mobility proposal, which relieves resource-constrained WSN nodes from the heavy procedures inherent to mobility management. The presented proposal was implemented and evaluated in a real platform, demonstrating not only its advantages over conventional solutions, but also its very good performance in the simultaneous handling of several mobile nodes, leading to high handoff success rate and low handoff time.
A survey on cross-layer solutions for wireless sensor networks Ever since wireless sensor networks (WSNs) have emerged, different optimizations have been proposed to overcome their constraints. Furthermore, the proposal of new applications for WSNs has also created new challenges to be addressed. Cross-layer approaches have proven to be the most efficient optimization techniques for these problems, since they are able to take the behavior of the protocols at each layer into consideration. Thus, this survey proposes to identify the key problems of WSNs and gather available cross-layer solutions for them that have been proposed so far, in order to provide insights on the identification of open issues and provide guidelines for future proposals.
OPPC: An Optimal Path Planning Charging Scheme Based on Schedulability Evaluation for WRSNs. The lack of schedulability evaluation of previous charging schemes in wireless rechargeable sensor networks (WRSNs) degrades the charging efficiency, leading to node exhaustion. We propose an Optimal Path Planning Charging scheme, namely OPPC, for the on-demand charging architecture. OPPC evaluates the schedulability of a charging mission, which makes charging scheduling predictable. It provides an optimal charging path which maximizes charging efficiency. When confronted with a non-schedulable charging mission, a node discarding algorithm is developed to enable the schedulability. Experimental simulations demonstrate that OPPC can achieve better performance in successful charging rate as well as charging efficiency.
Machine learning algorithms for wireless sensor networks: A survey. •The survey of machine learning algorithms for WSNs from the period 2014 to March 2018.•Machine learning (ML) for WSNs with their advantages, features and limitations.•A statistical survey of ML-based algorithms for WSNs.•Reasons to choose a ML techniques to solve issues in WSNs.•The survey proposes a discussion on open issues.
Task Scheduling for Energy-Harvesting-Based IoT: A Survey and Critical Analysis The Internet of Things (IoT) has important applications in our daily lives, including health and fitness tracking, environmental monitoring, and transportation. However, sensor nodes in IoT suffer from the limited lifetime of batteries resulting from their finite energy availability. A promising solution is to harvest energy from environmental sources, such as solar, kinetic, thermal, and radio-fr...
Advances in Energy Harvesting Communications: Past, Present, and Future Challenges. Recent emphasis on green communications has generated great interest in the investigations of energy harvesting communications and networking. Energy harvesting from ambient energy sources can potentially reduce the dependence on the supply of grid or battery energy, providing many attractive benefits to the environment and deployment. However, unlike the conventional stable energy, the intermitte...
Optimizing Charging Efficiency and Maintaining Sensor Network Perpetually in Mobile Directional Charging. Wireless Power Transfer (WPT) is a promising technology to replenish energy of sensors in Rechargeable Wireless Sensor Networks (RWSN). In this paper, we investigate the mobile directional charging optimization problem in RWSN. Our problem is how to plan the moving path and charging direction of the Directional Charging Vehicle (DCV) in the 2D plane to replenish energy for RWSN. The objective is to optimize energy charging efficiency of the DCV while maintaining the sensor network working continuously. To the best of our knowledge, this is the first work to study the mobile directional charging problem in RWSN. We prove that the problem is NP-hard. Firstly, the coverage utility of the DCV's directional charging is proposed. Then we design an approximation algorithm to determine the docking spots and their charging orientations while minimizing the number of the DCV's docking spots and maximizing the charging coverage utility. Finally, we propose a moving path planning algorithm for the DCV's mobile charging to optimize the DCV's energy charging efficiency while ensuring the networks working continuously. We theoretically analyze the DCV's charging service capability, and perform the comprehensive simulation experiments. The experiment results show the energy efficiency of the DCV is higher than the omnidirectional charging model in the sparse networks.
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features—using the recently popular terminology of neural networks with ’attention’ mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.
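For readers who want to try the detector, a short usage sketch with the torchvision reimplementation is given below; this is a community implementation rather than the authors' original code, and the weights argument assumes torchvision 0.13 or later.

```python
# Usage sketch of a pretrained Faster R-CNN from torchvision (a community
# reimplementation, not the authors' original code; the `weights` argument
# assumes torchvision >= 0.13).
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# One dummy 3-channel image; replace with real tensors scaled to [0, 1].
images = [torch.rand(3, 480, 640)]
with torch.no_grad():
    outputs = model(images)   # list of dicts with 'boxes', 'labels', 'scores'

print(outputs[0]["boxes"].shape, outputs[0]["scores"][:5])
```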
A grounded investigation of game immersion The term immersion is widely used to describe games but it is not clear what immersion is or indeed if people are using the same word consistently. This paper describes work done to define immersion based on the experiences of gamers. Grounded Theory is used to construct a robust division of immersion into the three levels: engagement, engrossment and total immersion. This division alone suggests new lines for investigating immersion and transferring it into software domains other than games.
The smooth switching control for TORA system via LMIs This paper investigates a smooth switching control approach for linear parameter varying systems, with application to the nonlinear translational oscillator with rotational actuator (TORA) system. In smooth switching control, the control law for neighboring subsystems is smoothly switched or scheduled in the overlapped region instead of using the usual instantaneous switching methods. The design approach is presented in terms of the maximum relative stability obtained under control input and system output constraints. The control law is constructed based on the formulated linear matrix inequality conditions. As shown, the designed performance can be reasonably improved by increasing the number of partitions of the considered varying parameter range.
Deep Anomaly Detection with Deviation Networks Although deep learning has been applied to successfully address many data mining problems, relatively limited work has been done on deep learning for anomaly detection. Existing deep anomaly detection methods, which focus on learning new feature representations to enable downstream anomaly detection methods, perform indirect optimization of anomaly scores, leading to data-inefficient learning and suboptimal anomaly scoring. Also, they are typically designed as unsupervised learning methods due to the lack of large-scale labeled anomaly data. As a result, it is difficult for them to leverage prior knowledge (e.g., a few labeled anomalies) when such information is available, as it is in many real-world anomaly detection applications. This paper introduces a novel anomaly detection framework and its instantiation to address these problems. Instead of representation learning, our method fulfills an end-to-end learning of anomaly scores via neural deviation learning, in which we leverage a few (e.g., several to dozens of) labeled anomalies and a prior probability to enforce statistically significant deviations of the anomaly scores of anomalies from those of normal data objects in the upper tail. Extensive results show that our method can be trained substantially more data-efficiently and achieves significantly better anomaly scoring than state-of-the-art competing methods.
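A small sketch of the deviation-loss idea described above, assuming scalar anomaly scores, 0/1 labels, a Gaussian reference of 5000 draws, and a margin of 5; these constants are illustrative, not taken from the paper.

```python
# Deviation loss sketch for the described deviation-network idea. Assumptions:
# `scores` are scalar anomaly scores from a network, `labels` are 0 (normal) /
# 1 (anomaly), the reference is 5000 draws from N(0, 1), and the margin is 5.
import torch

def deviation_loss(scores, labels, margin=5.0, n_ref=5000):
    ref = torch.randn(n_ref)                      # Gaussian prior reference scores
    dev = (scores - ref.mean()) / (ref.std() + 1e-8)
    normal_term = (1 - labels) * dev.abs()        # pull normal scores toward the reference mean
    anomaly_term = labels * torch.clamp(margin - dev, min=0.0)  # push anomalies into the upper tail
    return (normal_term + anomaly_term).mean()

# Example: 8 scores, 2 labeled anomalies.
scores = torch.randn(8, requires_grad=True)
labels = torch.tensor([0, 0, 0, 1, 0, 0, 1, 0], dtype=torch.float32)
loss = deviation_loss(scores, labels)
loss.backward()
```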
Driver’s Intention Identification With the Involvement of Emotional Factors in Two-Lane Roads Driver’s emotion is a psychological reaction to environmental stimulus. Driver intention is an internal state of mind, which directs the actions in the next moment during driving. Emotions usually have a strong influence on behavioral intentions. Therefore, emotion is an important factor that should be considered, to accurately identify driver’s intention. This study used the support vector machin...
Scores: 1.055033, 0.052963, 0.05, 0.05, 0.05, 0.05, 0.05, 0.026919, 0.011111, 0, 0, 0, 0, 0
Tie-breaker: Using language models to quantify gender bias in sports journalism. Gender bias is an increasingly important issue in sports journalism. In this work, we propose a language-model-based approach to quantify differences in questions posed to female vs. male athletes, and apply it to tennis post-match interviews. We find that journalists ask male players questions that are generally more focused on the game when compared with the questions they ask their female counterparts. We also provide a fine-grained analysis of the extent to which the salience of this bias depends on various factors, such as question type, game outcome or player rank.
Hafez: An Interactive Poetry Generation System Hafez is an automatic poetry generation system that integrates a Recurrent Neural Network (RNN) with a Finite State Acceptor (FSA). It generates sonnets given arbitrary topics. Furthermore, Hafez enables users to revise and polish generated poems by adjusting various style configurations. Experiments demonstrate that such "polish" mechanisms consider the user's intention and lead to a better poem. For evaluation, we build a web interface where users can rate the quality of each poem from 1 to 5 stars. We also speed up the whole system by a factor of 10, via vocabulary pruning and GPU computation, so that adequate feedback can be collected at a fast pace. Based on such feedback, the system learns to adjust its parameters to improve poetry quality.
Reducing Gender Bias in Abusive Language Detection. Abusive language detection models tend to have a problem of being biased toward identity words of a certain group of people because of imbalanced training datasets. For example, You are a good woman was considered sexist when trained on an existing dataset. Such model bias is an obstacle for models to be robust enough for practical use. In this work, we measure gender biases on models trained with different abusive language datasets, while analyzing the effect of different pre-trained word embeddings and model architectures. We also experiment with three bias mitigation methods: (1) debiased word embeddings, (2) gender swap data augmentation, and (3) fine-tuning with a larger corpus. These methods can effectively reduce gender bias by 90-98% and can be extended to correct model bias in other scenarios.
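A minimal illustration of the gender-swap augmentation idea mentioned above; the word-pair list is illustrative only, and a practical system would need a far larger lexicon plus handling of names and pronoun case ambiguity.

```python
# Minimal gender-swap augmentation sketch (illustrative word pairs only; note
# that "her" maps ambiguously to "him"/"his", which a real system must resolve).
SWAP_PAIRS = {"he": "she", "she": "he", "him": "her", "her": "him",
              "his": "her", "man": "woman", "woman": "man",
              "men": "women", "women": "men", "boy": "girl", "girl": "boy"}

def gender_swap(sentence):
    tokens = sentence.split()
    swapped = [SWAP_PAIRS.get(t.lower(), t) for t in tokens]
    return " ".join(swapped)

original = "she is a good woman"
augmented = gender_swap(original)   # "he is a good man"
print(original, "->", augmented)
```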
Plug and Play Language Models: A Simple Approach to Controlled Text Generation Large transformer-based language models (LMs) trained on huge text corpora have shown unparalleled generation capabilities. However, controlling attributes of the generated language (e.g. switching topic or sentiment) is difficult without modifying the model architecture or fine-tuning on attribute-specific data and entailing the significant cost of retraining. We propose a simple alternative: the Plug and Play Language Model (PPLM) for controllable language generation, which combines a pretrained LM with one or more simple attribute classifiers that guide text generation without any further training of the LM. In the canonical scenario we present, the attribute models are simple classifiers consisting of a user-specified bag of words or a single learned layer with 100,000 times fewer parameters than the LM. Sampling entails a forward and backward pass in which gradients from the attribute model push the LM's hidden activations and thus guide the generation. Model samples demonstrate control over a range of topics and sentiment styles, and extensive automated and human annotated evaluations show attribute alignment and fluency. PPLMs are flexible in that any combination of differentiable attribute models may be used to steer text generation, which will allow for diverse and creative applications beyond the examples given in this paper.
Gender Bias in Neural Natural Language Processing. We examine whether neural natural language processing (NLP) systems reflect historical biases in training data. We define a general benchmark to quantify gender bias in a variety of neural NLP tasks. Our empirical evaluation with state-of-the-art neural coreference resolution and textbook RNN-based language models trained on benchmark datasets finds significant gender bias in how models view occupations. We then mitigate bias with CDA: a generic methodology for corpus augmentation via causal interventions that breaks associations between gendered and gender-neutral words. We empirically show that CDA effectively decreases gender bias while preserving accuracy. We also explore the space of mitigation strategies with CDA, a prior approach to word embedding debiasing (WED), and their compositions. We show that CDA outperforms WED, drastically so when word embeddings are trained. For pre-trained embeddings, the two methods can be effectively composed. We also find that as training proceeds on the original data set with gradient descent the gender bias grows as the loss reduces, indicating that the optimization encourages bias; CDA mitigates this behavior.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7% (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%.
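A brief fine-tuning sketch using the Hugging Face transformers reimplementation (not the original codebase); the model name, sequence length, and label count are assumptions for illustration.

```python
# Fine-tuning sketch using the Hugging Face `transformers` library (a later
# reimplementation, not the original BERT codebase); model name, max length,
# and label count are assumptions for illustration.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts = ["a delightfully clear paper", "impossible to follow"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")

outputs = model(**batch, labels=labels)   # one classification layer sits on top of BERT
outputs.loss.backward()                   # the added output layer is trained jointly with BERT
print(outputs.logits.shape)               # (2, 2)
```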
Get To The Point: Summarization With Pointer-Generator Networks Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
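The pointer-generator output distribution and coverage vector are commonly written as follows; the notation is standard and assumed here rather than quoted from the abstract.

```latex
% Pointer-generator final distribution and coverage (standard form; notation assumed):
% p_gen: generation probability, P_vocab: decoder vocabulary distribution,
% a_i^t: attention weight on source position i at decoding step t.
\begin{align*}
P(w) &= p_{\mathrm{gen}}\, P_{\mathrm{vocab}}(w)
      + (1 - p_{\mathrm{gen}}) \sum_{i : w_i = w} a_i^{t}, \\
c^{t} &= \sum_{t'=0}^{t-1} a^{t'} \quad \text{(coverage vector used to discourage repetition)} .
\end{align*}
```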
On the security of public key protocols Recently the use of public key encryption to provide secure network communication has received considerable attention. Such public key systems are usually effective against passive eavesdroppers, who merely tap the lines and try to decipher the message. It has been pointed out, however, that an improperly designed protocol could be vulnerable to an active saboteur, one who may impersonate another user or alter the message being transmitted. Several models are formulated in which the security of protocols can be discussed precisely. Algorithms and characterizations that can be used to determine protocol security in these models are given.
Adaptive Federated Learning in Resource Constrained Edge Computing Systems Emerging technologies and applications including Internet of Things, social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data, to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a cen...
Image forgery detection We are undoubtedly living in an age where we are exposed to a remarkable array of visual imagery. While we may have historically had confidence in the integrity of this imagery, today's digital technology has begun to erode this trust. From the tabloid magazines to the fashion industry and in mainstream media outlets, scientific journals, political campaigns, courtrooms, and the photo hoaxes that ...
Development of a UAV-LiDAR System with Application to Forest Inventory We present the development of a low-cost Unmanned Aerial Vehicle-Light Detecting and Ranging (UAV-LiDAR) system and an accompanying workflow to produce 3D point clouds. UAV systems provide an unrivalled combination of high temporal and spatial resolution datasets. The TerraLuma UAV-LiDAR system has been developed to take advantage of these properties and in doing so overcome some of the current limitations of the use of this technology within the forestry industry. A modified processing workflow including a novel trajectory determination algorithm fusing observations from a GPS receiver, an Inertial Measurement Unit (IMU) and a High Definition (HD) video camera is presented. The advantages of this workflow are demonstrated using a rigorous assessment of the spatial accuracy of the final point clouds. It is shown that due to the inclusion of video the horizontal accuracy of the final point cloud improves from 0.61 m to 0.34 m (RMS error assessed against ground control). The effect of the very high density point clouds (up to 62 points per m(2)) produced by the UAV-LiDAR system on the measurement of tree location, height and crown width are also assessed by performing repeat surveys over individual isolated trees. The standard deviation of tree height is shown to reduce from 0.26 m, when using data with a density of 8 points per m(2), to 0.15 m when the higher density data was used. Improvements in the uncertainty of the measurement of tree location, 0.80 m to 0.53 m, and crown width, 0.69 m to 0.61 m are also shown.
Mobile Data Gathering with Load Balanced Clustering and Dual Data Uploading in Wireless Sensor Networks In this paper, a three-layer framework is proposed for mobile data collection in wireless sensor networks, which includes the sensor layer, cluster head layer, and mobile collector (called SenCar) layer. The framework employs distributed load balanced clustering and dual data uploading, which is referred to as LBC-DDU. The objective is to achieve good scalability, long network lifetime and low data collection latency. At the sensor layer, a distributed load balanced clustering (LBC) algorithm is proposed for sensors to self-organize themselves into clusters. In contrast to existing clustering methods, our scheme generates multiple cluster heads in each cluster to balance the work load and facilitate dual data uploading. At the cluster head layer, the inter-cluster transmission range is carefully chosen to guarantee the connectivity among the clusters. Multiple cluster heads within a cluster cooperate with each other to perform energy-saving inter-cluster communications. Through inter-cluster transmissions, cluster head information is forwarded to SenCar for its moving trajectory planning. At the mobile collector layer, SenCar is equipped with two antennas, which enables two cluster heads to simultaneously upload data to SenCar at a time by utilizing the multi-user multiple-input and multiple-output (MU-MIMO) technique. The trajectory planning for SenCar is optimized to fully utilize dual data uploading capability by properly selecting polling points in each cluster. By visiting each selected polling point, SenCar can efficiently gather data from cluster heads and transport the data to the static data sink. Extensive simulations are conducted to evaluate the effectiveness of the proposed LBC-DDU scheme. The results show that when each cluster has at most two cluster heads, LBC-DDU achieves over 50 percent energy saving per node and 60 percent energy saving on cluster heads compared with data collection through multi-hop relay to the static data sink, and 20 percent shorter data collection time compared to traditional mobile data gathering.
An evolutionary programming approach for securing medical images using watermarking scheme in invariant discrete wavelet transformation. •The proposed watermarking scheme utilized improved discrete wavelet transformation (IDWT) to retrieve the invariant wavelet domain.•The entropy mechanism is used to identify the suitable region for insertion of watermark. This will improve the imperceptibility and robustness of the watermarking procedure.•The scaling factors such as PSNR and NC are considered for evaluation of the proposed method and the Particle Swarm Optimization is employed to optimize the scaling factors.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores: 1.222, 0.222, 0.222, 0.222, 0.111, 0.031714, 0.003333, 0, 0, 0, 0, 0, 0, 0
A brief review of the ear recognition process using deep neural networks. The process of precisely recognizing people by their ears has been getting major attention in recent years. It represents an important step in biometric research, especially as a complement to face recognition systems, which have difficulty in real conditions. This is due to the great variation in shapes, variable lighting conditions, and the changing profile shape, which is a planar representation of a complex object. An ear recognition system involving a convolutional neural network (CNN) is proposed to identify a person given an input image. The proposed method matches the performance of other traditional approaches when analyzed against clean photographs. However, the F1 metric of the results shows improvements in the specificity of the recognition. We also present a technique for improving the speed of a CNN applied to large input images through the optimization of the sliding window approach.
Plastic surgery: a new dimension to face recognition Advancement and affordability are leading to the popularity of plastic surgery procedures. Facial plastic surgery can be reconstructive, to correct facial feature anomalies, or cosmetic, to improve the appearance. Both corrective as well as cosmetic surgeries alter the original facial information to a large extent, thereby posing a great challenge for face recognition algorithms. The contribution of this research is 1) preparing a face database of 900 individuals for plastic surgery, and 2) providing an analytical and experimental underpinning of the effect of plastic surgery on face recognition algorithms. The results on the plastic surgery database suggest that it is an arduous research challenge and the current state-of-the-art face recognition algorithms are unable to provide acceptable levels of identification performance. Therefore, it is imperative to initiate a research effort so that future face recognition systems will be able to address this important problem.
One-class support vector machines: an application in machine fault detection and classification Fast incipient machine fault diagnosis is becoming one of the key requirements for economical and optimal process operation management. Artificial neural networks have been used to detect machine faults for a number of years and have been shown to be highly successful in this application area. This paper presents a novel test technique for machine fault detection and classification in electro-mechanical machinery from vibration measurements using one-class support vector machines (SVMs). In order to evaluate one-class SVMs, this paper examines the performance of the proposed method by comparing it with that of the multilayer perceptron, one of the artificial neural network techniques, based on real benchmarking data.
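A minimal one-class SVM sketch with scikit-learn, assuming precomputed vibration feature vectors from healthy operation; the synthetic data and the nu/gamma values are illustrative only.

```python
# One-class SVM sketch for fault detection (scikit-learn). Assumptions:
# rows of `healthy_feats` are feature vectors (e.g., vibration statistics)
# from normal operation only; nu and gamma values are illustrative.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)
healthy_feats = rng.normal(0.0, 1.0, size=(200, 8))      # stand-in for training features
test_feats = np.vstack([rng.normal(0.0, 1.0, size=(5, 8)),
                        rng.normal(4.0, 1.0, size=(5, 8))])  # last 5 mimic a fault

clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
clf.fit(healthy_feats)

pred = clf.predict(test_feats)   # +1 = normal, -1 = flagged as fault
print(pred)
```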
Deep learning human actions from video via sparse filtering and locally competitive algorithms Physiological and psychophysical evidence suggest that early visual cortex compresses the visual input on the basis of spatial and orientation-tuned filters. Recent computational advances have suggested that these neural response characteristics may reflect a 'sparse coding' architecture, in which a small number of neurons need to be active for any given image, yielding critical structure latent in natural scenes. Here we present a novel neural network architecture combining a sparse filter model and locally competitive algorithms (LCAs), and demonstrate the network's ability to classify human actions from video. Sparse filtering is an unsupervised feature learning algorithm designed to optimize the sparsity of the feature distribution directly without having the need to model the data distribution. LCAs are defined by a system of differential equations where the initial conditions define an optimization problem and the dynamics converge to a sparse decomposition of the input vector. We applied this architecture to train a classifier on categories of motion in human action videos. Inputs to the network were small 3D patches taken from frame differences in the videos. Dictionaries were derived for each action class and then activation levels for each dictionary were assessed during reconstruction of a novel test patch. Overall, classification accuracy was approximately 97%. We discuss how this sparse filtering approach provides a natural framework for multi-sensory and multimodal data processing including RGB video, RGBD video, hyper-spectral video, and stereo audio/video streams.
A novel geometric feature extraction method for ear recognition. We propose a novel geometric feature extraction approach for ear images. Both the maximum and the minimum ear height lines are used to characterize the contour of the outer helix. Our method achieves recognition rates of 98.33% on the USTB subset1 and 99.60% on the IIT Delhi database. Our geometric method can be combined with appearance approaches to improve the recognition performance. The discriminative ability of geometric features can be well supported by empirical studies in ear recognition. Recently, a number of methods have been suggested for geometric feature extraction from ear images. However, these methods usually have relatively high feature dimension or are sensitive to rotation and scale variations. In this paper, we propose a novel geometric feature extraction method to address these issues. First, our studies show that the minimum Ear Height Line (EHL) is also helpful to characterize the contour of the outer helix, and the combination of the maximum EHL and the minimum EHL can achieve better recognition performance. Second, we further extract three ratio-based features which are robust to scale variation. Our method has a feature dimension of six, and thus is efficient in matching for real-time ear recognition. Experimental results on two popular databases, i.e., USTB subset1 and IIT Delhi, show that the proposed approach can achieve promising recognition rates of 98.33% and 99.60%, respectively.
A survey on ear biometrics Recognizing people by their ear has recently received significant attention in the literature. Several reasons account for this trend: first, ear recognition does not suffer from some problems associated with other non-contact biometrics, such as face recognition; second, it is the most promising candidate for combination with the face in the context of multi-pose face recognition; and third, the ear can be used for human recognition in surveillance videos where the face may be occluded completely or in part. Further, the ear appears to degrade little with age. Even though current ear detection and recognition systems have reached a certain level of maturity, their success is limited to controlled indoor conditions. In addition to variation in illumination, other open research problems include hair occlusion, earprint forensics, ear symmetry, ear classification, and ear individuality. This article provides a detailed survey of research conducted in ear detection and recognition. It provides an up-to-date review of the existing literature revealing the current state-of-art for not only those who are working in this area but also for those who might exploit this new approach. Furthermore, it offers insights into some unsolved ear recognition problems as well as ear databases available for researchers.
Automated human identification using ear imaging This paper investigates a new approach for the automated human identification using 2D ear imaging. We present a completely automated approach for the robust segmentation of curved region of interest using morphological operators and Fourier descriptors. We also investigate new feature extraction approach for ear identification using localized orientation information and also examine local gray-level phase information using complex Gabor filters. Our investigation develops a computationally attractive and effective alternative to characterize the automatically segmented ear images using a pair of log-Gabor filters. The experimental results achieve average rank-one recognition accuracy of 96.27% and 95.93%, respectively, on the publicly available database of 125 and 221 subjects. Our experimental results from the authentication experiments and false positive identification verses false negative identification also suggest the superiority of the proposed approach over the other popular feature extraction approach considered in this work.
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
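A quick sentence-level BLEU computation with NLTK is sketched below; the smoothing choice is an assumption, and corpus-level BLEU over a full test set is what is normally reported.

```python
# Sentence-level BLEU sketch using NLTK (smoothing choice is an assumption;
# corpus-level BLEU over a full test set is what is normally reported).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "sat", "on", "the", "mat"]]
candidate = ["the", "cat", "is", "on", "the", "mat"]

score = sentence_bleu(reference, candidate,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU-4: {score:.3f}")
```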
TripRes: Traffic Flow Prediction Driven Resource Reservation for Multimedia IoV with Edge Computing The Internet of Vehicles (IoV) connects vehicles, roadside units (RSUs) and other intelligent objects, enabling data sharing among them, thereby improving the efficiency of urban traffic and safety. Currently, collections of multimedia content, generated by multimedia surveillance equipment, vehicles, and so on, are transmitted to edge servers for implementation, because edge computing is a formidable paradigm for accommodating multimedia services with low-latency resource provisioning. However, the uneven or discrete distribution of the traffic flow covered by edge servers negatively affects the service performance (e.g., overload and underload) of edge servers in multimedia IoV systems. Therefore, how to accurately schedule and dynamically reserve proper numbers of resources for multimedia services in edge servers is still challenging. To address this challenge, a traffic flow prediction driven resource reservation method, called TripRes, is developed in this article. Specifically, the city map is divided into different regions, and the edge servers in a region are treated as a “big edge server” to simplify the complex distribution of edge servers. Then, future traffic flows are predicted using the deep spatiotemporal residual network (ST-ResNet), and future traffic flows are used to estimate the amount of multimedia services each region needs to offload to the edge servers. With the number of services to be offloaded in each region, their offloading destinations are determined through latency-sensitive transmission path selection. Finally, the performance of TripRes is evaluated using real-world big data with over 100M multimedia surveillance records from RSUs in Nanjing China.
Experiment-driven Characterization of Full-Duplex Wireless Systems We present an experiment-based characterization of passive suppression and active self-interference cancellation mechanisms in full-duplex wireless communication systems. In particular, we consider passive suppression due to antenna separation at the same node, and active cancellation in analog and/or digital domain. First, we show that the average amount of cancellation increases for active cance...
Decentralized Plug-in Electric Vehicle Charging Selection Algorithm in Power Systems This paper uses a charging selection concept for plug-in electric vehicles (PEVs) to maximize user convenience levels while meeting predefined circuit-level demand limits. The optimal PEV-charging selection problem requires an exhaustive search over all possible combinations of PEVs in a power system, which cannot be solved for a practical number of PEVs. Inspired by the efficiency of the convex relaxation optimization tool in finding close-to-optimal results in huge search spaces, this paper proposes the application of the convex relaxation optimization method to solve the PEV-charging selection problem. Compared with the results of the uncontrolled case, the simulated results indicate that the proposed PEV-charging selection algorithm only slightly reduces user convenience levels, but significantly mitigates the impact of PEV charging on the power system. We also develop a distributed optimization algorithm to solve the PEV-charging selection problem in a decentralized manner, i.e., the binary charging decisions (charged or not charged) are made locally by each vehicle. Using the proposed distributed optimization algorithm, each vehicle is only required to report its power demand rather than reporting its private user state information, mitigating the security problems inherent in such problems. The proposed decentralized algorithm only requires low-speed communication capability, making it suitable for real-time implementation.
Wireless Networks with RF Energy Harvesting: A Contemporary Survey Radio frequency (RF) energy transfer and harvesting techniques have recently become alternative methods to power next-generation wireless networks. As this emerging technology enables proactive energy replenishment of wireless devices, it is advantageous in supporting applications with quality of service (QoS) requirements. In this paper, we present a comprehensive literature review on the research progress in wireless networks with RF energy harvesting capability, referred to as RF energy harvesting networks (RF-EHNs). First, we present an overview of the RF-EHNs including system architecture, RF energy harvesting techniques and existing applications. Then, we present the background in circuit design as well as the state-of-the-art circuitry implementations, and review the communication protocols specially designed for RF-EHNs. We also explore various key design issues in the development of RF-EHNs according to the network types, i.e., single-hop networks, multi-antenna networks, relay networks, and cognitive radio networks. Finally, we envision some open research directions.
Collective feature selection to identify crucial epistatic variants. In this study, we were able to show that selecting variables using a collective feature selection approach could help in selecting true positive epistatic variables more frequently than applying any single method for feature selection via simulation studies. We were able to demonstrate the effectiveness of collective feature selection along with a comparison of many methods in our simulation analysis. We also applied our method to identify non-linear networks associated with obesity.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.120444
0.114222
0.114222
0.114222
0.063333
0.039935
0.021496
0
0
0
0
0
0
0
An Identity-Based And Revocable Data-Sharing Scheme In VANETs Ensuring data confidentiality in a vehicular ad hoc network (VANET) is an increasingly important issue. Message confidentiality, user privacy and access control are the most important problems that affect services provided by VANETs. However, access control that addresses data downloads while preserving users' privacy remains an open problem. Based on a set of attributes, the ciphertext-policy attribute-based encryption (CP-ABE) algorithm provides an encryption/decryption mechanism for shared data; consequently, the algorithm has become a popular solution for data-sharing access control. However, the current CP-ABE schemes are still infeasible for VANETs because these schemes use a single authority and inefficient encryption/decryption and ignore revocation mechanisms. Here, building on CP-ABE with revocation, we introduce an identity-based scheme that achieves secure data sharing in VANETs. To reduce the computation load for in-vehicle on-board units (OBUs), we outsource computationally intensive encryption and decryption operations to cloud compute nodes. Attributes are decentralized and managed by application service providers that provide services to vehicles based on subscriptions. Comprehensive experimental results and security analysis show that our scheme achieves fine-grained access control while preserving user privacy. Through implementation, performance analysis demonstrates that our scheme is suitable for VANETs.
SmartVeh: Secure and Efficient Message Access Control and Authentication for Vehicular Cloud Computing. With the growing number of vehicles and popularity of various services in vehicular cloud computing (VCC), message exchanging among vehicles under traffic conditions and in emergency situations is one of the most pressing demands, and has attracted significant attention. However, it is an important challenge to authenticate the legitimate sources of broadcast messages and achieve fine-grained message access control. In this work, we propose SmartVeh, a secure and efficient message access control and authentication scheme in VCC. A hierarchical, attribute-based encryption technique is utilized to achieve fine-grained and flexible message sharing, which ensures that vehicles whose persistent or dynamic attributes satisfy the access policies can access the broadcast message with equipped on-board units (OBUs). Message authentication is enforced by integrating an attribute-based signature, which achieves message authentication and maintains the anonymity of the vehicles. In order to reduce the computations of the OBUs in the vehicles, we outsource the heavy computations of encryption, decryption and signing to a cloud server and road-side units. The theoretical analysis and simulation results reveal that our secure and efficient scheme is suitable for VCC.
Secure data sharing scheme for VANETs based on edge computing The development of information technology and the abundance of problems related to vehicular traffic have led to extensive studies on vehicular ad hoc networks (VANETs) to meet various aspects of vehicles, including safety, efficiency, management, and entertainment. In addition to the security applications provided by VANETs, vehicles can take advantage of other services and users who have subscribed to multiple services can migrate between different wireless network areas. Traditionally, roadside units (RSUs) have been used by vehicles to enjoy cross-domain services. This results in significant delays and large loads on the RSUs. To solve these problems, this paper introduces a scheme to share data among different domains. First, a few vehicles called edge computing vehicles (ECVs) are selected to act as edge computing nodes in accordance with the concept of edge computing. Next, the data to be shared are forwarded by the ECVs to the vehicle that has requested the service. This method results in low latency and load on the RSUs. Meanwhile, ciphertext-policy attribute-based encryption and elliptic curve cryptography are used to ensure the confidentiality of the information.
Attribute-Based Encryption With Parallel Outsourced Decryption For Edge Intelligent Iov Edge intelligence is an emerging concept referring to processes in which data are collected and analyzed and insights are delivered close to where the data are captured in a network using a selection of advanced intelligent technologies. As a promising solution to solve the problems of insufficient computing capacity and transmission latency, the edge intelligence-empowered Internet of Vehicles (IoV) is being widely investigated in both academia and industry. However, data sharing security in edge intelligent IoV is a challenge that should be solved with priority. Although attribute-based encryption (ABE) is capable of addressing this challenge, many time-consuming modular exponential operations and bilinear pair operations as well as serial computing cause ABE to have a slow decryption speed. Consequently, it cannot address the response time requirement of edge intelligent IoV. Given this problem, an ABE model with parallel outsourced decryption for edge intelligent IoV, called ABEM-POD, is proposed. It includes a generic parallel outsourced decryption method for ABE based on Spark and MapReduce. This method is applicable to all ABE schemes with a tree access structure and can be applied to edge intelligent IoV. Any ABE scheme based on the proposed model not only supports parallel outsourced decryption but also has the same security as the original scheme. In this paper, ABEM-POD has been applied to three representative ABE schemes, and the experiments show that the proposed ABEM-POD is efficient and easy to use. This approach can significantly improve the speed of outsourced decryption to address the response time requirement for edge intelligent IoV.
Secure message classification services through identity-based signcryption with equality test towards the Internet of vehicles To provide a classification function that has the ability to promote the management of signcrypted messages transmitted by the numerous vehicles in the IoV system, in this paper, we construct an identity-based signcryption with equality test scheme (IBSC-ET) for the first time. Our scheme not only ensures the integrity, confidentiality, and authentication of messages, but also enables the cloud server to execute the equality test between two ciphertexts signcrypted by the same or different public keys to decide whether the same plaintext is contained, which is the essential factor enabling the classification. Besides, the identity-based mechanism greatly improves the efficiency since the certificate management problem is avoided. Furthermore, the security of the introduced scheme can be proved in the random oracle model under the Computational Diffie-Hellman Assumption (CDHA) as well as the Bilinear Diffie-Hellman Assumption (BDHA). In the end, the feasibility and efficiency of our proposed scheme are demonstrated through performance analysis.
Fuzzy logic in control systems: fuzzy logic controller. I.
On the ratio of optimal integral and fractional covers It is shown that the ratio of optimal integral and fractional covers of a hypergraph does not exceed 1 + log d, where d is the maximum degree. This theorem may replace probabilistic methods in certain circumstances. Several applications are shown.
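As a companion illustration of the kind of integral cover this bound refers to, the sketch below runs the classical greedy covering heuristic, whose cover size is known to stay within roughly 1 + ln d of the optimum, with d the largest set size. The example instance is invented and the code is generic; it is not taken from the paper.

```python
def greedy_cover(universe, sets):
    """Greedy set cover: repeatedly pick the set covering the most uncovered elements.
    Its size is within roughly 1 + ln(d) of the optimal (fractional) cover,
    where d is the largest set size."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the set with the largest intersection with what is still uncovered.
        best = max(sets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            raise ValueError("remaining elements cannot be covered")
        chosen.append(best)
        uncovered -= set(best)
    return chosen

# Hypothetical toy instance for illustration only.
universe = range(1, 10)
sets = [{1, 2, 3, 4}, {4, 5, 6}, {6, 7, 8, 9}, {1, 5, 9}]
print(greedy_cover(universe, sets))
```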
Optimization Of Radio And Computational Resources For Energy Efficiency In Latency-Constrained Application Offloading Providing femto access points (FAPs) with computational capabilities will allow (either total or partial) offloading of highly demanding applications from smartphones to the so-called femto-cloud. Such offloading promises to be beneficial in terms of battery savings at the mobile terminal (MT) and/or in latency reduction in the execution of applications. However, for this promise to become a reality, the energy and/or the time required for the communication process must be compensated by the energy and/or the time savings that result from the remote computation at the FAPs. For this problem, we provide in this paper a framework for the joint optimization of the radio and computational resource usage exploiting the tradeoff between energy consumption and latency. Multiple antennas are assumed to be available at the MT and the serving FAP. As a result of the optimization, the optimal communication strategy (e.g., transmission power, rate, and precoder) is obtained, as well as the optimal distribution of the computational load between the handset and the serving FAP. This paper also establishes the conditions under which total or no offloading is optimal, determines which is the minimum affordable latency in the execution of the application, and analyzes, as a particular case, the minimization of the total consumed energy without latency constraints.
Integrating structured biological data by Kernel Maximum Mean Discrepancy Motivation: Many problems in data integration in bioinformatics can be posed as one common question: Are two sets of observations generated by the same distribution? We propose a kernel-based statistical test for this problem, based on the fact that two distributions are different if and only if there exists at least one function having different expectation on the two distributions. Consequently we use the maximum discrepancy between function means as the basis of a test statistic. The Maximum Mean Discrepancy (MMD) can take advantage of the kernel trick, which allows us to apply it not only to vectors, but strings, sequences, graphs, and other common structured data types arising in molecular biology. Results: We study the practical feasibility of an MMD-based test on three central data integration tasks: Testing cross-platform comparability of microarray data, cancer diagnosis, and data-content based schema matching for two different protein function classification schemas. In all of these experiments, including high-dimensional ones, MMD is very accurate in finding samples that were generated from the same distribution, and outperforms its best competitors. Conclusions: We have defined a novel statistical test of whether two samples are from the same distribution, compatible with both multivariate and structured data, that is fast, easy to implement, and works well, as confirmed by our experiments. Availability: Contact: kb@dbs.ifi.lmu.de
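A minimal numerical sketch of the test statistic described above, assuming a Gaussian RBF kernel on vector data and the simple biased estimate of squared MMD; the kernel choice, the bandwidth, and the toy data are assumptions of this illustration, not prescriptions from the paper.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian RBF kernel matrix between the row vectors of a and b."""
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mmd_biased(x, y, gamma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between samples x and y."""
    kxx = rbf_kernel(x, x, gamma).mean()
    kyy = rbf_kernel(y, y, gamma).mean()
    kxy = rbf_kernel(x, y, gamma).mean()
    return kxx + kyy - 2.0 * kxy

rng = np.random.default_rng(0)
same = mmd_biased(rng.normal(size=(100, 5)), rng.normal(size=(100, 5)))
diff = mmd_biased(rng.normal(size=(100, 5)), rng.normal(loc=1.0, size=(100, 5)))
print(f"same distribution: {same:.4f}, shifted distribution: {diff:.4f}")
```

In practice the statistic is compared against a null threshold (e.g., from permutations) rather than read off directly; the larger value for the shifted sample simply illustrates the intuition.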
Noninterference for a Practical DIFC-Based Operating System The Flume system is an implementation of decentralized information flow control (DIFC) at the operating system level. Prior work has shown Flume can be implemented as a practical extension to the Linux operating system, allowing real Web applications to achieve useful security guarantees. However, the question remains if the Flume system is actually secure. This paper compares Flume with other recent DIFC systems like Asbestos, arguing that the latter is inherently susceptible to certain wide-bandwidth covert channels, and proving their absence in Flume by means of a noninterference proof in the communicating sequential processes formalism.
Lower Extremity Exoskeletons and Active Orthoses: Challenges and State-of-the-Art In the nearly six decades since researchers began to explore methods of creating them, exoskeletons have progressed from the stuff of science fiction to nearly commercialized products. While there are still many challenges associated with exoskeleton development that have yet to be perfected, the advances in the field have been enormous. In this paper, we review the history and discuss the state-of-the-art of lower limb exoskeletons and active orthoses. We provide a design overview of hardware, actuation, sensory, and control systems for most of the devices that have been described in the literature, and end with a discussion of the major advances that have been made and hurdles yet to be overcome.
Magnetic, Acceleration Fields and Gyroscope Quaternion (MAGYQ)-based attitude estimation with smartphone sensors for indoor pedestrian navigation. The dependence of proposed pedestrian navigation solutions on a dedicated infrastructure is a limiting factor to the deployment of location based services. Consequently self-contained Pedestrian Dead-Reckoning (PDR) approaches are gaining interest for autonomous navigation. Even if the quality of low cost inertial sensors and magnetometers has strongly improved, processing noisy sensor signals combined with high hand dynamics remains a challenge. Estimating accurate attitude angles for achieving long term positioning accuracy is targeted in this work. A new Magnetic, Acceleration fields and GYroscope Quaternion (MAGYQ)-based attitude angles estimation filter is proposed and demonstrated with handheld sensors. It benefits from a gyroscope signal modelling in the quaternion set and two new opportunistic updates: magnetic angular rate update (MARU) and acceleration gradient update (AGU). MAGYQ filter performances are assessed indoors, outdoors, with dynamic and static motion conditions. The heading error, using only the inertial solution, is found to be less than 10 degrees after 1.5 km walking. The performance is also evaluated in the positioning domain with trajectories computed following a PDR strategy.
Inter-class sparsity based discriminative least square regression Least square regression is a very popular supervised classification method. However, two main issues greatly limit its performance. The first one is that it only focuses on fitting the input features to the corresponding output labels while ignoring the correlations among samples. The second one is that the used label matrix, i.e., zero–one label matrix is inappropriate for classification. To solve these problems and improve the performance, this paper presents a novel method, i.e., inter-class sparsity based discriminative least square regression (ICS_DLSR), for multi-class classification. Different from other methods, the proposed method pursues that the transformed samples have a common sparsity structure in each class. For this goal, an inter-class sparsity constraint is introduced to the least square regression model such that the margins of samples from the same class can be greatly reduced while those of samples from different classes can be enlarged. In addition, an error term with row-sparsity constraint is introduced to relax the strict zero–one label matrix, which allows the method to be more flexible in learning the discriminative transformation matrix. These factors encourage the method to learn a more compact and discriminative transformation for regression and thus has the potential to perform better than other methods. Extensive experimental results show that the proposed method achieves the best performance in comparison with other methods for multi-class classification.
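For orientation, the sketch below implements only the plain (ridge-regularized) least square regression baseline that the abstract builds on, fitting features to a zero-one label matrix; it is not the proposed ICS_DLSR model, and the small regularization constant is an assumption added for numerical stability.

```python
import numpy as np

def lsr_train(X, labels, n_classes, lam=1e-2):
    """Baseline least square regression to a zero-one label matrix:
    W = (X^T X + lam I)^{-1} X^T Y."""
    Y = np.eye(n_classes)[labels]                      # zero-one label matrix
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def lsr_predict(X, W):
    """Assign each sample to the class with the largest regression response."""
    return (X @ W).argmax(axis=1)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, size=(30, 4)) for c in range(3)])
y = np.repeat(np.arange(3), 30)
W = lsr_train(X, y, n_classes=3)
print("training accuracy:", (lsr_predict(X, W) == y).mean())
```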
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education is highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets, to compensate for robot’s limitations, however, this study applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. Robot’s behavioral style (social or neutral) hardly differed overall, however, seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
1.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
0
Observer Design For One-Sided Lipschitz Discrete-Time Switched Non-Linear Systems Under Asynchronous Switching The main objective of this paper is observer design for a class of non-linear switched systems, satisfying one-sided Lipschitz condition. The underlying systems may include a larger class of non-linearities with less conservativeness, compared with Lipschitz systems. By the assumption of synchronous switching, a Luenberger-like observer is first designed for state estimation, based on the average dwell time strategy. Then, under asynchronous switching, an observer is proposed with asymptotic convergence property. In addition to theoretical analysis and stability proof, the effectiveness of the proposed observers is presented and discussed via illustrative examples.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
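A toy forward pass of the bidirectional structure described above, assuming simple tanh recurrent cells with random weights; it only illustrates how the forward and the time-reversed backward hidden states are aligned and concatenated per step, not the training procedure.

```python
import numpy as np

def rnn_pass(inputs, Wx, Wh, b):
    """Run a simple tanh RNN over a sequence, returning the hidden state per step."""
    h = np.zeros(Wh.shape[0])
    states = []
    for x in inputs:
        h = np.tanh(Wx @ x + Wh @ h + b)
        states.append(h)
    return states

def birnn_forward(inputs, params_f, params_b):
    """Bidirectional pass: concatenate forward and (re-aligned) backward states."""
    fwd = rnn_pass(inputs, *params_f)
    bwd = rnn_pass(inputs[::-1], *params_b)[::-1]   # backward in time, then re-align
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

rng = np.random.default_rng(0)
d_in, d_h, T = 3, 5, 7                               # toy sizes (assumptions)
make = lambda: (rng.normal(size=(d_h, d_in)),
                rng.normal(size=(d_h, d_h)) * 0.1,
                np.zeros(d_h))
seq = [rng.normal(size=d_in) for _ in range(T)]
outputs = birnn_forward(seq, make(), make())
print(len(outputs), outputs[0].shape)                # 7 steps, each with 2*d_h features
```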
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidence intended for Bob, and non-repudiation of receipt evidence destined for Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols with and without a trusted third party (TTP). For the latter ones we discuss the evolution of the TTP's involvement and, among others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidence.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to the set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probabilistically. Because there is no crossover rate or mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional one and other methods.
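The sketch below illustrates the conditional-operator idea on a generic bit-string problem: crossover and mutation fire when simple conditions hold rather than with fixed probabilities. The particular conditions and the OneMax objective are illustrative assumptions, not the rules used in the paper.

```python
import random

def fitness(bits):
    """Toy objective: maximize the number of ones (OneMax)."""
    return sum(bits)

def conditional_step(pop):
    """One generation in which operators fire on conditions instead of fixed rates."""
    pop = sorted(pop, key=fitness, reverse=True)
    avg = sum(map(fitness, pop)) / len(pop)
    nxt = [pop[0][:]]                                   # keep the best (elitism)
    while len(nxt) < len(pop):
        a, b = random.sample(pop[: len(pop) // 2], 2)   # mate among the better half
        # Condition for crossover: parents are sufficiently different (assumed rule).
        if sum(x != y for x, y in zip(a, b)) > len(a) // 8:
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]
        else:
            child = a[:]
        # Condition for mutation: child is no better than the population average (assumed rule).
        if fitness(child) <= avg:
            i = random.randrange(len(child))
            child[i] ^= 1
        nxt.append(child)
    return nxt

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(40)] for _ in range(30)]
for _ in range(60):
    pop = conditional_step(pop)
print("best fitness:", max(map(fitness, pop)))
```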
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A recent survey of reversible watermarking techniques. The art of secretly hiding and communicating information has gained immense importance in the last two decades due to the advances in generation, storage, and communication technology of digital content. Watermarking is one of the promising solutions for tamper detection and protection of digital content. However, watermarking can cause damage to the sensitive information present in the cover work. Therefore, at the receiving end, the exact recovery of cover work may not be possible. Additionally, there exist certain applications that may not tolerate even small distortions in cover work prior to the downstream processing. In such applications, reversible watermarking instead of conventional watermarking is employed. Reversible watermarking of digital content allows full extraction of the watermark along with the complete restoration of the cover work. For the last few years, reversible watermarking techniques have been gaining popularity because of their increasing applications in some important and sensitive areas, i.e., military communication, healthcare, and law-enforcement. Due to the rapid evolution of reversible watermarking techniques, an up-to-date review of recent research in this field is highly desirable. In this survey, the performances of different reversible watermarking schemes are discussed on the basis of various characteristics of watermarking. However, the major focus of this survey is on prediction-error expansion based reversible watermarking techniques, whereby the secret information is hidden in the prediction domain through error expansion. Comparison of the different reversible watermarking techniques is provided in tabular form, and an analysis is carried out. Additionally, experimental comparison of some of the recent reversible watermarking techniques, both in terms of watermarking properties and computational time, is provided on a dataset of 300 images. Future directions are also provided for this potentially important field of watermarking.
The novel bilateral-diffusion image encryption algorithm with dynamical compound chaos Chaos may be degenerated because of the finite precision effect; hence, a new compound two-dimensional chaotic function is presented by exploiting two one-dimensional chaotic functions which are switched randomly. A new chaotic sequence generator is designed by the compound chaos, which is proved by Devaney's definition of chaos. The properties of dynamical compound chaotic functions and the LFSR are also proved rigorously. A novel bilateral-diffusion image encryption algorithm is proposed based on the dynamical compound chaotic function and LFSR, which produces a stronger avalanche effect and a larger key space. The entropy analysis, differential analysis, statistical analysis, cipher random analysis, and cipher sensitivity analysis are introduced to test the security of the new scheme. Many experimental results show that the novel image encryption method passes the SP 800-22 and DIEHARD standard tests and solves the problem of short cycle and low precision of one-dimensional chaotic functions.
A new color image encryption scheme based on DNA sequences and multiple improved 1D chaotic maps A DNA-based color image encryption method is proposed by using three 1D chaotic systems with excellent performance and easy implementation. The key streams used for encryption are related to both the secret keys and the plain-image. To improve the security and sensitivity, a division-shuffling process is introduced. Transforming the plain-image and the key streams into the DNA matrices randomly can further enhance the security of the cryptosystem. The presented scheme has a good robustness for some common image processing operations and geometric attack. This paper proposes a new encryption scheme for color images based on Deoxyribonucleic acid (DNA) sequence operations and multiple improved one-dimensional (1D) chaotic systems with excellent performance. Firstly, the key streams are generated from three improved 1D chaotic systems by using the secret keys and the plain-image. Transform randomly the key streams and the plain-image into the DNA matrices by the DNA encoding rules, respectively. Secondly, perform the DNA complementary and XOR operations on the DNA matrices to get the scrambled DNA matrices. Thirdly, decompose equally the scrambled DNA matrices into blocks and shuffle these blocks randomly. Finally, implement the DNA XOR and addition operations on the DNA matrices obtained from the previous step and the key streams, and then convert the encrypted DNA matrices into the cipher-image by the DNA decoding rules. Experimental results and security analysis show that the proposed encryption scheme has a good encryption effect and high security. Moreover, it has a strong robustness for the common image processing operations and geometric attack.
A lightweight method of data encryption in BANs using electrocardiogram signal. Body area network (BAN) is a key technology for telemedicine, where protecting the security of vital-sign information becomes a very important technical requirement. Traditional encryption methods are not suitable for BANs due to their complex algorithms and large resource consumption. This paper proposes a new encryption method based on the QRS complex of the ECG signal, which adopts the vital signs from the BAN system to form the initial key, utilizes an LFSR (Linear Feedback Shift Register) circuit to generate the key stream, and then encrypts the data in the BANs. This new encryption method has the advantages of low energy consumption, simple hardware implementation, and dynamic key updating.
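A minimal sketch of the LFSR keystream idea the scheme relies on, assuming a 16-bit Fibonacci register with a standard tap polynomial; the numeric seed below merely stands in for the ECG/QRS-derived initial key, which this illustration does not compute.

```python
def lfsr_stream(seed, taps=(16, 14, 13, 11), nbits=16):
    """Fibonacci LFSR: yields one keystream bit per step.
    `seed` would be derived from QRS features in the scheme above (assumed here)."""
    state = seed & ((1 << nbits) - 1)
    assert state != 0, "LFSR state must be non-zero"
    while True:
        fb = 0
        for t in taps:                      # XOR the tapped bits to form feedback
            fb ^= (state >> (t - 1)) & 1
        out = state & 1
        state = (state >> 1) | (fb << (nbits - 1))
        yield out

def xor_encrypt(data: bytes, seed: int) -> bytes:
    """Encrypt/decrypt by XOR-ing each byte with 8 keystream bits."""
    ks = lfsr_stream(seed)
    out = bytearray()
    for byte in data:
        k = 0
        for _ in range(8):
            k = (k << 1) | next(ks)
        out.append(byte ^ k)
    return bytes(out)

seed = 0xACE1                               # placeholder for an ECG-derived key
msg = b"heart rate: 72 bpm"
ct = xor_encrypt(msg, seed)
print(xor_encrypt(ct, seed) == msg)         # the same operation decrypts
```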
Cryptanalysis of a DNA-based image encryption scheme •An image encryption scheme using 2D Hénon-Sine map and DNA coding is cracked. •We firstly use S-box to synthesize cryptographic effects of the involved DNA arithmetic. •Permutation vector and generalized s-boxes serve as equivalent secret elements. •Details of the encryption elements are retrieved by chosen-plaintext attack. •Experimental result and relative discussion are given for validation.
Watermarking techniques used in medical images: a survey. The ever-growing numbers of medical digital images and the need to share them among specialists and hospitals for better and more accurate diagnosis require that patients' privacy be protected. As a result of this, there is a need for medical image watermarking (MIW). However, MIW needs to be performed with special care for two reasons. Firstly, the watermarking procedure cannot compromise the quality of the image. Secondly, confidential patient information embedded within the image should be flawlessly retrievable without risk of error after image decompressing. Despite extensive research undertaken in this area, there is still no method available to fulfill all the requirements of MIW. This paper aims to provide a useful survey on watermarking and offer a clear perspective for interested researchers by analyzing the strengths and weaknesses of different existing methods.
Hybrid technique for robust and imperceptible multiple watermarking using medical images This paper presents a secure multiple watermarking method based on discrete wavelet transform (DWT), discrete cosine transform (DCT) and singular value decomposition (SVD). For identity authentication purposes, the proposed method uses a medical image as the image watermark, and the personal and medical record of the patient as the text watermark. In the embedding process, the cover medical image is decomposed up to the second level of DWT coefficients. The low-frequency band (LL) of the host medical image is transformed by DCT and SVD. The watermark medical image is also transformed by DCT and SVD. The singular value of the watermark image is embedded in the singular value of the host image. Furthermore, the text watermark is embedded in the second-level high-frequency band (HH) of the host image. In order to enhance the security of the text watermark, encryption is applied to the ASCII representation of the text watermark before embedding. Results are obtained by varying the gain factor, the size of the text watermark, and the medical image modalities. Experimental results are provided to illustrate that the proposed method is able to withstand a variety of signal processing attacks such as JPEG compression, Gaussian noise, salt-and-pepper noise, histogram equalization, etc. The performance of the proposed technique is also evaluated by using the benchmark software Checkmark, and the technique is found to be robust against the Checkmark attacks such as Collage, Trimmed Mean, Hard and Soft Thresholding, Wavelet Compression, Mid Point, Projective, and Wrap, etc.
Rich Models for Steganalysis of Digital Images We describe a novel general strategy for building steganography detectors for digital images. The process starts with assembling a rich model of the noise component as a union of many diverse submodels formed by joint distributions of neighboring samples from quantized image noise residuals obtained using linear and nonlinear high-pass filters. In contrast to previous approaches, we make the model assembly a part of the training process driven by samples drawn from the corresponding cover- and stego-sources. Ensemble classifiers are used to assemble the model as well as the final steganalyzer due to their low computational complexity and ability to efficiently work with high-dimensional feature spaces and large training sets. We demonstrate the proposed framework on three steganographic algorithms designed to hide messages in images represented in the spatial domain: HUGO, edge-adaptive algorithm by Luo, and optimally coded ternary ±1 embedding. For each algorithm, we apply a simple submodel-selection technique to increase the detection accuracy per model dimensionality and show how the detection saturates with increasing complexity of the rich model. By observing the differences between how different submodels engage in detection, an interesting interplay between the embedding and detection is revealed. Steganalysis built around rich image models combined with ensemble classifiers is a promising direction towards automatizing steganalysis for a wide spectrum of steganographic schemes.
Dynamic Computation Offloading for Mobile-Edge Computing with Energy Harvesting Devices. Mobile-edge computing (MEC) is an emerging paradigm to meet the ever-increasing computation demands from mobile applications. By offloading the computationally intensive workloads to the MEC server, the quality of computation experience, e.g., the execution latency, could be greatly improved. Nevertheless, as the on-device battery capacities are limited, computation would be interrupted when the battery energy runs out. To provide satisfactory computation performance as well as achieving green computing, it is of significant importance to seek renewable energy sources to power mobile devices via energy harvesting (EH) technologies. In this paper, we will investigate a green MEC system with EH devices and develop an effective computation offloading strategy. The execution cost, which addresses both the execution latency and task failure, is adopted as the performance metric. A low-complexity online algorithm is proposed, namely, the Lyapunov optimization-based dynamic computation offloading algorithm, which jointly decides the offloading decision, the CPU-cycle frequencies for mobile execution, and the transmit power for computation offloading. A unique advantage of this algorithm is that the decisions depend only on the current system state without requiring distribution information of the computation task request, wireless channel, and EH processes. The implementation of the algorithm only requires to solve a deterministic problem in each time slot, for which the optimal solution can be obtained either in closed form or by bisection search. Moreover, the proposed algorithm is shown to be asymptotically optimal via rigorous analysis. Sample simulation results shall be presented to corroborate the theoretical analysis as well as validate the effectiveness of the proposed algorithm.
The exploration/exploitation tradeoff in dynamic cellular genetic algorithms This paper studies static and dynamic decentralized versions of the search model known as cellular genetic algorithm (cGA), in which individuals are located in a specific topology and interact only with their neighbors. Making changes in the shape of such topology or in the neighborhood may give birth to a high number of algorithmic variants. We perform these changes in a methodological way by tuning the concept of ratio. Since the relationship (ratio) between the topology and the neighborhood shape defines the search selection pressure, we propose to analyze in depth the influence of this ratio on the exploration/exploitation tradeoff. As we will see, it is difficult to decide which ratio is best suited for a given problem. Therefore, we introduce a preprogrammed change of this ratio during the evolution as a possible additional improvement that removes the need of specifying a single ratio. A later refinement will lead us to the first adaptive dynamic kind of cellular models to our knowledge. We conclude that these dynamic cGAs have the most desirable behavior among all the evaluated ones in terms of efficiency and accuracy; we validate our results on a set of seven different problems of considerable complexity in order to better sustain our conclusions.
Human Shoulder Modeling Including Scapulo-Thoracic Constraint And Joint Sinus Cones In virtual human modeling, the shoulder is usually composed of clavicular, scapular and arm segments related by rotational joints. Although the model is improved, the realistic animation of the shoulder is hardly achieved. This is due to the fact that it is difficult to coordinate the simultaneous motion of the shoulder components in a consistent way. Also, the common use of independent one-degree of freedom (DOF) joint hierarchies does not properly render the 3-D accessibility space of real joints. On the basis of former biomechanical investigations, we propose here an extended shoulder model including scapulo-thoracic constraint and joint sinus cones. As a demonstration, the model is applied, using inverse kinematics, to the animation of a 3-D anatomic muscled skeleton model. (C) 2000 Elsevier Science Ltd. All rights reserved.
A review on interval type-2 fuzzy logic applications in intelligent control. A review of the applications of interval type-2 fuzzy logic in intelligent control has been considered in this paper. The fundamental focus of the paper is based on the basic reasons for using type-2 fuzzy controllers for different areas of application. Recently, bio-inspired methods have emerged as powerful optimization algorithms for solving complex problems. In the case of designing type-2 fuzzy controllers for particular applications, the use of bio-inspired optimization methods have helped in the complex task of finding the appropriate parameter values and structure of the fuzzy systems. In this review, we consider the application of genetic algorithms, particle swarm optimization and ant colony optimization as three different paradigms that help in the design of optimal type-2 fuzzy controllers. We also mention alternative approaches to designing type-2 fuzzy controllers without optimization techniques.
A robust medical image watermarking against salt and pepper noise for brain MRI images. The ever-growing numbers of medical digital images and the need to share them among specialists and hospitals for better and more accurate diagnosis require that patients' privacy be protected. During the transmission of medical images between hospitals or specialists through the network, the main priority is to protect a patient's documents against any act of tampering by unauthorised individuals. Because of this, there is a need for medical image authentication scheme to enable proper diagnosis on patient. In addition, medical images are also susceptible to salt and pepper impulse noise through the transmission in communication channels. This noise may also be intentionally used by the invaders to corrupt the embedded watermarks inside the medical images. A common drawback of existing watermarking methods is their weakness against salt and pepper noise. The research carried out in this work addresses the issue of designing a new watermarking method that can withstand high density of salt and pepper noise for brain MRI images. For this purpose, combination of a spatial domain watermarking method, channel coding and noise filtering schemes are used. The region of non-interest (RONI) of MRI images from five different databases are used as embedding area and electronic patient record (EPR) is considered as embedded data. The quality of watermarked image is evaluated using Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), and the accuracy of the extracted watermark is assessed in terms of Bit Error Rate (BER).
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education is highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets, to compensate for robot’s limitations, however, this study applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. Robot’s behavioral style (social or neutral) hardly differed overall, however, seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
1.044537
0.04
0.04
0.04
0.04
0.010266
0.002714
0.000767
0
0
0
0
0
0
Designing adaptive humanoid robots through the FARSA open-source framework We introduce FARSA, an open-source Framework for Autonomous Robotics Simulation and Analysis, that allows us to easily set up and carry out adaptive experiments involving complex robot/environmental models. Moreover, we show how a simulated iCub robot can be trained, through an evolutionary algorithm, to display reaching and integrated reaching and grasping behaviours. The results demonstrate how the use of an implicit selection criterion, estimating the extent to which the robot is able to produce the expected outcome without specifying the manner through which the action should be realized, is sufficient to develop the required capabilities despite the complexity of the robot and of the task.
Fast learning neural networks using Cartesian genetic programming A fast learning neuroevolutionary algorithm for both feedforward and recurrent networks is proposed. The method is inspired by the well known and highly effective Cartesian genetic programming (CGP) technique. The proposed method is called the CGP-based Artificial Neural Network (CGPANN). The basic idea is to replace each computational node in CGP with an artificial neuron, thus producing an artificial neural network. The capabilities of CGPANN are tested in two diverse problem domains. Firstly, it has been tested on a standard benchmark control problem: single and double pole for both Markovian and non-Markovian cases. Results demonstrate that the method can generate effective neural architectures in substantially fewer evaluations in comparison to previously published neuroevolutionary techniques. In addition, the evolved networks show improved generalization and robustness in comparison with other techniques. Secondly, we have explored the capabilities of CGPANNs for the diagnosis of breast cancer from FNA (Fine Needle Aspiration) data samples. The results demonstrate that the proposed algorithm gives 99.5% accurate results, thus making it an excellent choice for pattern recognition in medical diagnosis, owing to its properties of fast learning and accuracy. The power of a CGP based ANN is its representation which leads to an efficient evolutionary search of suitable topologies. This opens new avenues for applying the proposed technique to other linear/non-linear and Markovian/non-Markovian control and pattern recognition problems.
An Effective Clustering Approach with Data Aggregation Using Multiple Mobile Sinks for Heterogeneous WSN Wireless sensor networks (WSNs) mostly use a static sink to collect data from the sensor nodes randomly deployed in the sensor region. In the static-sink-based approach, data packets are flooded across the network to reach the base station over multi-hop communication. Because of this, the static sink is inefficient in energy utilization. Recently, mobile sinks have been used for data gathering; they reduce energy utilization, which in turn increases the network lifetime. However, sink mobility makes it difficult to find the routing path for the data packets. This paper proposes an effective clustering approach with data aggregation using multiple mobile sinks for heterogeneous WSNs. The proposed algorithm increases the network lifetime while limiting energy utilization.
Improving reporting delay and lifetime of a WSN using controlled mobile sinks Wireless sensor networks (WSNs) are characterized by a many-to-one traffic pattern, where a large number of nodes communicate their sensed data to the sink node. Due to heavy data traffic near the sink node, the nodes closer to the sink node tend to exhaust their energy faster compared with the nodes situated farther from the sink. This may lead to fragmentation of the network due to the early demise of sensor nodes situated closer to the sink. To alleviate this problem, mobile sinks are proposed for WSNs. Mobile sinks can provide uniform energy consumption, load distribution, low reporting delay, and quick data-delivery paths. However, the position of the mobile sink needs to be updated regularly, and such position update messages may reduce the network lifetime. In this paper, we propose a novel Location Aware Routing for Controlled Mobile Sinks (LARCMS), which will help in minimizing reporting delay, enhancing network lifetime, handling sink position updates and providing uniform energy consumption. The proposed technique uses two mobile sinks on a predefined trajectory for data collection and provides better results compared to existing techniques. The performance of LARCMS is evaluated by comparison with similar mobile-sink routing protocols through extensive simulations in MATLAB.
Multiobjective Evolution of Fuzzy Rough Neural Network via Distributed Parallelism for Stock Prediction Fuzzy rough theory can describe real-world situations in a mathematically effective and interpretable way, while evolutionary neural networks can be utilized to solve complex problems. Combining these complementary capabilities may lead to an evolutionary fuzzy rough neural network with both interpretability and prediction capability. In this article, we propose modifications to the existing models of fuzzy rough neural network and then develop a powerful evolutionary framework for fuzzy rough neural networks by inheriting the merits of both the aforementioned systems. We first introduce rough neurons and enhance the consequence nodes, and further integrate the interval type-2 fuzzy set into the existing fuzzy rough neural network model. Thus, several modified fuzzy rough neural network models are proposed. While simultaneously considering the objectives of prediction precision and network simplicity, each model is transformed into a multiobjective optimization problem by encoding the structure, membership functions, and the parameters of the network. To solve these optimization problems, distributed parallel multiobjective evolutionary algorithms are proposed. We enhance the optimization processes with several measures including optimizer replacement and parameter adaptation. In the distributed parallel environment, the tedious and time-consuming neural network optimization can be alleviated by numerous computational resources, significantly reducing the computational time. Through experimental verification on complex stock time series prediction tasks, the proposed optimization algorithms and the modified fuzzy rough neural network models exhibit significant improvements over the existing fuzzy rough neural network and the long short-term memory network.
Weighted Rendezvous Planning on Q-Learning Based Adaptive Zone Partition with PSO Based Optimal Path Selection Nowadays, the wireless sensor network (WSN) has emerged as a highly active research area. Various approaches have been demonstrated for reducing the energy consumption of sensor nodes with a mobile sink in WSNs. However, such approaches depend on the path selected by the mobile sink, since all sensed data should be gathered within the given time constraint. Therefore, in this article, the issue of optimal path selection is addressed when multiple mobile sinks are considered in a WSN. In the initial stage, a Q-learning based Adaptive Zone Partition method is applied to split the network into smaller zones. In each zone, the location and residual energy of nodes are transmitted to the mobile sinks through a Mobile Anchor. Moreover, Weighted Rendezvous Planning is proposed to assign a weight to every node according to its hop distance. The collected data packets are transmitted to the mobile sink node within the given delay bound by means of a designated set of rendezvous points (RP). Then, an optimal path from the RPs to the mobile sink is selected using the particle swarm optimization (PSO) algorithm applied during the routing process. Experimental results demonstrated the effectiveness of the proposed approach, where network lifetime is increased by reducing the energy consumption of multi-hop transmission.
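The abstract above does not spell out the PSO formulation, so the following is a minimal, generic particle swarm optimization sketch in Python (NumPy only) that minimizes a placeholder path-cost function. The cost function, swarm size, inertia and acceleration coefficients are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-10.0, 10.0)):
    """Generic particle swarm optimization (illustrative sketch)."""
    lo, hi = bounds
    pos = np.random.uniform(lo, hi, (n_particles, dim))   # particle positions
    vel = np.zeros((n_particles, dim))                     # particle velocities
    pbest = pos.copy()                                     # personal best positions
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()              # global best position

    for _ in range(iters):
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

# Placeholder "path cost": length of a route through fixed waypoints with one free 2-D point.
waypoints = np.array([[0, 0], [3, 4], [6, 1], [9, 5]], dtype=float)
def path_cost(x):
    route = np.vstack([waypoints[:1], x.reshape(1, 2), waypoints[1:]])
    return np.sum(np.linalg.norm(np.diff(route, axis=0), axis=1))

best, best_cost = pso_minimize(path_cost, dim=2)
print(best, best_cost)
```

In the paper's setting the decision variables would encode a tour over the rendezvous points rather than a free 2-D point; the loop structure stays the same.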
Distinctive Image Features from Scale-Invariant Keypoints This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
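As a concrete illustration of the keypoint-matching stage described above, the sketch below uses OpenCV's SIFT implementation with a nearest-neighbour ratio test. The file names are placeholders, and the 0.75 ratio threshold is a common heuristic rather than a value taken from the paper.

```python
import cv2

# Load two views of the same scene (placeholder file names).
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # keypoints + 128-D descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Nearest-neighbour matching with a ratio test to reject ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative matches survive the ratio test")
```

The subsequent Hough clustering and least-squares pose verification described in the abstract would operate on these surviving matches.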
An introduction to ROC analysis Receiver operating characteristics (ROC) graphs are useful for organizing classifiers and visualizing their performance. ROC graphs are commonly used in medical decision making, and in recent years have been used increasingly in machine learning and data mining research. Although ROC graphs are apparently simple, there are some common misconceptions and pitfalls when using them in practice. The purpose of this article is to serve as an introduction to ROC graphs and as a guide for using them in research.
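A minimal example of constructing a ROC curve and its area under the curve (AUC) with scikit-learn; the labels and classifier scores below are synthetic.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic ground-truth labels and classifier scores.
y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.9, 0.5, 0.6, 0.3])

fpr, tpr, thresholds = roc_curve(y_true, y_score)   # points of the ROC curve
auc = roc_auc_score(y_true, y_score)                # area under the curve
print("AUC =", auc)
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold {th:.2f}: FPR={f:.2f}, TPR={t:.2f}")
```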
A Comprehensive Survey on Internet of Things (IoT) Toward 5G Wireless Systems Recently, wireless technologies have been growing actively all around the world. In the context of wireless technology, fifth-generation (5G) technology has become a most challenging and interesting topic in wireless research. This article provides an overview of the Internet of Things (IoT) in 5G wireless systems. IoT in the 5G system will be a game changer in the future generation. It will open a door for new wireless architectures and smart services. The current cellular network, LTE (4G), will not be sufficient or efficient to meet the demands of massive device connectivity, high data rates, more bandwidth, low-latency quality of service (QoS), and low interference. To address these challenges, we consider 5G as the most promising technology. We provide a detailed overview of the challenges and vision of various communication industries in 5G IoT systems. The different layers in 5G IoT systems are discussed in detail. This article provides a comprehensive review of emerging and enabling technologies related to the 5G system that enable IoT. We discuss in detail the technology drivers for 5G wireless technology, such as 5G new radio (NR), multiple-input multiple-output (MIMO) antennas with beamforming technology, mm-wave communication technology, heterogeneous networks (HetNets), and the role of augmented reality (AR) in IoT. We also provide a review of low-power wide-area networks (LPWANs), security challenges, and their control measures in the 5G IoT scenario. This article introduces the role of AR in the 5G IoT scenario and also discusses the research gaps and future directions. The focus is also on application areas of IoT in 5G systems. We therefore outline some of the important research directions in 5G IoT.
Space-time super-resolution. We propose a method for constructing a video sequence of high space-time resolution by combining information from multiple low-resolution video sequences of the same dynamic scene. Super-resolution is performed simultaneously in time and in space. By "temporal super-resolution," we mean recovering rapid dynamic events that occur faster than regular frame-rate. Such dynamic events are not visible (or else are observed incorrectly) in any of the input sequences, even if these are played in "slow-motion." The spatial and temporal dimensions are very different in nature, yet are interrelated. This leads to interesting visual trade-offs in time and space and to new video applications. These include: 1) treatment of spatial artifacts (e.g., motion-blur) by increasing the temporal resolution and 2) combination of input sequences of different space-time resolutions (e.g., NTSC, PAL, and even high quality still images) to generate a high quality video sequence. We further analyze and compare characteristics of temporal super-resolution to those of spatial super-resolution. These include: How many video cameras are needed to obtain increased resolution? What is the upper bound on resolution improvement via super-resolution? What is the temporal analogue to the spatial "ringing" effect?
Data-Driven Intelligent Transportation Systems: A Survey For the last two decades, intelligent transportation systems (ITS) have emerged as an efficient way of improving the performance of transportation systems, enhancing travel security, and providing more choices to travelers. A significant change in ITS in recent years is that much more data are collected from a variety of sources and can be processed into various forms for different stakeholders. The availability of a large amount of data can potentially lead to a revolution in ITS development, changing an ITS from a conventional technology-driven system into a more powerful multifunctional data-driven intelligent transportation system (D2ITS): a system that is vision, multisource, and learning-algorithm driven to optimize its performance. Furthermore, D2ITS is trending to become a privacy-aware, people-centric, more intelligent system. In this paper, we provide a survey on the development of D2ITS, discussing the functionality of its key components and some deployment issues associated with D2ITS. Future research directions for the development of D2ITS are also presented.
Online Prediction of Driver Distraction Based on Brain Activity Patterns This paper presents a new computational framework for early detection of driver distractions (map viewing) using brain activity measured by electroencephalographic (EEG) signals. Compared with most studies in the literature, which are mainly focused on the classification of distracted and nondistracted periods, this study proposes a new framework to prospectively predict the start and end of a distraction period, defined by map viewing. The proposed prediction algorithm was tested on a data set of continuous EEG signals recorded from 24 subjects. During the EEG recordings, the subjects were asked to drive from an initial position to a destination using a city map in a simulated driving environment. The overall accuracy values for the prediction of the start and the end of map viewing were 81% and 70%, respectively. The experimental results demonstrated that the proposed algorithm can predict the start and end of map viewing with relatively high accuracy and can be generalized to individual subjects. The outcome of this study has a high potential to improve the design of future intelligent navigation systems. Prediction of the start of map viewing can be used to provide route information based on a driver's needs and consequently avoid map-viewing activities. Prediction of the end of map viewing can be used to provide warnings for potential long map-viewing durations. Further development of the proposed framework and its applications in driver-distraction predictions are also discussed.
An indoor localization solution using Bluetooth RSSI and multiple sensors on a smartphone. In this paper, we propose an indoor positioning system using a Bluetooth receiver, an accelerometer, a magnetic field sensor, and a barometer on a smartphone. The Bluetooth receiver is used to estimate distances from beacons. The accelerometer and magnetic field sensor are used to trace the movement of moving people in the given space. The horizontal location of the person is determined by received signal strength indications (RSSIs) and the traced movement. The barometer is used to measure the vertical position where a person is located. By combining RSSIs, the traced movement, and the vertical position, the proposed system estimates the indoor position of moving people. In experiments, the proposed approach showed excellent performance in localization with an overall error of 4.8%.
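The abstract above does not give the exact formulas, so the sketch below illustrates one common way to turn the listed ingredients into position estimates: a log-distance path-loss model for Bluetooth RSSI and the barometric altitude formula for the vertical component. The reference values (TX power at 1 m, path-loss exponent, sea-level pressure) are assumptions, not values from the paper.

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Log-distance path-loss model: distance in metres from a single RSSI reading."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def pressure_to_altitude(pressure_hpa, sea_level_hpa=1013.25):
    """Barometric formula (international standard atmosphere), altitude in metres."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

print(rssi_to_distance(-70.0))          # roughly 3.5 m from one beacon
print(pressure_to_altitude(1007.0))     # roughly 52 m above the reference level
```

In a full system, several such beacon distances would be combined with the dead-reckoned trajectory from the accelerometer and magnetometer, while the pressure-derived altitude selects the floor.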
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0
Towards Next-Generation Vehicles Featuring the Vehicle Intelligence Safe driving and minimizing the number of casualties have been the main motivations of researchers and car companies for decades. They also care about reducing fuel consumption and providing high-comfort trips. With the help of advanced driver assistance systems (ADAS) applications, safer, more comfortable, and greener trips are very likely at the present time. However, today humankind is very close to making a very old dream, namely driverless vehicles, come true. In this paper, we address the concept of next-generation vehicles, their requirements, challenges, advantages, and problems. Regarding the Society of Automotive Engineers (SAE) levels (1–5), we first define the crucial contexts for next-generation vehicles. We then discuss existing ADAS, their abilities, and the available platforms they run on, from past to present. Next, we introduce a novel vehicle intelligence (VI) architecture consisting of ADAS modules and VI services which would pave the way for fully autonomous vehicles, addressing not only the driving task but also new human-centric demands such as entertainment and the comfort level of the journey. The proposed conceptual design is built on sensors, vehicular ad hoc networks (VANETs), and big data. Afterward, we describe how current ADAS applications would transform on the way toward SAE Level 5 cars. We finally discuss the open issues for next-generation vehicles.
Analysing user physiological responses for affective video summarisation. Video summarisation techniques aim to abstract the most significant content from a video stream. This is typically achieved by processing low-level image, audio and text features which are still quite disparate from the high-level semantics that end users identify with (the ‘semantic gap’). Physiological responses are potentially rich indicators of memorable or emotionally engaging video content for a given user. Consequently, we investigate whether they may serve as a suitable basis for a video summarisation technique by analysing a range of user physiological response measures, specifically electro-dermal response (EDR), respiration amplitude (RA), respiration rate (RR), blood volume pulse (BVP) and heart rate (HR), in response to a range of video content in a variety of genres including horror, comedy, drama, sci-fi and action. We present an analysis framework for processing the user responses to specific sub-segments within a video stream based on percent rank value normalisation. The application of the analysis framework reveals that users respond significantly to the most entertaining video sub-segments in a range of content domains. Specifically, horror content seems to elicit significant EDR, RA, RR and BVP responses, and comedy content elicits comparatively lower levels of EDR, but does seem to elicit significant RA, RR, BVP and HR responses. Drama content seems to elicit less significant physiological responses in general, and both sci-fi and action content seem to elicit significant EDR responses. We discuss the implications this may have for future affective video summarisation approaches.
On the roles of eye gaze and head dynamics in predicting driver's intent to change lanes Driver behavioral cues may present a rich source of information and feedback for future intelligent advanced driver-assistance systems (ADASs). With the design of a simple and robust ADAS in mind, we are interested in determining the most important driver cues for distinguishing driver intent. Eye gaze may provide a more accurate proxy than head movement for determining driver attention, whereas the measurement of head motion is less cumbersome and more reliable in harsh driving conditions. We use a lane-change intent-prediction system (McCall et al., 2007) to determine the relative usefulness of each cue for determining intent. Various combinations of input data are presented to a discriminative classifier, which is trained to output a prediction of probable lane-change maneuver at a particular point in the future. Quantitative results from a naturalistic driving study are presented and show that head motion, when combined with lane position and vehicle dynamics, is a reliable cue for lane-change intent prediction. The addition of eye gaze does not improve performance as much as simpler head dynamics cues. The advantage of head data over eye data is shown to be statistically significant (p
Detection of Driver Fatigue Caused by Sleep Deprivation This paper aims to provide reliable indications of driver drowsiness based on the characteristics of driver-vehicle interaction. A test bed was built under a simulated driving environment, and a total of 12 subjects participated in two experiment sessions requiring different levels of sleep (partial sleep-deprivation versus no sleep-deprivation) before the experiment. The performance of the subjects was analyzed in a series of stimulus-response and routine driving tasks, which revealed the performance differences of drivers under different sleep-deprivation levels. The experiments further demonstrated that sleep deprivation had greater effect on rule-based than on skill-based cognitive functions: when drivers were sleep-deprived, their performance of responding to unexpected disturbances degraded, while they were robust enough to continue the routine driving tasks such as lane tracking, vehicle following, and lane changing. In addition, we presented both qualitative and quantitative guidelines for designing drowsy-driver detection systems in a probabilistic framework based on the paradigm of Bayesian networks. Temporal aspects of drowsiness and individual differences of subjects were addressed in the framework.
Online Prediction of Driver Distraction Based on Brain Activity Patterns This paper presents a new computational framework for early detection of driver distractions (map viewing) using brain activity measured by electroencephalographic (EEG) signals. Compared with most studies in the literature, which are mainly focused on the classification of distracted and nondistracted periods, this study proposes a new framework to prospectively predict the start and end of a distraction period, defined by map viewing. The proposed prediction algorithm was tested on a data set of continuous EEG signals recorded from 24 subjects. During the EEG recordings, the subjects were asked to drive from an initial position to a destination using a city map in a simulated driving environment. The overall accuracy values for the prediction of the start and the end of map viewing were 81% and 70%, respectively. The experimental results demonstrated that the proposed algorithm can predict the start and end of map viewing with relatively high accuracy and can be generalized to individual subjects. The outcome of this study has a high potential to improve the design of future intelligent navigation systems. Prediction of the start of map viewing can be used to provide route information based on a driver's needs and consequently avoid map-viewing activities. Prediction of the end of map viewing can be used to provide warnings for potential long map-viewing durations. Further development of the proposed framework and its applications in driver-distraction predictions are also discussed.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
3D separable convolutional neural network for dynamic hand gesture recognition. The frame-difference method is used to pre-process the input in order to filter out the background. A 3D separable CNN is proposed for dynamic gesture recognition, in which the standard 3D convolution is decomposed into two processes: 3D depth-wise and 3D point-wise convolution. By applying skip connections and a layer-wise learning rate, the undesirable gradient dispersion caused by the separation operation is resolved and the performance of the network is improved. A dynamic hand gesture library is built with HoloLens.
Deep convolutional neural network-based Bernoulli heatmap for head pose estimation Head pose estimation is a crucial problem for many tasks, such as driver attention, fatigue detection, and human behaviour analysis. It is well known that neural networks are better at handling classification problems than regression problems. Letting the network output the angle values directly is an extremely nonlinear process to optimize, and the weight constraint imposed by the loss function is relatively weak. This paper proposes a novel Bernoulli heatmap for head pose estimation from a single RGB image. Our method can localize the head region while estimating the head angles. The Bernoulli heatmap makes it possible to construct fully convolutional neural networks without fully connected layers and provides a new idea for the output form of head pose estimation. A deep convolutional neural network (CNN) structure with multiscale representations is adopted to maintain high-resolution and low-resolution information in parallel. This kind of structure can maintain rich, high-resolution representations. In addition, channel-wise fusion is adopted to make the fusion weights learnable instead of using simple addition with equal weights. As a result, the estimation is spatially more precise and potentially more accurate. The effectiveness of the proposed method is empirically demonstrated by comparing it with other state-of-the-art methods on public datasets.
Reinforcement learning based data fusion method for multi-sensors In order to improve detection system robustness and reliability, multi-sensor fusion is used in modern air combat. In this paper, a data fusion method based on reinforcement learning is developed for multiple sensors. Initially, cubic B-spline interpolation is used to solve the time-alignment problem of multi-source data. Then, the reinforcement learning based data fusion (RLBDF) method is proposed to obtain the fusion results. When prior knowledge of the target is available, fusion accuracy is reinforced using the error between the fused value and the actual value. When prior knowledge cannot be obtained, the Fisher information is used as the reward instead. Simulation results verify that the developed method is feasible and effective for multi-sensor data fusion in air combat.
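As an illustration of the time-alignment step mentioned above, the sketch below resamples two asynchronously sampled sensor streams onto a common time grid using SciPy's cubic spline interpolation; the signals, timestamps and the simple averaging fusion at the end are synthetic placeholders, not the paper's RLBDF update.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Two sensors observing the same quantity at different, irregular sampling times (synthetic).
rng = np.random.default_rng(0)
t1 = np.cumsum(rng.uniform(0.1, 0.3, 40));   x1 = np.sin(t1) + 0.05 * rng.standard_normal(40)
t2 = np.cumsum(rng.uniform(0.05, 0.25, 55)); x2 = np.sin(t2) + 0.08 * rng.standard_normal(55)

# Fit a cubic spline to each stream and evaluate both on a common time grid (time alignment).
t_common = np.linspace(max(t1[0], t2[0]), min(t1[-1], t2[-1]), 200)
x1_aligned = CubicSpline(t1, x1)(t_common)
x2_aligned = CubicSpline(t2, x2)(t_common)

# A naive fused estimate: simple average of the time-aligned streams.
fused = 0.5 * (x1_aligned + x2_aligned)
print(fused[:5])
```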
Mobile Edge Computing Enabled 5G Health Monitoring for Internet of Medical Things: A Decentralized Game Theoretic Approach The prompt evolution of Internet of Medical Things (IoMT) promotes pervasive in-home health monitoring networks. However, excessive requirements of patients result in insufficient spectrum resources and communication overload. Mobile Edge Computing (MEC) enabled 5G health monitoring is conceived as a favorable paradigm to tackle such an obstacle. In this paper, we construct a cost-efficient in-home health monitoring system for IoMT by dividing it into two sub-networks, i.e., intra-Wireless Body Area Networks (WBANs) and beyond-WBANs. Highlighting the characteristics of IoMT, the cost of patients depends on medical criticality, Age of Information (AoI) and energy consumption. For intra-WBANs, a cooperative game is formulated to allocate the wireless channel resources. While for beyond-WBANs, considering the individual rationality and potential selfishness, a decentralized non-cooperative game is proposed to minimize the system-wide cost in IoMT. We prove that the proposed algorithm can reach a Nash equilibrium. In addition, the upper bound of the algorithm time complexity and the number of patients benefiting from MEC is theoretically derived. Performance evaluations demonstrate the effectiveness of our proposed algorithm with respect to the system-wide cost and the number of patients benefiting from MEC.
Artificial fish swarm algorithm: a survey of the state-of-the-art, hybridization, combinatorial and indicative applications AFSA (artificial fish-swarm algorithm) is one of the best optimization methods among the swarm intelligence algorithms. This algorithm is inspired by the collective movement of fish and their various social behaviors. Based on a series of instinctive behaviors, the fish always try to maintain their colonies and accordingly demonstrate intelligent behaviors. Searching for food, immigration and dealing with dangers all happen in a social form, and interactions between all fish in a group result in an intelligent social behavior. This algorithm has many advantages, including high convergence speed, flexibility, fault tolerance and high accuracy. This paper is a review of the AFSA algorithm and describes the evolution of this algorithm along with all improvements, its combination with various methods, as well as its applications. There are many optimization methods which have an affinity with this method, and combining them with AFSA can improve its performance. Its disadvantages include high time complexity, lack of balance between global and local search, and failure to benefit from the experiences of group members for the next movements.
Short-Term Traffic Flow Forecasting: An Experimental Comparison of Time-Series Analysis and Supervised Learning The literature on short-term traffic flow forecasting has undergone great development recently. Many works, describing a wide variety of different approaches, which very often share similar features and ideas, have been published. However, publications presenting new prediction algorithms usually employ different settings, data sets, and performance measurements, making it difficult to infer a clear picture of the advantages and limitations of each model. The aim of this paper is twofold. First, we review existing approaches to short-term traffic flow forecasting methods under the common view of probabilistic graphical models, presenting an extensive experimental comparison, which proposes a common baseline for their performance analysis and provides the infrastructure to operate on a publicly available data set. Second, we present two new support vector regression models, which are specifically devised to benefit from typical traffic flow seasonality and are shown to represent an interesting compromise between prediction accuracy and computational efficiency. The SARIMA model coupled with a Kalman filter is the most accurate model; however, the proposed seasonal support vector regressor turns out to be highly competitive when performing forecasts during the most congested periods.
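To make the supervised-learning side concrete, here is a minimal scikit-learn sketch that frames short-term flow forecasting as regression on lagged observations, including a seasonal lag; the synthetic series, lag choices and SVR hyperparameters are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic "traffic flow" with a daily (period-96) seasonal pattern, 15-minute samples.
rng = np.random.default_rng(0)
t = np.arange(96 * 30)
flow = 500 + 300 * np.sin(2 * np.pi * t / 96) + 30 * rng.standard_normal(t.size)

# Build lagged features: the two most recent samples plus the same time slot one day earlier.
lags = [1, 2, 96]
X = np.column_stack([flow[max(lags) - lag: -lag] for lag in lags])
y = flow[max(lags):]

split = int(0.8 * len(y))
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=5.0))
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print(f"test MAE: {np.mean(np.abs(pred - y[split:])):.1f} vehicles/interval")
```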
TSCA: A Temporal-Spatial Real-Time Charging Scheduling Algorithm for On-Demand Architecture in Wireless Rechargeable Sensor Networks. The collaborative charging issue in Wireless Rechargeable Sensor Networks (WRSNs) is a popular research problem. With the help of wireless power transfer technology, electrical energy can be transferred from wireless charging vehicles (WCVs) to sensors, providing a new paradigm to prolong the network lifetime. Existing techniques for collaborative charging usually take a periodic and deterministic approach, but neglect the influence of non-deterministic factors such as topological changes and node failures, making them unsuitable for large-scale WRSNs. In this paper, we develop a temporal-spatial charging scheduling algorithm, namely TSCA, for the on-demand charging architecture. We aim to minimize the number of dead nodes while maximizing energy efficiency to prolong the network lifetime. First, after gathering charging requests, a WCV computes a feasible movement solution. A basic path planning algorithm is then introduced to adjust the charging order for better efficiency. Furthermore, optimizations are made at a global level. Then, a node deletion algorithm is developed to remove low-efficiency charging nodes. Lastly, a node insertion algorithm is executed to avoid the death of abandoned nodes. Extensive simulations show that, compared with state-of-the-art charging scheduling algorithms, our scheme achieves promising performance in charging throughput, charging efficiency, and other performance metrics.
A novel adaptive dynamic programming based on tracking error for nonlinear discrete-time systems In this paper, to eliminate the tracking error when using adaptive dynamic programming (ADP) algorithms, a novel formulation of the value function is presented for the optimal tracking problem (TP) of nonlinear discrete-time systems. Unlike existing ADP methods, this formulation introduces the control input into the tracking error and directly ignores the quadratic form of the control input, which makes the boundedness and convergence of the value function independent of the discount factor. Based on the proposed value function, the optimal control policy can be deduced without considering the reference control input. Value iteration (VI) and policy iteration (PI) methods are applied to prove the optimality of the obtained control policy, and the monotonicity and convergence of the iterative value function are derived. Simulation examples realized with neural networks and the actor–critic structure are provided to verify the effectiveness of the proposed ADP algorithm.
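The paper's value-function formulation is specific to tracking, but the underlying value-iteration idea can be illustrated on a tiny discrete example. The sketch below runs classical value iteration on a small synthetic MDP purely to show the iterative Bellman update the abstract refers to; it is not the paper's discount-independent formulation.

```python
import numpy as np

# A tiny synthetic MDP: n states, m actions, random transition kernel and stage costs.
rng = np.random.default_rng(1)
n_states, n_actions, gamma = 5, 2, 0.95
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)            # row-stochastic transitions P[a, s, s']
cost = rng.random((n_actions, n_states))     # stage cost c(s, a)

# Value iteration: V_{k+1}(s) = min_a [ c(s,a) + gamma * sum_s' P(s'|s,a) V_k(s') ].
V = np.zeros(n_states)
for _ in range(500):
    Q = cost + gamma * P @ V                 # Q[a, s] for every action/state pair
    V_new = Q.min(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
policy = Q.argmin(axis=0)                    # greedy policy w.r.t. the converged values
print("V* =", np.round(V, 3), "policy =", policy)
```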
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0
On Theoretical Modeling of Sensor Cloud: A Paradigm Shift From Wireless Sensor Network. This paper focuses on the theoretical modeling of sensor cloud, which is one of the first attempts in this direction. We endeavor to theoretically characterize virtualization, which is a fundamental mechanism for operations within the sensor-cloud architecture. Existing related research works on sensor cloud have primarily focused on the ideology and the challenges that wireless sensor network (WS...
An Energy-Balanced Heuristic for Mobile Sink Scheduling in Hybrid WSNs. Wireless sensor networks (WSNs) are integrated as a pillar of collaborative Internet of Things (IoT) technologies for the creation of pervasive smart environments. Generally, IoT end nodes (or WSN sensors) can be mobile or static. In this kind of hybrid WSNs, mobile sinks move to predetermined sink locations to gather data sensed by static sensors. Scheduling mobile sinks energy-efficiently while ...
An energy-efficient path determination strategy for mobile data collectors in wireless sensor network. In wireless sensor networks, introduction of mobility has been considered to be a good strategy to greatly reduce the energy dissipation of the static sensor nodes. This task is achieved by considering the path in which the mobile data collectors move to collect data from the sensors. In this work a data gathering approach is proposed in which some mobile collectors visit only certain sojourn points (SPs) or data collection points in place of all sensor nodes. The mobile collectors start out on their journey after gathering information about the network from the sink, gather data from the sensors and transfer the data to the sink. To address this problem, an algorithm named Mobile Collector Path Planning (MCPP) is proposed. MCPP schema is validated via computer simulation considering both obstacle free and obstacle-resisting network and based on metrics like energy consumption by the static sensor nodes and network life time. The simulation results show a reduction of about 12% in energy consumption and 15% improvement in network lifetime as compared with existing algorithms.
Big Data Cleaning Based on Mobile Edge Computing in Industrial Sensor-Cloud. With the advent of 5G, the industrial Internet of Things has developed rapidly. The industrial sensor-cloud system (SCS) has also received widespread attention. In the future, a large number of integrated sensors that simultaneously collect multifeature data will be added to industrial SCS. However, the collected big data are not trustworthy due to the harsh environment of the sensor. If the data ...
Latency-Aware Path Planning for Disconnected Sensor Networks With Mobile Sinks Data collection with mobile elements can greatly improve the load-balance degree and accordingly prolong the longevity of wireless sensor networks (WSNs). In this pattern, a mobile sink generally traverses the sensing field periodically and collects data from multiple Anchor Points (APs) which constitute a traveling tour. However, long-distance traveling easily causes large data-delivery latency. In this paper, we propose a path planning strategy for mobile data collection, called Dual Approximation of Anchor Points (DAAP), which aims to achieve full connectivity for partitioned WSNs and construct a shorter path. DAAP is novel in two aspects. On the one hand, it is especially designed for disconnected WSNs where sensor nodes are scattered in multiple isolated segments. On the other hand, it has the lowest computational complexity compared with other existing works. DAAP is formulated as a location approximation problem and then solved by a greedy location selection mechanism, which follows two corresponding principles. First, the APs of periphery segments must be as near the network center as possible. Second, the APs of other isolated segments must be as close to the current path as possible. Finally, experimental results confirm that DAAP outperforms existing works in delay-tough applications.
Survey of Fog Computing: Fundamental, Network Applications, and Research Challenges. Fog computing is an emerging paradigm that extends computation, communication, and storage facilities toward the edge of a network. Compared to traditional cloud computing, fog computing can support delay-sensitive service requests from end-users (EUs) with reduced energy consumption and low traffic congestion. Basically, fog networks are viewed as offloading to core computation and storage. Fog n...
Completely derandomized self-adaptation in evolution strategies. This paper puts forward two useful methods for self-adaptation of the mutation distribution - the concepts of derandomization and cumulation. Principle shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary, normal mutation distributions is equivalent to applying a general, linear problem encoding. The underlying objective of mutative strategy parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigorously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation - utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions usually a speed up factor of several orders of magnitude is observed. On moderately mis-scaled functions a speed up factor of three to ten can be expected.
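For readers who want to try the strategy described above, the widely used `cma` Python package (assumed to be installed, e.g. via pip) exposes covariance matrix adaptation through an ask/tell interface; below it is applied to the badly scaled, non-separable Rosenbrock function, in the spirit of the test functions mentioned in the abstract.

```python
import cma  # reference CMA-ES implementation (assumed installed: pip install cma)

def rosenbrock(x):
    """Badly scaled, non-separable benchmark where isotropic mutation struggles."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

# Start far from the optimum with a unit initial step size.
es = cma.CMAEvolutionStrategy([-2.0] * 8, 1.0)
while not es.stop():
    candidates = es.ask()                                    # sample offspring from N(m, sigma^2 C)
    es.tell(candidates, [rosenbrock(c) for c in candidates]) # rank-based update of m, sigma, C
print("best f value:", es.result.fbest)
```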
An online mechanism for multi-unit demand and its application to plug-in hybrid electric vehicle charging We develop an online mechanism for the allocation of an expiring resource to a dynamic agent population. Each agent has a non-increasing marginal valuation function for the resource, and an upper limit on the number of units that can be allocated in any period. We propose two versions on a truthful allocation mechanism. Each modifies the decisions of a greedy online assignment algorithm by sometimes cancelling an allocation of resources. One version makes this modification immediately upon an allocation decision while a second waits until the point at which an agent departs the market. Adopting a prior-free framework, we show that the second approach has better worst-case allocative efficiency and is more scalable. On the other hand, the first approach (with immediate cancellation) may be easier in practice because it does not need to reclaim units previously allocated. We consider an application to recharging plug-in hybrid electric vehicles (PHEVs). Using data from a real-world trial of PHEVs in the UK, we demonstrate higher system performance than a fixed price system, performance comparable with a standard, but non-truthful scheduling heuristic, and the ability to support 50% more vehicles at the same fuel cost than a simple randomized policy.
Blockchain Meets IoT: An Architecture for Scalable Access Management in IoT. The Internet of Things (IoT) is stepping out of its infancy into full maturity and establishing itself as a part of the future Internet. One of the technical challenges of having billions of devices deployed worldwide is the ability to manage them. Although access management technologies exist in IoT, they are based on centralized models which introduce a new variety of technical limitations to ma...
Multi-column Deep Neural Networks for Image Classification Traditional methods of computer vision and machine learning cannot match human performance on tasks such as the recognition of handwritten digits or traffic signs. Our biologically plausible deep artificial neural network architectures can. Small (often minimal) receptive fields of convolutional winner-take-all neurons yield large network depth, resulting in roughly as many sparsely connected neural layers as found in mammals between retina and visual cortex. Only winner neurons are trained. Several deep neural columns become experts on inputs preprocessed in different ways; their predictions are averaged. Graphics cards allow for fast training. On the very competitive MNIST handwriting benchmark, our method is the first to achieve near-human performance. On a traffic sign recognition benchmark it outperforms humans by a factor of two. We also improve the state-of-the-art on a plethora of common image classification benchmarks.
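The core ensembling idea, several independently trained columns whose class probabilities are averaged, can be expressed very compactly; the PyTorch sketch below is a schematic illustration with a toy column architecture, not the authors' GPU implementation or their preprocessing pipeline.

```python
import torch
import torch.nn as nn

class Column(nn.Module):
    """One small convolutional 'column' (toy architecture for illustration)."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

class MultiColumnNet(nn.Module):
    """Averages the class probabilities of several columns; in the paper each column
    is trained on a differently preprocessed version of the input."""
    def __init__(self, n_columns=5, n_classes=10):
        super().__init__()
        self.columns = nn.ModuleList(Column(n_classes) for _ in range(n_columns))

    def forward(self, x):
        probs = [torch.softmax(col(x), dim=1) for col in self.columns]
        return torch.stack(probs).mean(dim=0)   # average prediction over columns

model = MultiColumnNet()
dummy = torch.randn(4, 1, 28, 28)               # batch of 28x28 grayscale digits
print(model(dummy).shape)                        # -> torch.Size([4, 10])
```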
A novel full structure optimization algorithm for radial basis probabilistic neural networks. In this paper, a novel full structure optimization algorithm for radial basis probabilistic neural networks (RBPNN) is proposed. Firstly, a minimum volume covering hyperspheres (MVCH) algorithm is proposed to heuristically select the initial hidden layer centers of the RBPNN, and then the recursive orthogonal least square (ROLS) algorithm combined with the particle swarm optimization (PSO) algorithm is adopted to further optimize the initial structure of the RBPNN. Finally, the effectiveness and efficiency of our proposed algorithm are evaluated through a plant species identification task involving 50 plant species.
Segmentation-Based Image Copy-Move Forgery Detection Scheme In this paper, we propose a scheme to detect the copy-move forgery in an image, mainly by extracting the keypoints for comparison. The main difference to the traditional methods is that the proposed scheme first segments the test image into semantically independent patches prior to keypoint extraction. As a result, the copy-move regions can be detected by matching between these patches. The matching process consists of two stages. In the first stage, we find the suspicious pairs of patches that may contain copy-move forgery regions, and we roughly estimate an affine transform matrix. In the second stage, an Expectation-Maximization-based algorithm is designed to refine the estimated matrix and to confirm the existence of copy-move forgery. Experimental results prove the good performance of the proposed scheme via comparing it with the state-of-the-art schemes on the public databases.
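The patch-wise matching stage can be approximated, in a much simplified form, by matching an image's keypoints against themselves and estimating an affine transform with RANSAC. The OpenCV sketch below is a simplified stand-in for the paper's segmentation-based two-stage pipeline; the file name, ratio threshold and minimum spatial offset are placeholder assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("suspect.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
sift = cv2.SIFT_create()
kp, des = sift.detectAndCompute(img, None)

# Match the image's descriptors against themselves; the best hit is the keypoint itself,
# so compare the second and third nearest neighbours instead.
matcher = cv2.BFMatcher(cv2.NORM_L2)
src, dst = [], []
for m_list in matcher.knnMatch(des, des, k=3):
    if len(m_list) < 3:
        continue
    m, n = m_list[1], m_list[2]
    offset = np.linalg.norm(np.array(kp[m.queryIdx].pt) - np.array(kp[m.trainIdx].pt))
    if m.distance < 0.5 * n.distance and offset > 20:       # distinct, spatially separated match
        src.append(kp[m.queryIdx].pt)
        dst.append(kp[m.trainIdx].pt)

if len(src) >= 3:
    A, inliers = cv2.estimateAffine2D(np.float32(src), np.float32(dst),
                                      method=cv2.RANSAC, ransacReprojThreshold=3.0)
    print("candidate copy-move affine transform:\n", A)
    print("supported by", int(inliers.sum()), "inlier matches")
else:
    print("no evidence of copy-move duplication")
```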
Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network. In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (including data from 63 patients with 34,281 events) and testing (including data from 19 patients with 8571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F-score of 0.99 were attained on the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
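A schematic Keras version of the kind of six-convolution-layer 1D CNN described above; the input length, filter counts, kernel size and dropout rate are illustrative guesses rather than the exact architecture reported in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_osa_cnn(input_len=6000, n_classes=2):
    """1D CNN for per-event apnea detection from a single-lead ECG segment (sketch)."""
    model = models.Sequential([tf.keras.Input(shape=(input_len, 1))])
    for f in [16, 16, 32, 32, 64, 64]:          # six convolution blocks
        model.add(layers.Conv1D(f, kernel_size=7, padding="same", activation="relu"))
        model.add(layers.MaxPooling1D(pool_size=2))
        model.add(layers.Dropout(0.2))
    model.add(layers.GlobalAveragePooling1D())
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_osa_cnn()
model.summary()
```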
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies focused on the structure design and assistance force optimization of the soft LLEs, rarely work has been conducted on the hardware circuits design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and introduce its hardware circuits design. A soft LLE for hip flexion assistance and a hardware circuits system with scalability were proposed. To assess the efficacy of the soft LLE, the experimental tests that evaluate the sensor data acquisition, force tracking performance, lower limb muscle activity and metabolic cost were conducted. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in exoskeleton on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and the soft LLE is able to improve walking efficiency of wearers.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0
Genetic Optimization Of Radial Basis Probabilistic Neural Networks This paper discusses using genetic algorithms (GA) to optimize the structure of radial basis probabilistic neural networks (RBPNN), including how to select hidden centers of the first hidden layer and how to determine the controlling parameter of the Gaussian kernel functions. In the process of constructing the genetic algorithm, a novel encoding method is proposed for optimizing the RBPNN structure. This encoding method can not only make the selected hidden centers sufficiently reflect the key distribution characteristics of the training sample set and reduce the number of hidden centers as far as possible, but also simultaneously determine the optimum controlling parameters of the Gaussian kernel functions matching the selected hidden centers. Additionally, we also propose a new fitness function so as to make the designed RBPNN as simple as possible in its network structure without losing network performance. Finally, we take two benchmark problems, discriminating the two-spiral data and classifying the iris data, as examples to test and evaluate the designed GA. The experimental results illustrate that our designed GA can significantly reduce the required number of hidden centers compared with the recursive orthogonal least square algorithm (ROLSA) and the modified K-means algorithm (MKA). In particular, statistical experiments prove that the RBPNN optimized by our designed GA still has better generalization performance than those obtained by the ROLSA and the MKA, in spite of the network scale having been greatly reduced. Additionally, our experimental results also demonstrate that our designed GA is also suitable for optimizing radial basis function neural networks (RBFNN).
Geometric attacks on image watermarking systems Synchronization errors can lead to significant performance loss in image watermarking methods, as the geometric attacks in the Stirmark benchmark software show. The authors describe the most common types of geometric attacks and survey proposed solutions.
A novel data hiding for color images based on pixel value difference and modulus function This paper proposes a novel data hiding method using pixel-value difference and a modulus function for color images, with a large embedding capacity (hiding at least 810,757 bits in a 512 × 512 host image) and high visual quality of the cover image. The proposed method fully takes into account the correlation among the R, G and B planes of a color image. The amount of information embedded in the R plane and the B plane is determined by the difference between the corresponding pixel value of the G plane and the median of the G pixel values in each pixel block. Furthermore, two sophisticated pixel-value adjustment processes are provided to maintain division consistency and to solve underflow and overflow problems. Most importantly, the secret data can be completely extracted, as established by mathematical proof.
High payload image steganography with reduced distortion using octonary pixel pairing scheme The crucial challenge that decides the success of any steganographic algorithm lies in simultaneously achieving the three contradicting objectives namely--higher payload capacity, with commendable perceptual quality and high statistical un-detectability. This work is motivated by the interest in developing such a steganographic scheme, which is aimed for establishing secure image covert channel in spatial domain using Octonary PVD scheme. The goals of this paper are to be realized through: (1) pairing a pixel with all of its neighbors in all the eight directions, to offer larger embedding capacity (2) the decision of the number of bits to be embedded in each pixel based on the nature of its region and not done universally same for all the pixels, to enhance the perceptual quality of the images (3) the re-adjustment phase, which sustains any modified pixel in the same level in the stego-image also, where the difference between a pixel and its neighbor in the cover image belongs to, for imparting the statistical un-detectability factor. An extensive experimental evaluation to compare the performance of the proposed system vs. other existing systems was conducted, on a database containing 3338 natural images, against two specific and four universal steganalyzers. The observations reported that the proposed scheme is a state-of-the-art model, offering high embedding capacity while concurrently sustaining the picture quality and defeating the statistical detection through steganalyzers.
Improved diagonal queue medical image steganography using Chaos theory, LFSR, and Rabin cryptosystem. In this article, we have proposed an improved diagonal queue medical image steganography for patient secret medical data transmission using chaotic standard map, linear feedback shift register, and Rabin cryptosystem, for improvement of previous technique (Jain and Lenka in Springer Brain Inform 3:39-51, 2016). The proposed algorithm comprises four stages, generation of pseudo-random sequences (pseudo-random sequences are generated by linear feedback shift register and standard chaotic map), permutation and XORing using pseudo-random sequences, encryption using Rabin cryptosystem, and steganography using the improved diagonal queues. Security analysis has been carried out. Performance analysis is observed using MSE, PSNR, maximum embedding capacity, as well as by histogram analysis between various Brain disease stego and cover images.
Dual hybrid medical watermarking using Walsh-Slantlet transform A hybrid robust lossless data hiding algorithm is proposed in this paper by using Singular Value Decomposition (SVD) with the Fast Walsh Transform (FWT) and the Slantlet Transform (SLT) for image authentication. These transforms possess good energy compaction with distinct filtering, which leads to a higher embedding capacity, from 1.8 bits per pixel (bpp) up to 7.5 bpp. In the proposed algorithm, an Artificial Neural Network (ANN) is applied for region of interest (ROI) detection and two different watermarks are created. Embedding is done after applying the FWT, by changing the SVD coefficients and the highest coefficients of the SLT subbands. In dual hybrid embedding, the first watermark is the ROI and the other watermark consists of three parts, i.e., the patient's personal details, a unique biometric ID and the key for encryption. The proposed algorithm is compared with existing watermarking techniques to analyze its performance. Experiments are simulated on the proposed algorithm by casting numerous attacks to test visibility, robustness, security, authenticity, integrity and reversibility. The results show that the watermarked image has improved imperceptibility with a high payload, low time complexity and a high Peak Signal to Noise Ratio (PSNR) compared with existing approaches.
Multiscale Transform-Based Secured Joint Efficient Medical Image Compression-Encryption Using Symmetric Key Cryptography And Ebcot Encoding Technique Due to the huge advancement in technology, digitizing the multimedia content like text, images and videos has become easier. Everyday huge amounts of multimedia content are shared through the social networks using internet. Sometimes this multimedia content can be hacked by the hackers. This will lead to the misuse of the data. On the other hand, the medical content needs high security and privacy. Motivated by this, joint secured medical image compression-encryption mechanisms are proposed in this paper using multiscale transforms and symmetric key encryption techniques. The multiscale transforms involved in this paper are wavelet transform, bandelet transform and curvelet transform. The encryption techniques involved in this paper are international data encryption algorithm (IDEA), Rivest Cipher (RC5) and Blowfish. The encoding technique used in this paper is embedded block coding with truncation (EBCOT). Experimental results are done for the proposed works and evaluated by using various parameters like Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), Image Quality Index (IQI) and Structural Similarity Index (SSIM), Average Difference (AD), Normalized Cross-correlation (NK), Structural Content (SC), Maximum difference (MD), Laplacian Mean Squared Error (LMSE) and Normalized Absolute Error (NAE). It is justified that the proposed approaches in this paper yield good results.
A New Efficient Medical Image Cipher Based On Hybrid Chaotic Map And Dna Code In this paper, we propose a novel medical image encryption algorithm based on a hybrid model of deoxyribonucleic acid (DNA) masking, a Secure Hash Algorithm SHA-2 and a new hybrid chaotic map. Our study uses DNA sequences and operations and the chaotic hybrid map to strengthen the cryptosystem. The significant advantages of this approach consist in improving the information entropy which is the most important feature of randomness, resisting against various typical attacks and getting good experimental results. The theoretical analysis and experimental results show that the algorithm improves the encoding efficiency, enhances the security of the ciphertext, has a large key space and a high key sensitivity, and is able to resist against the statistical and exhaustive attacks.
On Multi-Access Edge Computing: A Survey of the Emerging 5G Network Edge Cloud Architecture and Orchestration. Multi-access edge computing (MEC) is an emerging ecosystem, which aims at converging telecommunication and IT services, providing a cloud computing platform at the edge of the radio access network. MEC offers storage and computational resources at the edge, reducing latency for mobile end users and utilizing more efficiently the mobile backhaul and core networks. This paper introduces a survey on ...
An effective implementation of the Lin–Kernighan traveling salesman heuristic This paper describes an implementation of the Lin–Kernighan heuristic, one of the most successful methods for generating optimal or near-optimal solutions for the symmetric traveling salesman problem (TSP). Computational tests show that the implementation is highly effective. It has found optimal solutions for all solved problem instances we have been able to obtain, including a 13,509-city problem (the largest non-trivial problem instance solved to optimality today).
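Lin–Kernighan itself performs sequential edge exchanges of variable depth; as a much simpler relative that conveys the flavour of local-search tour improvement, the sketch below implements plain 2-opt on a random Euclidean instance. It is not the Lin–Kernighan implementation described in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
cities = rng.random((60, 2))                       # random Euclidean TSP instance
dist = np.linalg.norm(cities[:, None] - cities[None, :], axis=2)

def tour_length(tour):
    return dist[tour, np.roll(tour, -1)].sum()     # cyclic tour length

def two_opt(tour):
    """Repeatedly reverse segments while that shortens the tour (first-improvement)."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 2):
            for j in range(i + 1, len(tour) - 1):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[j + 1]
                # Gain of replacing edges (a,b) and (c,d) with (a,c) and (b,d).
                if dist[a, c] + dist[b, d] < dist[a, b] + dist[c, d] - 1e-12:
                    tour[i:j + 1] = tour[i:j + 1][::-1]
                    improved = True
    return tour

tour = list(range(len(cities)))
print("initial length:", round(tour_length(tour), 3))
tour = two_opt(tour)
print("2-opt length:  ", round(tour_length(tour), 3))
```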
Exoskeletons for human power augmentation The first load-bearing and energetically autonomous exoskeleton, called the Berkeley Lower Extremity Exoskeleton (BLEEX) walks at the average speed of two miles per hour while carrying 75 pounds of load. The project, funded in 2000 by the Defense Advanced Research Project Agency (DARPA) tackled four fundamental technologies: the exoskeleton architectural design, a control algorithm, a body LAN to host the control algorithm, and an on-board power unit to power the actuators, sensors and the computers. This article gives an overview of the BLEEX project.
Assist-As-Needed Training Paradigms For Robotic Rehabilitation Of Spinal Cord Injuries This paper introduces a new "assist-as-needed" (AAN) training paradigm for rehabilitation of spinal cord injuries via robotic training devices. In the pilot study reported in this paper, nine female adult Swiss-Webster mice were divided into three groups, each experiencing a different robotic training control strategy: a fixed training trajectory (Fixed Group, A), an AAN training method without interlimb coordination (Band Group, B), and an AAN training method with bilateral hindlimb coordination (Window Group, C). Fourteen days after complete transection at the mid-thoracic level, the mice were robotically trained to step in the presence of an acutely administered serotonin agonist, quipazine, for a period of six weeks. The mice that received AAN training (Groups B and C) show higher levels of recovery than Group A mice, as measured by the number, consistency, and periodicity of steps realized during testing sessions. Group C displays a higher incidence of alternating stepping than Group B. These results indicate that this training approach may be more effective than fixed trajectory paradigms in promoting robust post-injury stepping behavior. Furthermore, the constraint of interlimb coordination appears to be an important contribution to successful training.
An ID-Based Linearly Homomorphic Signature Scheme and Its Application in Blockchain. Identity-based cryptosystems mean that public keys can be directly derived from user identifiers, such as telephone numbers, email addresses, and social insurance number, and so on. So they can simplify key management procedures of certificate-based public key infrastructures and can be used to realize authentication in blockchain. Linearly homomorphic signature schemes allow to perform linear computations on authenticated data. And the correctness of the computation can be publicly verified. Although a series of homomorphic signature schemes have been designed recently, there are few homomorphic signature schemes designed in identity-based cryptography. In this paper, we construct a new ID-based linear homomorphic signature scheme, which avoids the shortcomings of the use of public-key certificates. The scheme is proved secure against existential forgery on adaptively chosen message and ID attack under the random oracle model. The ID-based linearly homomorphic signature schemes can be applied in e-business and cloud computing. Finally, we show how to apply it to realize authentication in blockchain.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0.05
0
0
0
0
0
0
Application of Information Processes Applicative Modelling to Virtual Machines Auto Configuration. The paper discusses the application of algorithm-based information process modelling and Q-learning to the task of virtual machine auto-configuration. The use of virtual machines is one of the most common solutions nowadays for almost every company. Although the use of virtual machines simplifies the configuration of the hardware landscape and allows decentralization of physical servers, it may cause a performance decrease. To overcome this issue, we propose a method of virtual machine auto-configuration. The method is based on the following premises: tracking the real business processes run by the virtual machines, using machine learning algorithms to investigate the optimal configuration, and configuring the virtual machines through an independent process.
Infrastructure as a Service and Cloud Technologies To choose the most appropriate cloud-computing model for your organization, you must analyze your IT infrastructure, usage, and needs. To help with this analysis, this article describes the current status of cloud computing.
Dynamic Management of Virtual Infrastructures Cloud infrastructures are becoming an appropriate solution to address the computational needs of scientific applications. However, the use of public or on-premises Infrastructure as a Service (IaaS) clouds requires users to have non-trivial system administration skills. Resource provisioning systems provide facilities to choose the most suitable Virtual Machine Images (VMI) and the basic configuration of multiple instances and subnetworks. Other tasks, such as the configuration of cluster services, computational frameworks or specific applications, are not trivial on the cloud, and normally users have to manually select the VMI that best fits, including undesired additional services and software packages. This paper presents a set of components that ease the access and usability of IaaS clouds by automating the VMI selection, deployment, configuration, software installation, monitoring and update of Virtual Appliances. It supports APIs from a large number of virtual platforms, making user applications cloud-agnostic. In addition, it integrates a contextualization system to enable the installation and configuration of all the applications the user requires, providing the user with a fully functional infrastructure. Therefore, golden VMIs and configuration recipes can be easily reused across different deployments. Moreover, the contextualization agent included in the framework supports horizontal elasticity (increasing/decreasing the number of resources) and vertical elasticity (increasing/decreasing resources within a running Virtual Machine) by properly reconfiguring the installed software, taking into account the configuration of the multiple resources running. This paves the way for automatic virtual infrastructure deployment, customization and elastic modification at runtime for IaaS clouds.
Virtual machine placement quality estimation in cloud infrastructures using integer linear programming This paper is devoted to the quality estimation of virtual machine (VM) placement in cloud infrastructures, i.e., to choose the best hosts for a given set of VMs. We focus on test generation and monitoring techniques for comparing the placement result of a given implementation with an optimal solution with respect to given criteria. We show how Integer Linear Programming problems can be formulated and utilized for deriving test suites and optimal solutions to provide verdicts concerning the quality of VM placement implementations; the quality is calculated as the distance from an optimal placement for a given criterion (or a set of criteria). The presented approach is generic and showcased on resource utilization, energy consumption, and resource over-commitment cost. Experiments performed with different VM placement algorithms (including the VM placement algorithms implemented in widely used platforms, such as OpenStack) exhibit the competence of such algorithms with respect to different criteria.
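As a rough illustration of the quality-estimation idea, the sketch below compares a first-fit placement heuristic against an optimal reference on a tiny made-up instance, using the number of active hosts as the criterion. The paper obtains the optimal reference via Integer Linear Programming; brute-force enumeration stands in for the ILP solver here, and all demands and capacities are hypothetical.

```python
# Toy quality estimation of a VM placement heuristic against an optimal reference.
# Brute-force enumeration replaces the ILP solver used in the paper; the instance
# is deliberately tiny and the numbers are invented.
from itertools import product

vm_cpu = [2, 3, 4, 1, 2]      # CPU demand of each VM (hypothetical)
host_cpu = [4, 4, 4, 4, 4]    # capacity of each candidate host (hypothetical)

def active_hosts(assign):
    """Number of hosts used, or None if some host exceeds its capacity."""
    load = [0] * len(host_cpu)
    for vm, h in enumerate(assign):
        load[h] += vm_cpu[vm]
    if any(l > c for l, c in zip(load, host_cpu)):
        return None
    return sum(1 for l in load if l > 0)

# Optimal placement by exhaustive enumeration (stand-in for the ILP).
best = min((a for a in product(range(len(host_cpu)), repeat=len(vm_cpu))
            if active_hosts(a) is not None), key=active_hosts)

def first_fit():
    """A simple heuristic playing the role of the implementation under test."""
    load, assign = [0] * len(host_cpu), []
    for vm, need in enumerate(vm_cpu):
        h = next(i for i, l in enumerate(load) if l + need <= host_cpu[i])
        load[h] += need
        assign.append(h)
    return assign

heuristic = first_fit()
print("optimal hosts:", active_hosts(best),
      "heuristic hosts:", active_hosts(heuristic),
      "quality gap:", active_hosts(heuristic) - active_hosts(best))
```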
Integrating security and privacy in software development As a consequence of factors such as the progress made by attackers, the release of new technologies, and the use of increasingly complex systems, threats to application security have been continuously evolving. Security of code and privacy of data must be implemented in both design and programming practice to face such scenarios. In this context, this paper proposes a software development approach, Privacy Oriented Software Development (POSD), that complements traditional development processes by integrating the activities needed for addressing security and privacy management in software systems. The approach is based on 5 key elements (Privacy by Design, Privacy Design Strategies, Privacy Pattern, Vulnerabilities, Context). The approach can be applied in two directions, forward and backward, for developing new software systems or re-engineering existing ones. This paper presents the POSD approach in the backward mode, together with an application in the context of an industrial project. Results show that POSD is able to discover software vulnerabilities, identify the remediation patterns needed for addressing them in the source code, and design the target architecture to be used for guiding privacy-oriented system re-engineering.
Image quality assessment: from error visibility to structural similarity. Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
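A minimal sketch of the structural-similarity computation is shown below, using the standard SSIM formula evaluated with global image statistics. The index proposed in the paper is computed over local sliding windows and then averaged; that windowing step is omitted here for brevity, and the test images are synthetic.

```python
# Global (single-window) SSIM between two grayscale arrays, following the
# standard formula; the published index averages this quantity over local
# sliding windows, which is omitted here for brevity.
import numpy as np

def ssim_global(x, y, data_range=255.0):
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(64, 64))
noisy = np.clip(img + rng.normal(0, 20, size=img.shape), 0, 255)
print(ssim_global(img, img), ssim_global(img, noisy))   # identical images score 1.0
```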
Vision meets robotics: The KITTI dataset We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS/IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.
A tutorial on support vector regression In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective.
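For a concrete starting point, the snippet below fits an epsilon-insensitive SVR to a noisy sine curve with scikit-learn. The kernel and hyperparameter values are arbitrary illustrative choices, not recommendations from the tutorial.

```python
# Minimal epsilon-SVR fit on a noisy sine curve using scikit-learn.
# Hyperparameters are arbitrary; this only illustrates the function-estimation
# setting surveyed in the tutorial above.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = np.sort(rng.uniform(0, 2 * np.pi, 80)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.1, X.shape[0])

model = SVR(kernel="rbf", C=10.0, epsilon=0.1)   # epsilon sets the insensitive tube width
model.fit(X, y)
print("support vectors:", len(model.support_), "train R^2:", round(model.score(X, y), 3))
```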
GameFlow: a model for evaluating player enjoyment in games Although player enjoyment is central to computer games, there is currently no accepted model of player enjoyment in games. There are many heuristics in the literature, based on elements such as the game interface, mechanics, gameplay, and narrative. However, there is a need to integrate these heuristics into a validated model that can be used to design, evaluate, and understand enjoyment in games. We have drawn together the various heuristics into a concise model of enjoyment in games that is structured by flow. Flow, a widely accepted model of enjoyment, includes eight elements that, we found, encompass the various heuristics from the literature. Our new model, GameFlow, consists of eight elements -- concentration, challenge, skills, control, clear goals, feedback, immersion, and social interaction. Each element includes a set of criteria for achieving enjoyment in games. An initial investigation and validation of the GameFlow model was carried out by conducting expert reviews of two real-time strategy games, one high-rating and one low-rating, using the GameFlow criteria. The result was a deeper understanding of enjoyment in real-time strategy games and the identification of the strengths and weaknesses of the GameFlow model as an evaluation tool. The GameFlow criteria were able to successfully distinguish between the high-rated and low-rated games and identify why one succeeded and the other failed. We concluded that the GameFlow model can be used in its current form to review games; further work will provide tools for designing and evaluating enjoyment in games.
Adapting visual category models to new domains Domain adaptation is an important emerging topic in computer vision. In this paper, we present one of the first studies of domain shift in the context of object recognition. We introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. While we focus our evaluation on object recognition tasks, the transform-based adaptation technique we develop is general and could be applied to nonimage data. Another contribution is a new multi-domain object database, freely available for download. We experimentally demonstrate the ability of our method to improve recognition on categories with few or no target domain labels and moderate to large changes in the imaging conditions.
A Web-Based Tool For Control Engineering Teaching In this article a new tool for control engineering teaching is presented. The tool was implemented using Java applets and is freely accessible through Web. It allows the analysis and simulation of linear control systems and was created to complement the theoretical lectures in basic control engineering courses. The article is not only centered in the description of the tool but also in the methodology to use it and its evaluation in an electrical engineering degree. Two practical problems are included in the manuscript to illustrate the use of the main functions implemented. The developed web-based tool can be accessed through the link http://www.controlweb.cyc.ull.es. (C) 2006 Wiley Periodicals, Inc.
Adaptive Consensus Control for a Class of Nonlinear Multiagent Time-Delay Systems Using Neural Networks Because of the complexity of consensus control of nonlinear multiagent systems with state time-delay, most previous works focused only on linear systems with input time-delay. An adaptive neural network (NN) consensus control method for a class of nonlinear multiagent systems with state time-delay is proposed in this paper. The approximation property of radial basis function neural networks (RBFNNs) is used to neutralize the uncertain nonlinear dynamics in agents. An appropriate Lyapunov-Krasovskii functional, which is obtained from the derivative of an appropriate Lyapunov function, is used to compensate for the uncertainties of unknown time delays. It is proved that our proposed approach guarantees convergence on the basis of Lyapunov stability theory. The simulation results of a nonlinear multiagent time-delay system and a multiple collaborative manipulators system show the effectiveness of the proposed consensus control algorithm.
5G Virtualized Multi-access Edge Computing Platform for IoT Applications. The next generation of fifth-generation (5G) networks, implemented using Virtualized Multi-access Edge Computing (vMEC), Network Function Virtualization (NFV) and Software Defined Networking (SDN) technologies, is a flexible and resilient network that supports various Internet of Things (IoT) devices. While NFV provides flexibility by allowing network functions to be dynamically deployed and inter-connected, vMEC provides intelligence at the edge of the mobile network, which reduces latency and increases the available capacity. Given the diverse development of networking applications, the proposed vMEC uses Container-based Virtualization Technology (CVT) as a gateway to IoT devices, whose flow-control mechanisms for scheduling and analysis effectively increase the application Quality of Service (QoS). In this work, the proposed IoT gateway is analyzed. The combined effect of simultaneously deploying Virtual Network Functions (VNFs) and vMEC applications on a single network infrastructure yields low latency, high bandwidth and the agility needed to connect devices at large scale. The proposed platform efficiently exploits resources from both edge computing and cloud computing, and lets IoT applications adapt to network conditions, reducing end-to-end network latency by an average of 30%.
Energy harvesting algorithm considering max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with poor energy always have a bad effect on the data rate or max flow. These nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we assume an energy harvesting WSN environment to investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy harvesting WSN environments, sensor nodes can obtain energy replenishment by using MCs or by collecting energy from nature by themselves. In our research, we use MCs to improve the energy of the sensor nodes by performing multiple rounds of unified scheduling, and finally achieve the purpose of increasing the max flow at the sinks. Firstly, we model this problem as a Linear Program (LP) to search for the max flow in a round of charging scheduling and prove that the problem is NP-hard. In order to solve the problem, we propose a heuristic approach: deploying MCs in units of paths with lowest-energy-node priority. To reduce the energy consumption of MCs and increase the charging efficiency, we also take the optimization of the MCs' moving distance into consideration. Finally, we extend the method to multiple rounds of scheduling, called BottleNeck. Simulation results show that BottleNeck performs well at increasing max flow.
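The max-flow sub-problem at the core of this scheduling idea can be illustrated with a toy network in which a relay node's outgoing capacity stands in for its remaining energy budget; recharging that bottleneck node raises the achievable flow at the sink. The topology and numbers below are invented for the example and do not come from the paper.

```python
# Toy max-flow view of a bottleneck sensor node: its outgoing edge capacity
# models the remaining energy budget, and "charging" it increases the flow.
# Topology and capacities are illustrative only.
import networkx as nx

def achievable_flow(relay_capacity):
    G = nx.DiGraph()
    G.add_edge("S", "src1", capacity=5)          # super-source feeding two sensors
    G.add_edge("S", "src2", capacity=4)
    G.add_edge("src1", "relay", capacity=5)
    G.add_edge("src2", "relay", capacity=4)
    G.add_edge("relay", "sink", capacity=relay_capacity)   # energy-limited bottleneck
    value, _ = nx.maximum_flow(G, "S", "sink")
    return value

print("before charging:", achievable_flow(3))    # relay limits the flow
print("after charging:", achievable_flow(9))     # mobile charger restores its budget
```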
1.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
0
Multiple switching-time-dependent discretized Lyapunov functions/functionals methods for stability analysis of switched time-delay stochastic systems. This paper presents novel approaches for stability analysis of switched linear time-delay stochastic systems under dwell time constraint. Instead of using comparison principle, piecewise switching-time-dependent discretized Lyapunov functions/functionals are introduced to analyze the stability of switched stochastic systems with constant or time-varying delays. These Lyapunov functions/functionals are decreasing during the dwell time and non-increasing at switching instants, which lead to two mode-dependent dwell-time-based delay-independent stability criteria for the switched systems without restricting the stability of the subsystems. Comparison and numerical examples are provided to show the efficiency of the proposed results.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
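The two ingredients of the metric, clipped (modified) n-gram precision and a brevity penalty, can be sketched in a few lines. This is a simplified single-reference version for illustration, not the official multi-reference implementation.

```python
# Hand-rolled, single-reference sketch of BLEU's ingredients: clipped n-gram
# precision and a brevity penalty. Not the official implementation.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        clipped = sum(min(c, ref[g]) for g, c in cand.items())   # clip by reference counts
        precisions.append(clipped / max(sum(cand.values()), 1))
    if not all(precisions):
        return 0.0
    bp = 1.0 if len(candidate) > len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "the cat is on the mat".split()
hyp = "the cat sat on the mat".split()
print(round(bleu(hyp, ref), 3))
```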
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. (C) 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
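A shape-level illustration of the bidirectional structure, using PyTorch's built-in RNN module, is given below; it mirrors only the forward/backward concatenation described above, not the paper's training setup or TIMIT experiments.

```python
# Shape-level illustration of a bidirectional RNN: forward and backward hidden
# sequences are concatenated at each time step. Sizes are arbitrary.
import torch
import torch.nn as nn

brnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True, bidirectional=True)
x = torch.randn(4, 10, 8)      # (batch, time, features)
output, h_n = brnn(x)
print(output.shape)            # (4, 10, 32): both directions concatenated per step
print(h_n.shape)               # (2, 4, 16): final hidden state for each direction
```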
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended for Bob, and non-repudiation of receipt evidences destined for Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with a trusted third party (TTP). For the latter ones we discuss the evolution of the TTP's involvement and, among others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters; instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probabilistically. Because there is no crossover rate or mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional one and other methods.
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Scalable and Privacy-Preserving Data Sharing Based on Blockchain. With the development of network technology and cloud computing, data sharing is becoming increasingly popular, and many scholars have conducted in-depth research to promote its development. As the scale of data sharing expands, its privacy protection has become a hot issue in research. Moreover, in data sharing, the data is usually maintained by multiple parties, which brings new challenges to protecting the privacy of these multi-party data. In this paper, we propose a trusted data sharing scheme using blockchain. We use blockchain to prevent the shared data from being tampered with, and use the Paillier cryptosystem to realize the confidentiality of the shared data. In the proposed scheme, the shared data can be traded, and the transaction information is protected by using the (p, t)-threshold Paillier cryptosystem. We conduct experiments in cloud storage scenarios and the experimental results demonstrate the efficiency and effectiveness of the proposed scheme.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Adaptive Human–Robot Interaction Control for Robots Driven by Series Elastic Actuators Series elastic actuators (SEAs) are known to offer a range of advantages over stiff actuators for human–robot interaction, such as high force/torque fidelity, low impedance, and tolerance to shocks. While a variety of SEAs have been developed and implemented in initiatives that involve physical interactions with humans, relatively few control schemes were proposed to deal with the dynamic stability and uncertainties of robotic systems driven by SEAs, and the open issue of safety that resolves the conflicts of motion between the human and the robot has not been systematically addressed. In this paper, a novel continuous adaptive control method is proposed for SEA-driven robots used in human–robot interaction. The proposed method provides a unified formulation for both the robot-in-charge mode, where the robot plays a dominant role to follow a desired trajectory, and the human-in-charge mode, in which the human plays a dominant role to guide the movement of robot. Instead of designing multiple controllers and switching between them, both typical modes are integrated into a single controller, and the transition between two modes is smooth and stable. Therefore, the proposed controller is able to detect the human motion intention and guarantee the safe human–robot interaction. The dynamic stability of the closed-loop system is theoretically proven by using the Lyapunov method, with the consideration of uncertainties in both the robot dynamics and the actuator dynamics. Both simulation and experimental results are presented to illustrate the performance of the proposed controller.
Complementary Stability and Loop Shaping for Improved Human–Robot Interaction Robots intended for high-force interaction with humans face particular challenges to achieve performance and stability. They require low and tunable endpoint impedance as well as high force capacity, and demand actuators with low intrinsic impedance, the ability to exhibit high impedance (relative to the human subject), and a high ratio of force to weight. Force-feedback control can be used to improve actuator performance, but causes well-known interaction stability problems. This paper presents a novel method to design actuator controllers for physically interactive machines. A loop-shaping design method is developed from a study of fundamental differences between interaction control and the more common servo problem. This approach addresses the interaction problem by redefining stability and performance, using a computational approach to search parameter spaces and displaying variations in performance as control parameters are adjusted. A measure of complementary stability is introduced, and the coupled stability problem is transformed to a robust stability problem using limited knowledge of the environment dynamics (in this case, the human). Design examples show that this new measure improves performance beyond the current best-practice stability constraint (passivity). The controller was implemented on an interactive robot, verifying stability and performance. Testing showed that the new controller out-performed a state-of-the-art controller on the same system
An Ankle-Foot Emulation System For The Study Of Human Walking Biomechanics Although below-knee prostheses have been commercially available for some time, today's devices are completely passive, and consequently, their mechanical properties remain fixed across walking speeds and terrain. A lack of understanding of ankle-foot biomechanics and the dynamic interaction between an amputee and a prosthesis is one of the main obstacles in the development of a biomimetic ankle-foot prosthesis. In this paper, we present a novel ankle-foot emulator system for the study of human walking biomechanics. The emulator system is comprised of a high performance, force-controllable, robotic ankle-foot worn by an amputee, interfaced to a mobile computing unit secured around his waist. We show that the system is capable of mimicking normal ankle-foot walking behaviour. An initial pilot study supports the hypothesis that the emulator may provide a more natural gait than a conventional passive prosthesis.
Optimal robust filtering for systems subject to uncertainties In this paper we deal with an optimal filtering problem for uncertain discrete-time systems. Parametric uncertainties of the underlying model are assumed to be norm bounded. We propose an approach based on regularization and penalty function to solve this problem. The optimal robust filter with the respective recursive Riccati equation is written through unified frameworks defined in terms of matrix blocks. These frameworks do not depend on any auxiliary parameters to be tuned. Simulation results show the effectiveness of the robust filter proposed.
Serious games for assessment and rehabilitation of ankle movements This paper presents a set of serious games for assessment and rehabilitation of post-stroke patients. The proposed games are connected to a robotic platform for ankle rehabilitation, which allows the patients to perform tasks regarding dorsiflexion movements and muscle strength. A usability study of the robotic platform was conducted in 19 hemiparetic and 19 healthy subjects, with the aim of evaluating ergonomics issues, safety, the level of difficulty of the games, and the platform's ability to measure subjects' dorsiflexion range of motion and torque. Results from both games are presented and discussed.
Periodic Event-Triggered Suboptimal Control With Sampling Period and Performance Analysis In this paper, the periodic event-triggered suboptimal control (PETSOC) method is developed for continuous-time linear systems. Different from event-triggered control, where the triggering condition is monitored continuously, the developed PETSOC method only verifies the triggering condition periodically at sampling instants, which further reduces computational resources. First, the control gain of the PETSOC is designed based on the algebraic Riccati equation. Subsequently, the periodic event-triggering condition is proposed for the suboptimal control method, which is only verified at sampling instants periodically. The sampling period is determined and analyzed based on the continuous form of the triggering condition. Moreover, the stability and the performance upper bound of the closed-loop system with the PETSOC are proved. Finally, the effectiveness of the developed PETSOC is validated through simulation on an unstable batch reactor.
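The gain-design step mentioned above, a state-feedback gain obtained from the continuous-time algebraic Riccati equation, can be sketched with SciPy as follows. The system matrices are arbitrary illustrative values rather than the batch-reactor example, and the periodic event-triggering condition itself is not reproduced.

```python
# LQR-style state-feedback gain from the continuous-time algebraic Riccati
# equation, the design ingredient of the control gain described above.
# Matrices below are arbitrary illustrative values, not the batch reactor.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [2.0, -1.0]])   # open-loop unstable example system
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)           # control law u = -K x
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```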
Adaptive Optimal Control of Unknown Constrained-Input Systems Using Policy Iteration and Neural Networks This paper presents an online policy iteration (PI) algorithm to learn the continuous-time optimal control solution for unknown constrained-input systems. The proposed PI algorithm is implemented on an actor-critic structure where two neural networks (NNs) are tuned online and simultaneously to generate the optimal bounded control policy. The requirement of complete knowledge of the system dynamics is obviated by employing a novel NN identifier in conjunction with the actor and critic NNs. It is shown how the identifier weights estimation error affects the convergence of the critic NN. A novel learning rule is developed to guarantee that the identifier weights converge to small neighborhoods of their ideal values exponentially fast. To provide an easy-to-check persistence of excitation condition, the experience replay technique is used. That is, recorded past experiences are used simultaneously with current data for the adaptation of the identifier weights. Stability of the whole system consisting of the actor, critic, system state, and system identifier is guaranteed while all three networks undergo adaptation. Convergence to a near-optimal control law is also shown. The effectiveness of the proposed method is illustrated with a simulation example.
Air-Breathing Hypersonic Vehicle Tracking Control Based on Adaptive Dynamic Programming In this paper, we propose a data-driven supplementary control approach with adaptive learning capability for air-breathing hypersonic vehicle tracking control based on action-dependent heuristic dynamic programming (ADHDP). The control action is generated by the combination of sliding mode control (SMC) and the ADHDP controller to track the desired velocity and the desired altitude. In particular, the ADHDP controller observes the differences between the actual velocity/altitude and the desired velocity/altitude, and then provides a supplementary control action accordingly. The ADHDP controller does not rely on the accurate mathematical model function and is data driven. Meanwhile, it is capable to adjust its parameters online over time under various working conditions, which is very suitable for hypersonic vehicle system with parameter uncertainties and disturbances. We verify the adaptive supplementary control approach versus the traditional SMC in the cruising flight, and provide three simulation studies to illustrate the improved performance with the proposed approach.
A Tutorial On Visual Servo Control This article provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines, our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed in detail. Since any visual servo system must be capable of tracking image features in a sequence of images, we also include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.
Factorizing personalized Markov chains for next-basket recommendation Recommender systems are an important component of many websites. Two of the most popular approaches are based on matrix factorization (MF) and Markov chains (MC). MF methods learn the general taste of a user by factorizing the matrix over observed user-item preferences. On the other hand, MC methods model sequential behavior by learning a transition graph over items that is used to predict the next action based on the recent actions of a user. In this paper, we present a method bringing both approaches together. Our method is based on personalized transition graphs over underlying Markov chains. This means that a separate transition matrix is learned for each user - thus in total the method uses a transition cube. As the observations for estimating the transitions are usually very limited, our method factorizes the transition cube with a pairwise interaction model which is a special case of the Tucker Decomposition. We show that our factorized personalized MC (FPMC) model subsumes both a common Markov chain and the normal matrix factorization model. For learning the model parameters, we introduce an adaptation of the Bayesian Personalized Ranking (BPR) framework for sequential basket data. Empirically, we show that our FPMC model outperforms both the common matrix factorization and the unpersonalized MC model, both learned with and without factorization.
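For intuition, the sketch below builds the unfactorized, count-based personalized Markov chain that FPMC subsumes: per-user transition counts between consecutive baskets, used to rank candidate next items. The factorization of the transition cube, which is the paper's key contribution, is omitted, and the basket data is made up.

```python
# Unfactorized personalized Markov chain baseline: per-user transition counts
# between consecutive baskets. FPMC replaces these sparse counts with a
# factorized transition cube; that step is not shown. Data is made up.
from collections import defaultdict

baskets = {                     # user -> ordered list of baskets
    "u1": [{"milk", "bread"}, {"butter"}, {"milk"}],
    "u2": [{"milk"}, {"butter", "jam"}],
}

counts = defaultdict(lambda: defaultdict(int))    # (user, prev_item) -> {next_item: count}
for user, seq in baskets.items():
    for prev_b, next_b in zip(seq, seq[1:]):
        for i in prev_b:
            for j in next_b:
                counts[(user, i)][j] += 1

def score(user, last_basket, item):
    # Average transition probability from the items in the last basket to `item`.
    probs = []
    for i in last_basket:
        row = counts[(user, i)]
        total = sum(row.values())
        probs.append(row[item] / total if total else 0.0)
    return sum(probs) / len(probs)

print(score("u1", {"milk", "bread"}, "butter"), score("u1", {"milk", "bread"}, "jam"))
```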
Load Scheduling and Dispatch for Aggregators of Plug-In Electric Vehicles This paper proposes an operating framework for aggregators of plug-in electric vehicles (PEVs). First, a minimum-cost load scheduling algorithm is designed, which determines the purchase of energy in the day-ahead market based on the forecast electricity price and PEV power demands. The same algorithm is applicable for negotiating bilateral contracts. Second, a dynamic dispatch algorithm is developed, used for distributing the purchased energy to PEVs on the operating day. Simulation results are used to evaluate the proposed algorithms, and to demonstrate the potential impact of an aggregated PEV fleet on the power system.
Fast and Accurate Estimation of RFID Tags Radio frequency identification (RFID) systems have been widely deployed for various applications such as object tracking, 3-D positioning, supply chain management, inventory control, and access control. This paper concerns the fundamental problem of estimating RFID tag population size, which is needed in many applications such as tag identification, warehouse monitoring, and privacy-sensitive RFID systems. In this paper, we propose a new scheme for estimating tag population size called Average Run-based Tag estimation (ART). The technique is based on the average run length of ones in the bit string received using the standardized framed slotted Aloha protocol. ART is significantly faster than prior schemes. For example, given a required confidence interval of 0.1% and a required reliability of 99.9%, ART is consistently 7 times faster than the fastest existing schemes (UPE and EZB) for any tag population size. Furthermore, ART's estimation time is provably independent of the tag population sizes. ART works with multiple readers with overlapping regions and can estimate sizes of arbitrarily large tag populations. ART is easy to deploy because it neither requires modification to tags nor to the communication protocol between tags and readers. ART only needs to be implemented on readers as a software module.
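The statistic at the heart of ART, the average length of runs of ones in the frame bit string, is easy to compute; the sketch below simulates a simplified framed-slotted-Aloha frame and reports that statistic for different population sizes. The analytical inversion from this statistic to a population estimate follows expressions in the paper and is not reproduced here.

```python
# Average run length of ones in a framed-slotted-Aloha bit string, the
# statistic ART inverts to estimate the tag population. The frame simulation
# is a simplified stand-in, not the protocol model from the paper.
import random

def frame_bitstring(num_tags, num_slots, seed=0):
    random.seed(seed)
    slots = [0] * num_slots
    for _ in range(num_tags):
        slots[random.randrange(num_slots)] = 1    # slot is non-empty if any tag picks it
    return slots

def average_run_of_ones(bits):
    runs, current = [], 0
    for b in bits:
        if b:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return sum(runs) / len(runs) if runs else 0.0

for tags in (50, 200, 800):
    print(tags, round(average_run_of_ones(frame_bitstring(tags, 256)), 3))
```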
Adaptive fuzzy tracking control for switched uncertain strict-feedback nonlinear systems. • Adaptive tracking control for switched strict-feedback nonlinear systems is proposed. • The generalized fuzzy hyperbolic model is used to approximate nonlinear functions. • The designed controller has fewer design parameters compared with existing methods.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.214222
0.214222
0.214222
0.214222
0.107111
0.013333
0.003778
0.000667
0
0
0
0
0
0
Decentralized kinematic control of a class of collaborative redundant manipulators via recurrent neural networks This paper studies the decentralized kinematic control of multiple redundant manipulators for the cooperative task execution problem. The problem is formulated as a constrained quadratic programming problem and then a recurrent neural network with independent modules is proposed to solve the problem in a distributed manner. Each module in the neural network controls a single manipulator in real time without explicit communication with others and all the modules together collectively solve the common task. The global stability of the proposed neural network and the optimality of the neural solution are proven in theory. Application orientated simulations demonstrate the effectiveness of the proposed method.
Stability and Convergence Properties of Dynamic Average Consensus Estimators We analyze two different estimation algorithms for dynamic average consensus in sensing and communication networks, a proportional algorithm and a proportional-integral algorithm. We investigate the stability properties of these estimators under changing inputs and network topologies as well as their convergence properties under constant or slowly-varying inputs. In doing so, we discover that the more complex proportional-integral algorithm has performance benefits over the simpler proportional algorithm.
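As a minimal numerical companion, the iteration below runs the static averaging update x ← x − εLx on a small path graph, which is the consensus building block underlying the proportional and proportional-integral dynamic estimators analyzed in the paper; the full estimators with time-varying inputs are not reproduced, and the graph and values are illustrative only.

```python
# Static averaging iteration x <- x - eps * L x on a 4-node path graph, the
# building block behind the dynamic average consensus estimators above.
# Graph, initial values, and step size are illustrative only.
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)      # adjacency of path 0-1-2-3
L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian

x = np.array([4.0, 0.0, 2.0, 6.0])             # initial local values
eps = 0.3                                      # step size below 2 / lambda_max(L)
for _ in range(200):
    x = x - eps * (L @ x)

print(np.round(x, 4), "target average:", np.mean([4.0, 0.0, 2.0, 6.0]))
```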
On NCP-Functions In this paper we reformulate several NCP-functions for the nonlinear complementarity problem (NCP) from their merit function forms and study some important properties of these NCP-functions. We point out that some of these NCP-functions have all the nice properties investigated by Chen, Chen and Kanzow [2] for a modified Fischer-Burmeister function, while some other NCP-functions may lose one or several of these properties. We also provide a modified normal map and a smoothing technique to overcome the limitation of these NCP-functions. A numerical comparison for the behaviour of various NCP-functions is provided.
An Affection-Based Dynamic Leader Selection Model for Formation Control in Multirobot Systems. In this paper, a dynamic leader selection process of a multirobot system with leader-follower strategies is studied in terms of formation control. A fuzzy inference system is employed to evaluate the status of robots by means of their states. Based on the status, an affection-based model is proposed to trigger a leader selection module. Followers send out unsatisfied signals when they are disappointed at the current leader. The abashment value of the leader changes with its own status as well as the number of unsatisfied signals received from its followers. When its abashment value goes beyond a given threshold, a leader reselection process is triggered. Moreover, a swap-greedy algorithm is proposed to approximate the optimal solution for confirming the leader-follower relationship, which can be described as a combinatorial optimization problem to minimize the total travel distance of all the robots. Extensive simulation results demonstrate that the proposed model can improve the probability of a robot team escaping from local extreme points significantly, and even in the case of leader failure, the team can reselect a leader autonomously and keep moving toward the target.
A Strictly Predefined-Time Convergent Neural Solution to Equality- and Inequality-Constrained Time-Variant Quadratic Programming Aiming at solving time-variant problems, a special type of recurrent neural network, termed the zeroing neural network (ZNN), has been proposed, developed, and validated since 2001. Although equality-constrained time-variant quadratic programming (TVQP) has been well solved using the ZNN approach, TVQP problems with inequality constraints involved have not been satisfactorily handled by the existing ZNN models. To overcome this issue, this paper designs a ZNN model with exponential convergence for solving equality- and inequality-constrained TVQP problems. Considering that fast convergence is preferred in some time-critical applications in practice, a predefined-time stabilizer is for the first time utilized to endow the ZNN model with predefined-time convergence, leading to a predefined-time convergent ZNN (PTCZNN) model that exhibits an antecedently- and explicitly-defined convergence time. Theoretical analysis is performed, with the convergence of the two ZNN models, including the predefined-time convergence of the PTCZNN model, rigorously proved. Validations are comparatively conducted to verify the effectiveness and superiority of the PTCZNN model in terms of convergence performance. To demonstrate the potential applications, the PTCZNN model is applied to image fusion and to the kinematic control of two robotic arms with joint limits considered. The efficacy and applicability of the PTCZNN model are validated by the illustrative examples. This is the first ZNN model, since the emergence of ZNNs, developed as a quadratic programming solver that is applicable to kinematic control of robotic arms with joint constraints handled.
From Wasd To Bls With Application To Pattern Classification The single-hidden-layer feedforward neural network (SLFN) exhibits an excellent approximation ability, despite its simple structure. This study examines two neural network models developed based on the general structure of the SLFN, i.e., the weights-and-structure-determination (WASD) neural network and broad learning system (BLS). The BLS and WASD neural networks exhibit many similarities in terms of their structure and algorithm, while BLS incorporates certain distinct ideas that render it an improved version of the WASD neural network. The research described in this paper is the first approach to establish a connection between the BLS and WASD neural networks. Moreover, pattern classification experiments on a foot dataset and several benchmark datasets are conducted using these two kinds of neural network models, and the performance of different models is compared and analyzed. Experimental results show that dividing the feature nodes into one group is the best choice for our foot dataset, and that the cross-layer connectivity and sparse coding of BLS can effectively improve the classification accuracy. In addition, it is shown that the dynamic update algorithm of BLS can improve the accuracy and save a lot of time when updating the model structure, but the final accuracy depends on the initial structure and parameters. Comparative experiments show that BLS achieves 89.12% classification accuracy for the foot dataset, which is the highest among all the demonstrated WASD neural networks and two other SLFN models. Finally, extended experiments on five benchmark datasets show that BLS can achieve satisfactory accuracy on multiple types of datasets, while WASD neural networks are possibly inadequate in processing image datasets in contrast. (C) 2021 Elsevier B.V. All rights reserved.
Robust Control for Mobility and Wireless Communication in Cyber-Physical Systems With Application to Robot Teams. In this paper, a system architecture to provide end-to-end network connectivity for autonomous teams of robots is discussed. The core of the proposed system is a cyber-physical controller whose goal is to ensure network connectivity as robots move to accomplish their assigned tasks. Due to channel quality uncertainties inherent to wireless propagation, we adopt a stochastic model where achievable ...
Event-Triggered Finite-Time Control for Networked Switched Linear Systems With Asynchronous Switching. This paper is concerned with the event-triggered finite-time control problem for networked switched linear systems by using an asynchronous switching scheme. Not only the problem of finite-time boundedness, but also the problem of input-output finite-time stability is considered in this paper. Compared with the existing event-triggered results of the switched systems, a new type of event-triggered...
Neural Architecture Transfer Neural architecture search (NAS) has emerged as a promising avenue for automatically designing task-specific neural networks. Existing NAS approaches require one complete search for each deployment specification of hardware or objective. This is a computationally impractical endeavor given the potentially large number of application scenarios. In this paper, we propose Neural Architecture ...
Are we ready for autonomous driving? The KITTI vision benchmark suite Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.
Human Shoulder Modeling Including Scapulo-Thoracic Constraint And Joint Sinus Cones In virtual human modeling, the shoulder is usually composed of clavicular, scapular and arm segments related by rotational joints. Although the model is improved, the realistic animation of the shoulder is hardly achieved. This is due to the fact that it is difficult to coordinate the simultaneous motion of the shoulder components in a consistent way. Also, the common use of independent one-degree of freedom (DOF) joint hierarchies does not properly render the 3-D accessibility space of real joints. On the basis of former biomechanical investigations, we propose here an extended shoulder model including scapulo-thoracic constraint and joint sinus cones. As a demonstration, the model is applied, using inverse kinematics, to the animation of a 3-D anatomic muscled skeleton model. (C) 2000 Elsevier Science Ltd. All rights reserved.
An Improved RSA Based User Authentication and Session Key Agreement Protocol Usable in TMIS. Recently, Giri et al. proposed an RSA cryptosystem based remote user authentication scheme for telecare medical information systems and claimed that the protocol is secure against all the relevant security attacks. However, we have scrutinized Giri et al.'s protocol and pointed out that the protocol is not secure against off-line password guessing attack and privileged insider attack, and also suffers from an anonymity problem. Moreover, the extension of the password guessing attack leads to more security weaknesses. Therefore, this protocol needs improvement in terms of security before implementation in real-life applications. To fix the mentioned security pitfalls, this paper proposes an improved scheme over Giri et al.'s scheme, which preserves the user anonymity property. We have then simulated the proposed protocol using the widely-accepted AVISPA tool, which ensures that the protocol is SAFE under the OFMC and CL-AtSe models, meaning that the same protocol is secure against active and passive attacks including replay and man-in-the-middle attacks. An informal cryptanalysis has also been presented, which confirmed that the proposed protocol provides good security protection against the relevant security attacks. The performance analysis section compares the proposed protocol with other existing protocols in terms of security, and it has been observed that the protocol provides more security and achieves additional functionalities such as user anonymity and session key verification.
OSMnx: New Methods for Acquiring, Constructing, Analyzing, and Visualizing Complex Street Networks. Urban scholars have studied street networks in various ways, but there are data availability and consistency limitations to the current urban planning/street network analysis literature. To address these challenges, this article presents OSMnx, a new tool to make the collection of data and creation and analysis of street networks simple, consistent, automatable and sound from the perspectives of graph theory, transportation, and urban design. OSMnx contributes five significant capabilities for researchers and practitioners: first, the automated downloading of political boundaries and building footprints; second, the tailored and automated downloading and constructing of street network data from OpenStreetMap; third, the algorithmic correction of network topology; fourth, the ability to save street networks to disk as shapefiles, GraphML, or SVG files; and fifth, the ability to analyze street networks, including calculating routes, projecting and visualizing networks, and calculating metric and topological measures. These measures include those common in urban design and transportation studies, as well as advanced measures of the structure and topology of the network. Finally, this article presents a simple case study using OSMnx to construct and analyze street networks in Portland, Oregon.
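A minimal usage sketch in the spirit of the Portland case study is shown below. The function names (graph_from_place, basic_stats, plot_graph) follow recent OSMnx releases, but exact module locations and signatures vary across versions and the calls require network access to OpenStreetMap, so treat this as an assumption-laden sketch rather than a pinned example.

```python
# Minimal OSMnx sketch: download a drivable street network, report basic
# counts, and plot it. API names assume a recent OSMnx release and network
# access to OpenStreetMap; check the documentation for your installed version.
import osmnx as ox

G = ox.graph_from_place("Portland, Oregon, USA", network_type="drive")
stats = ox.basic_stats(G)
print("nodes:", stats["n"], "edges:", stats["m"])
ox.plot_graph(G)
```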
Learning Feature Recovery Transformer for Occluded Person Re-Identification One major issue that challenges person re-identification (Re-ID) is the ubiquitous occlusion over the captured persons. There are two main challenges for the occluded person Re-ID problem, i.e., the interference of noise during feature matching and the loss of pedestrian information brought by the occlusions. In this paper, we propose a new approach called Feature Recovery Transformer (FRT) to address the two challenges simultaneously, which mainly consists of visibility graph matching and feature recovery transformer. To reduce the interference of the noise during feature matching, we mainly focus on visible regions that appear in both images and develop a visibility graph to calculate the similarity. In terms of the second challenge, based on the developed graph similarity, for each query image, we propose a recovery transformer that exploits the feature sets of its k-nearest neighbors in the gallery to recover the complete features. Extensive experiments across different person Re-ID datasets, including occluded, partial and holistic datasets, demonstrate the effectiveness of FRT. Specifically, FRT significantly outperforms state-of-the-art results by at least 6.2% Rank-1 accuracy and 7.2% mAP scores on the challenging Occluded-Duke dataset.
1.0625
0.06
0.05
0.05
0.05
0.05
0.02
0.000455
0
0
0
0
0
0
Joint Trajectory and Communication Design for Multi-UAV Enabled Wireless Networks. Due to the high maneuverability, flexible deployment, and low cost, unmanned aerial vehicles (UAVs) have attracted significant interest recently in assisting wireless communication. This paper considers a multi-UAV enabled wireless communication system, where multiple UAV-mounted aerial base stations are employed to serve a group of users on the ground. To achieve fair performance among users, we ...
Trajectory Design and Power Control for Multi-UAV Assisted Wireless Networks: A Machine Learning Approach. A novel framework is proposed for the trajectory design of multiple unmanned aerial vehicles (UAVs) based on the prediction of users' mobility information. The problem of joint trajectory design and power control is formulated for maximizing the instantaneous sum transmit rate while satisfying the rate requirement of users. In an effort to solve this pertinent problem, a three-step approach is proposed which is based on machine learning techniques to obtain both the position information of users and the trajectory design of UAVs. Firstly, a multi-agent Q-learning based placement algorithm is proposed for determining the optimal positions of the UAVs based on the initial location of the users. Secondly, in an effort to determine the mobility information of users based on a real dataset, their position data is collected from Twitter to describe the anonymous user-trajectories in the physical world. In the meantime, an echo state network (ESN) based prediction algorithm is proposed for predicting the future positions of users based on the real dataset. Thirdly, a multi-agent Q-learning based algorithm is conceived for predicting the position of UAVs in each time slot based on the movement of users. The algorithm is proved to be able to converge to an optimal state. In this algorithm, multiple UAVs act as agents to find optimal actions by interacting with their environment and learn from their mistakes. Numerical results are provided to demonstrate that as the size of the reservoir increases, the proposed ESN approach improves the prediction accuracy. Finally, we demonstrate that throughput gains of about 17% are achieved.
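For readers unfamiliar with the learning machinery referenced above, the sketch below shows a plain single-agent tabular Q-learning update, the building block behind the multi-agent placement step; the environment, state/action sizes, reward, and hyperparameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Generic tabular Q-learning update, a simplified stand-in for the multi-agent
# placement step described above; grid size, reward, and epsilon are assumptions.
n_states, n_actions = 100, 5          # e.g., discretized UAV positions and moves
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    # Placeholder environment: random next state and reward (hypothetical).
    return np.random.randint(n_states), np.random.rand()

state = 0
for _ in range(10_000):
    # Epsilon-greedy action selection.
    action = np.random.randint(n_actions) if np.random.rand() < eps else int(Q[state].argmax())
    next_state, reward = step(state, action)
    # Standard Q-learning temporal-difference update.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state
```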
Deep Reinforcement Learning for User Association and Resource Allocation in Heterogeneous Cellular Networks. Heterogeneous cellular networks can offload the mobile traffic and reduce the deployment costs, which have been considered to be a promising technique in the next-generation wireless network. Due to the non-convex and combinatorial characteristics, it is challenging to obtain an optimal strategy for the joint user association and resource allocation issue. In this paper, a reinforcement learning (...
Unmanned Aerial Vehicle-Aided Communications: Joint Transmit Power and Trajectory Optimization. This letter investigates the transmit power and trajectory optimization problem for unmanned aerial vehicle (UAV)-aided networks. Different from majority of the existing studies with fixed communication infrastructure, a dynamic scenario is considered where a flying UAV provides wireless services for multiple ground nodes simultaneously. To fully exploit the controllable channel variations provide...
Interference Management for Cellular-Connected UAVs: A Deep Reinforcement Learning Approach In this paper, an interference-aware path planning scheme for a network of cellular-connected unmanned aerial vehicles (UAVs) is proposed. In particular, each UAV aims at achieving a tradeoff between maximizing energy efficiency and minimizing both wireless latency and the interference caused on the ground network along its path. The problem is cast as a dynamic game among UAVs. To solve this game, a deep reinforcement learning algorithm, based on echo state network (ESN) cells, is proposed. The introduced deep ESN architecture is trained to allow each UAV to map each observation of the network state to an action, with the goal of minimizing a sequence of time-dependent utility functions. Each UAV uses the ESN to learn its optimal path, transmission power, and cell association vector at different locations along its path. The proposed algorithm is shown to reach a subgame perfect Nash equilibrium upon convergence. Moreover, an upper bound and a lower bound for the altitude of the UAVs are derived thus reducing the computational complexity of the proposed algorithm. The simulation results show that the proposed scheme achieves better wireless latency per UAV and rate per ground user (UE) while requiring a number of steps that are comparable to a heuristic baseline that considers moving via the shortest distance toward the corresponding destinations. The results also show that the optimal altitude of the UAVs varies based on the ground network density and the UE data rate requirements and plays a vital role in minimizing the interference level on the ground UEs as well as the wireless transmission delay of the UAV.
3D Placement of an Unmanned Aerial Vehicle Base Station (UAV-BS) for Energy-Efficient Maximal Coverage. Unmanned aerial vehicle mounted base stations (UAV-BSs) can provide wireless services in a variety of scenarios. In this letter, we propose an optimal placement algorithm for UAV-BSs that maximizes the number of covered users using the minimum transmit power. We decouple the UAV-BS deployment problem in the vertical and horizontal dimensions without any loss of optimality. Furthermore, we model the UAV-BS deployment in the horizontal dimension as a circle placement problem and a smallest enclosing circle problem. Simulations are conducted to evaluate the performance of the proposed method for different spatial distributions of the users.
Learning-Based Energy-Efficient Data Collection by Unmanned Vehicles in Smart Cities. Mobile crowdsourcing (MCS) is now an important source of information for smart cities, especially with the help of unmanned aerial vehicles (UAVs) and driverless cars. They are equipped with different kinds of high-precision sensors, and can be scheduled/controlled completely during data collection, which will make MCS system more robust. However, they are limited to energy constraint, especially ...
Status updates over unreliable multiaccess channels Applications like environmental sensing, and health and activity sensing, are supported by networks of devices (nodes) that send periodic packet transmissions over the wireless channel to a sink node. We look at simple abstractions that capture the following commonalities of such networks (a) the nodes send periodically sensed information that is temporal and must be delivered in a timely manner, (b) they share a multiple access channel and (c) channels between the nodes and the sink are unreliable (packets may be received in error) and differ in quality. We consider scheduled access and slotted ALOHA-like random access. Under scheduled access, nodes take turns and get feedback on whether a transmitted packet was received successfully by the sink. During its turn, a node may transmit more than once to counter channel uncertainty. For slotted ALOHA-like access, each node attempts transmission in every slot with a certain probability. For these access mechanisms we derive the age of information (AoI), which is a timeliness metric, and arrive at conditions that optimize AoI at the sink. We also analyze the case of symmetric updating, in which updates from different nodes must have the same AoI. We show that ALOHA-like access, while simple, leads to AoI that is worse by a factor of about 2e, in comparison to scheduled access.
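A quick way to see the AoI behaviour described above is a small Monte Carlo simulation of the slotted ALOHA-like access; the attempt probability and per-node channel success probabilities below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Monte Carlo estimate of time-average AoI at the sink under slotted ALOHA-like
# access: each node transmits in a slot with probability p, and a slot succeeds
# for node i only if it is the sole transmitter and its unreliable channel
# delivers the packet. Parameter values are illustrative assumptions.
rng = np.random.default_rng(0)
n_nodes, p, slots = 5, 0.2, 200_000
succ_prob = np.full(n_nodes, 0.8)      # per-node channel success probability
age = np.ones(n_nodes)
age_sum = np.zeros(n_nodes)

for _ in range(slots):
    tx = rng.random(n_nodes) < p                 # which nodes attempt this slot
    if tx.sum() == 1:                            # no collision
        i = int(np.argmax(tx))
        if rng.random() < succ_prob[i]:
            age[i] = 0                           # fresh update delivered
    age += 1                                     # every node's age grows by one slot
    age_sum += age

print("time-average AoI per node:", age_sum / slots)
```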
The dynamic routing algorithm for renewable wireless sensor networks with wireless power transfer. Wireless power transfer is recently considered as a potential approach to remove the lifetime performance bottleneck for wireless sensor networks. By using a wireless charging vehicle (WCV) to periodically recharge each sensor node’s battery, a wireless sensor network may remain operational forever. In this paper, we aim to jointly optimize a dynamic multi-hop data routing, a traveling path (for the WCV to visit all the sensor nodes in a cycle), and a charging schedule (charging time for each sensor node) such that the ratio of the WCV’s vacation time over the cycle time can be maximized. The key challenge of this problem (caused by time-varying data routing) is the integration and differentiation terms in the problem formulation, which yield a very challenging non-polynomial program. To remove these non-polynomial terms, we introduce the concept of an (N+1)-phase solution, which adopts a special dynamic routing scheme. We prove that an optimal (N+1)-phase solution can achieve the same objective value as that achieved by an optimal time-varying solution. We further prove that the optimal traveling path must follow the shortest Hamiltonian cycle. Finally, we linearize the problem for data routing and charging schedule and thus obtain an optimal solution in polynomial time.
Federated Learning Over Wireless Networks: Convergence Analysis and Resource Allocation There is an increasing interest in a fast-growing machine learning technique called Federated Learning (FL), in which the model training is distributed over mobile user equipment (UEs), exploiting UEs' local computation and training data. Despite its advantages such as preserving data privacy, FL still has challenges of heterogeneity across UEs' data and physical resources. To address these challenges, we first propose FEDL, a FL algorithm which can handle heterogeneous UE data without further assumptions except strongly convex and smooth loss functions. We provide a convergence rate characterizing the trade-off between local computation rounds of each UE to update its local model and global communication rounds to update the FL global model. We then employ FEDL in wireless networks as a resource allocation optimization problem that captures the trade-off between FEDL convergence wall clock time and energy consumption of UEs with heterogeneous computing and power resources. Even though the wireless resource allocation problem of FEDL is non-convex, we exploit this problem's structure to decompose it into three sub-problems and analyze their closed-form solutions as well as insights into problem design. Finally, we empirically evaluate the convergence of FEDL with PyTorch experiments, and provide extensive numerical results for the wireless resource allocation sub-problems. Experimental results show that FEDL outperforms the vanilla FedAvg algorithm in terms of convergence rate and test accuracy in various settings.
Analysis of Software Aging in a Web Server Several recent studies have reported & examined the phenomenon that long-running software systems show an increasing failure rate and/or a progressive degradation of their performance. Causes of this phenomenon, which has been referred to as "software aging", are the accumulation of internal error conditions, and the depletion of operating system resources. A proactive technique called "software r...
Grad-CAM++: Generalized Gradient-Based Visual Explanations for Deep Convolutional Networks Over the last decade, Convolutional Neural Network (CNN) models have been highly successful in solving complex vision based problems. However, deep models are perceived as "black box" methods considering the lack of understanding of their internal functioning. There has been a significant recent interest to develop explainable deep learning models, and this paper is an effort in this direction. Building on a recently proposed method called Grad-CAM, we propose Grad-CAM++ to provide better visual explanations of CNN model predictions (when compared to Grad-CAM), in terms of better localization of objects as well as explaining occurrences of multiple objects of a class in a single image. We provide a mathematical explanation for the proposed method, Grad-CAM++, which uses a weighted combination of the positive partial derivatives of the last convolutional layer feature maps with respect to a specific class score as weights to generate a visual explanation for the class label under consideration. Our extensive experiments and evaluations, both subjective and objective, on standard datasets showed that Grad-CAM++ indeed provides better visual explanations for a given CNN architecture when compared to Grad-CAM.
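The weighted combination of positive partial derivatives described above can be written compactly; the following LaTeX sketch states the Grad-CAM++ weights as I read the method description, so it should be checked against the paper before reuse.

```latex
% Sketch of the Grad-CAM++ class-discriminative weights described above:
% w_k^c combines positive partial derivatives of the class score Y^c with
% respect to the last convolutional feature map A^k, via pixel-wise
% coefficients alpha; the saliency map is a ReLU of the weighted sum of maps.
\[
w_k^c \;=\; \sum_{i}\sum_{j} \alpha_{ij}^{kc}\,
            \mathrm{ReLU}\!\left(\frac{\partial Y^c}{\partial A_{ij}^k}\right),
\qquad
\alpha_{ij}^{kc} \;=\;
\frac{\dfrac{\partial^2 Y^c}{(\partial A_{ij}^k)^2}}
     {2\,\dfrac{\partial^2 Y^c}{(\partial A_{ij}^k)^2}
      + \sum_{a}\sum_{b} A_{ab}^k \,\dfrac{\partial^3 Y^c}{(\partial A_{ij}^k)^3}}
\]
\[
L^c_{\text{Grad-CAM++}} \;=\; \mathrm{ReLU}\!\left(\sum_k w_k^c A^k\right)
\]
```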
Large-Scale Hierarchical Text Classification with Recursively Regularized Deep Graph-CNN. Text classification to a hierarchical taxonomy of topics is a common and practical problem. Traditional approaches simply use bag-of-words and have achieved good results. However, when there are a lot of labels with different topical granularities, bag-of-words representation may not be enough. Deep learning models have been proven to be effective at automatically learning different levels of representations for image data. It is interesting to study the best way to represent texts. In this paper, we propose a graph-CNN based deep learning model to first convert texts to graph-of-words, and then use graph convolution operations to convolve the word graph. Graph-of-words representation of texts has the advantage of capturing non-consecutive and long-distance semantics. CNN models have the advantage of learning different levels of semantics. To further leverage the hierarchy of labels, we regularize the deep architecture with the dependency among labels. Our results on both RCV1 and NYTimes datasets show that we can significantly improve large-scale hierarchical text classification over traditional hierarchical text classification and existing deep models.
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
1.024119
0.026667
0.026667
0.025304
0.016944
0.014766
0.008889
0.000346
0.000015
0
0
0
0
0
Semantic Parsing With Syntax- And Table-Aware Sql Generation We present a generative model to map natural language questions into SQL queries. Existing neural network based approaches typically generate a SQL query word-by-word, however, a large portion of the generated results is incorrect or not executable due to the mismatch between question words and table contents. Our approach addresses this problem by considering the structure of table and the syntax of SQL language. The quality of the generated SQL query is significantly improved through (1) learning to replicate content from column names, cells or SQL keywords; and (2) improving the generation of WHERE clause by leveraging the column-cell relation. Experiments are conducted on WikiSQL, a recently released dataset with the largest question-SQL pairs. Our approach significantly improves the state-of-the-art execution accuracy from 69.0% to 74.4%.
Recall-Oriented Evaluation for Information Retrieval Systems. In a recall context, the user is interested in retrieving all relevant documents rather than retrieving a few that are at the top of the results list. In this article we propose ROM (Recall Oriented Measure) which takes into account the main elements that should be considered in evaluating information retrieval systems while ordering them in a way explicitly adapted to a recall context.
Tight Hardness Results for LCS and Other Sequence Similarity Measures Two important similarity measures between sequences are the longest common subsequence (LCS) and the dynamic time warping distance (DTWD). The computations of these measures for two given sequences are central tasks in a variety of applications. Simple dynamic programming algorithms solve these tasks in O(n^2) time, and despite an extensive amount of research, no algorithms with significantly better worst case upper bounds are known. In this paper, we show that for any constant ε > 0, an O(n^(2-ε)) time algorithm for computing the LCS or the DTWD of two sequences of length n over a constant size alphabet, refutes the popular Strong Exponential Time Hypothesis (SETH).
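For reference, the quadratic-time baseline that the hardness result concerns is the textbook dynamic program; a minimal sketch:

```python
def lcs_length(a: str, b: str) -> int:
    """Classic O(n*m) dynamic program for the longest common subsequence,
    i.e., the quadratic baseline that the hardness result above concerns."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1      # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

assert lcs_length("ABCBDAB", "BDCABA") == 4          # e.g., "BCBA"
```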
A Hierarchical Latent Structure for Variational Conversation Modeling. Variational autoencoders (VAE) combined with hierarchical RNNs have emerged as a powerful framework for conversation modeling. However, they suffer from the notorious degeneration problem, where the decoders learn to ignore latent variables and reduce to vanilla RNNs. We empirically show that this degeneracy occurs mostly due to two reasons. First, the expressive power of hierarchical RNN decoders is often high enough to model the data using only its decoding distributions without relying on the latent variables. Second, the conditional VAE structure whose generation process is conditioned on a context, makes the range of training targets very sparse; that is, the RNN decoders can easily overfit to the training data ignoring the latent variables. To solve the degeneration problem, we propose a novel model named Variational Hierarchical Conversation RNNs (VHCR), involving two key ideas of (1) using a hierarchical structure of latent variables, and (2) exploiting an utterance drop regularization. With evaluations on two datasets of Cornell Movie Dialog and Ubuntu Dialog Corpus, we show that our VHCR successfully utilizes latent variables and outperforms state-of-the-art models for conversation generation. Moreover, it can perform several new utterance control tasks, thanks to its hierarchical latent structure.
CASNet: A Cross-Attention Siamese Network for Video Salient Object Detection. Recent works on video salient object detection have demonstrated that directly transferring the generalization ability of image-based models to video data without modeling spatial-temporal information remains nontrivial and challenging. Considering both intraframe accuracy and interframe consistency of saliency detection, this article presents a novel cross-attention based encoder–decoder model un...
A Syntactic Neural Model For General-Purpose Code Generation We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
One billion word benchmark for measuring progress in statistical language modeling. We propose a new benchmark corpus to be used for measuring progress in statistical language modeling. With almost one billion words of training data, we hope this benchmark will be useful to quickly evaluate novel language modeling techniques, and to compare their contribution when combined with other advanced techniques. We show performance of several well-known types of language models, with the best results achieved with a recurrent neural network based language model. The baseline unpruned Kneser-Ney 5-gram model achieves perplexity 67.6; a combination of techniques leads to 35% reduction in perplexity, or 10% reduction in cross-entropy (bits), over that baseline. The benchmark is available as a code.google.com project; besides the scripts needed to rebuild the training/held-out data, it also makes available log-probability values for each word in each of ten held-out data sets, for each of the baseline n-gram models.
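The stated relation between the 35% perplexity reduction and the roughly 10% reduction in cross-entropy (bits) follows from perplexity = 2^(bits per word); a small check using only the numbers quoted above:

```python
import math

# Consistency check of the figures quoted above: a 35% perplexity reduction
# from the 67.6 baseline corresponds to roughly a 10% reduction in
# cross-entropy measured in bits per word.
baseline_ppl = 67.6
improved_ppl = baseline_ppl * (1 - 0.35)

baseline_bits = math.log2(baseline_ppl)
improved_bits = math.log2(improved_ppl)
print(f"bits per word: {baseline_bits:.2f} -> {improved_bits:.2f} "
      f"({100 * (1 - improved_bits / baseline_bits):.1f}% reduction)")
```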
On the ratio of optimal integral and fractional covers It is shown that the ratio of optimal integral and fractional covers of a hypergraph does not exceed 1 + log d , where d is the maximum degree. This theorem may replace probabilistic methods in certain circumstances. Several applications are shown.
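The algorithmic counterpart of this bound is the greedy cover, whose size is within roughly 1 + ln d of the optimal fractional cover; a minimal sketch (the example instance is illustrative):

```python
def greedy_cover(universe, sets):
    """Greedy set cover: repeatedly pick the set covering the most uncovered
    elements. Its size is within roughly 1 + ln(d) of the optimal (fractional)
    cover, where d is the maximum set size, mirroring the bound discussed above."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("instance has no cover")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

universe = range(1, 8)
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {6, 7}, {1, 5, 7}]
print(greedy_cover(universe, sets))   # indices of the chosen sets
```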
Joint Optimization of Radio and Computational Resources for Multicell Mobile-Edge Computing Migrating computational intensive tasks from mobile devices to more resourceful cloud servers is a promising technique to increase the computational capacity of mobile devices while saving their battery energy. In this paper, we consider a MIMO multicell system where multiple mobile users (MUs) ask for computation offloading to a common cloud server. We formulate the offloading problem as the joint optimization of the radio resources (the transmit precoding matrices of the MUs) and the computational resources (the CPU cycles/second assigned by the cloud to each MU), in order to minimize the overall users’ energy consumption, while meeting latency constraints. The resulting optimization problem is nonconvex (in the objective function and constraints). Nevertheless, in the single-user case, we are able to compute the global optimal solution in closed form. In the more challenging multiuser scenario, we propose an iterative algorithm, based on a novel successive convex approximation technique, converging to a local optimal solution of the original nonconvex problem. We then show that the proposed algorithmic framework naturally leads to a distributed and parallel implementation across the radio access points, requiring only a limited coordination/signaling with the cloud. Numerical results show that the proposed schemes outperform disjoint optimization algorithms.
Distributed multirobot localization In this paper, we present a new approach to the problem of simultaneously localizing a group of mobile robots capable of sensing one another. Each of the robots collects sensor data regarding its own motion and shares this information with the rest of the team during the update cycles. A single estimator, in the form of a Kalman filter, processes the available positioning information from all the members of the team and produces a pose estimate for every one of them. The equations for this centralized estimator can be written in a decentralized form, therefore allowing this single Kalman filter to be decomposed into a number of smaller communicating filters. Each of these filters processes the sensor data collected by its host robot. Exchange of information between the individual filters is necessary only when two robots detect each other and measure their relative pose. The resulting decentralized estimation schema, which we call collective localization, constitutes a unique means for fusing measurements collected from a variety of sensors with minimal communication and processing requirements. The distributed localization algorithm is applied to a group of three robots and the improvement in localization accuracy is presented. Finally, a comparison to the equivalent decentralized information filter is provided.
A simplified dual neural network for quadratic programming with its KWTA application. The design, analysis, and application of a new recurrent neural network for quadratic programming, called simplified dual neural network, are discussed. The analysis mainly concentrates on the convergence property and the computational complexity of the neural network. The simplified dual neural network is shown to be globally convergent to the exact optimal solution. The complexity of the neural network architecture is reduced with the number of neurons equal to the number of inequality constraints. Its application to k-winners-take-all (KWTA) operation is discussed to demonstrate how to solve problems with this neural network.
Adaptive dynamic programming and optimal control of nonlinear nonaffine systems. In this paper, a novel optimal control design scheme is proposed for continuous-time nonaffine nonlinear dynamic systems with unknown dynamics by adaptive dynamic programming (ADP). The proposed methodology iteratively updates the control policy online by using the state and input information without identifying the system dynamics. An ADP algorithm is developed, and can be applied to a general class of nonlinear control design problems. The convergence analysis for the designed control scheme is presented, along with rigorous stability analysis for the closed-loop system. The effectiveness of this new algorithm is illustrated by two simulation examples.
Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network. In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (including data from 63 patients with 34,281 events) and testing (including data from 19 patients with 8571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F-score of 0.99 were attained with the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
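As a rough illustration of the kind of model described above, here is a minimal PyTorch sketch of a 1D CNN classifier; the layer count, channel sizes, kernel sizes, and the 6000-sample input length are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Minimal 1D-CNN sketch in the spirit of the model described above (Conv1d +
# ReLU + max pooling + dropout, binary apnea/normal output). All sizes here
# are illustrative assumptions, not the paper's exact architecture.
class ApneaCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=11, padding=5), nn.ReLU(),
            nn.MaxPool1d(4), nn.Dropout(0.25),
            nn.Conv1d(16, 32, kernel_size=11, padding=5), nn.ReLU(),
            nn.MaxPool1d(4), nn.Dropout(0.25),
            nn.Conv1d(32, 64, kernel_size=11, padding=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, 1, signal_length)
        z = self.features(x).squeeze(-1)       # (batch, 64)
        return self.classifier(z)              # (batch, n_classes)

model = ApneaCNN()
logits = model(torch.randn(8, 1, 6000))        # e.g., one-minute ECG epochs
print(logits.shape)                            # torch.Size([8, 2])
```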
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.1055
0.1
0.1
0.1
0.1
0.04045
0.000867
0
0
0
0
0
0
0
Age-optimal Sampling and Transmission Scheduling in Multi-Source Systems. In this paper, we consider the problem of minimizing the age of information in a multi-source system, where samples are taken from multiple sources and sent to a destination via a channel with random delay. Due to interference, only one source can be scheduled at a time. We consider the problem of finding a decision policy that determines the sampling times and transmission order of the sources for minimizing the total average peak age (TaPA) and the total average age (TaA) of the sources. Our investigation of this problem results in an important separation principle: The optimal scheduling strategy and the optimal sampling strategy are independent of each other. In particular, we prove that, for any given sampling strategy, the Maximum Age First (MAF) scheduling strategy provides the best age performance among all scheduling strategies. This transforms our overall optimization problem into an optimal sampling problem, given that the decision policy follows the MAF scheduling strategy. While the zero-wait sampling strategy (in which a sample is generated once the channel becomes idle) is shown to be optimal for minimizing the TaPA, it does not always minimize the TaA. We use Dynamic Programming (DP) to investigate the optimal sampling problem for minimizing the TaA. Finally, we provide an approximate analysis of Bellman's equation to approximate the TaA-optimal sampling strategy by a water-filling solution which is shown to be very close to optimal through numerical evaluations.
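The Maximum Age First rule combined with zero-wait sampling is easy to simulate; the sketch below uses a geometric channel delay as an illustrative assumption rather than the paper's delay model.

```python
import numpy as np

# Minimal simulation sketch of Maximum-Age-First (MAF) scheduling with
# zero-wait sampling: whenever the channel becomes idle, sample the source
# whose age at the destination is currently largest. The geometric delay is an
# illustrative assumption, not the paper's delay model.
rng = np.random.default_rng(1)
n_sources, horizon = 3, 200_000
age = np.ones(n_sources, dtype=float)
total_age = 0.0

t = 0
while t < horizon:
    src = int(np.argmax(age))          # MAF: schedule the stalest source
    delay = int(rng.geometric(0.5))    # transmission delay of this update (slots)
    for _ in range(delay):             # all sources keep aging while in transit
        total_age += age.sum()
        age += 1.0
        t += 1
    age[src] = delay                   # on delivery, the source's age drops to the update's age

print("total average age (TaA estimate):", total_age / t)
```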
Age-Minimal Transmission for Energy Harvesting Sensors With Finite Batteries: Online Policies An energy-harvesting sensor node that is sending status updates to a destination is considered. The sensor is equipped with a battery of finite size to save its incoming energy, and consumes one unit of energy per status update transmission, which is delivered to the destination instantly over an error-free channel. The setting is online in which the harvested energy is revealed to the sensor causally over time after it arrives, and the goal is to design status update transmission times (policy) such that the long term average age of information (AoI) is minimized. The AoI is defined as the time elapsed since the latest update has reached the destination. Two energy arrival models are considered: a random battery recharge (RBR) model, and an incremental battery recharge (IBR) model. In both models, energy arrives according to a Poisson process with unit rate, with values that completely fill up the battery in the RBR model, and with values that fill up the battery incrementally in a unit-by-unit fashion in the IBR model. The key approach to characterizing the optimal status update policy for both models is showing the optimality of renewal policies, in which the inter-update times follow a renewal process in a certain manner that depends on the energy arrival model and the battery size. It is then shown that the optimal renewal policy has an energy-dependent threshold structure, in which the sensor sends a status update only if the AoI grows above a certain threshold that depends on the energy available in its battery. For both the random and the incremental battery recharge models, the optimal energy-dependent thresholds are characterized explicitly, i.e., in closed-form, in terms of the optimal long term average AoI. It is also shown that the optimal thresholds are monotonically decreasing in the energy available in the battery, and that the smallest threshold, which comes in effect when the battery is full, is equal to the optimal long term average AoI.
Minimizing the Age of Information in Wireless Networks with Stochastic Arrivals. We consider a wireless network with a base station serving multiple traffic streams to different destinations. Packets from each stream arrive to the base station according to a stochastic process and are enqueued in a separate (per stream) queue. The queueing discipline controls which packet within each queue is available for transmission. The base station decides, at every time t, which stream to serve to the corresponding destination. The goal of scheduling decisions is to keep the information at the destinations fresh. Information freshness is captured by the Age of Information (AoI) metric. In this paper, we derive a lower bound on the AoI performance achievable by any given network operating under any queueing discipline. Then, we consider three common queueing disciplines and develop both an Optimal Stationary Randomized policy and a Max-Weight policy under each discipline. Our approach allows us to evaluate the combined impact of the stochastic arrivals, queueing discipline and scheduling policy on AoI. We evaluate the AoI performance both analytically and using simulations. Numerical results show that the performance of the Max-Weight policy is close to the analytical lower bound.
Minimizing Age-of-Information with Throughput Requirements in Multi-Path Network Communication We consider the scenario where a sender periodically sends a batch of data to a receiver over a multi-hop network, possibly using multiple paths. Our objective is to minimize peak/average Age-of-Information (AoI) subject to throughput requirements. The consideration of batch generation and multi-path communication differentiates our AoI study from existing ones. We first show that our AoI minimization problems are NP-hard, but only in the weak sense, as we develop an optimal algorithm with a pseudo-polynomial time complexity. We then prove that minimizing AoI and minimizing maximum delay are "roughly" equivalent, in the sense that any optimal solution of the latter is an approximate solution of the former with bounded optimality loss. We leverage this understanding to design a general approximation framework for our problems. It can build upon any α-approximation algorithm of the maximum delay minimization problem, e.g., the algorithm in [13] with α = 1 + ε given any user-defined ε > 0, to construct an (α + c)-approximate solution for minimizing AoI. Here c is a constant depending on the throughput requirements. Simulations over various network topologies validate the effectiveness of our approach.
Cost of not splitting in routing: characterization and estimation This paper studies the performance difference of joint routing and congestion control when either single-path routes or multipath routes are used. Our performance metric is the total utility achieved by jointly optimizing transmission rates using congestion control and paths using source routing. In general, this performance difference is strictly positive and hard to determine--in fact an NP-hard problem. To better estimate this performance gap, we develop analytical bounds to this "cost of not splitting" in routing. We prove that the number of paths needed for optimal multipath routing differs from that of optimal single-path routing by no more than the number of links in the network. We provide a general bound on the performance loss, which is independent of the number of source-destination pairs when the latter is larger than the number of links in a network. We also propose a vertex projection method and combine it with a greedy branch-and-bound algorithm to provide progressively tighter bounds on the performance loss. Numerical examples are used to show the effectiveness of our approximation technique and estimation algorithms.
Minimum Age TDMA Scheduling We consider a transmission scheduling problem in which multiple systems receive update information through a shared Time Division Multiple Access (TDMA) channel. To provide timely delivery of update information, the problem asks for a schedule that minimizes the overall age of information. We call this problem the Min-Age problem. This problem was first studied by He et al. [IEEE Trans. Inform. Theory, 2018], who identified several special cases where the problem can be solved optimally in polynomial time. Our contribution is threefold. First, we introduce a new job scheduling problem called the Min-WCS problem, and we prove that, for any constant r ≥ 1, every r-approximation algorithm for the Min-WCS problem can be transformed into an r-approximation algorithm for the Min-Age problem. Second, we give a randomized 2.733-approximation algorithm and a dynamic-programming-based exact algorithm for the Min-WCS problem. Finally, we prove that the Min-Age problem is NP-hard.
Delivering Deep Learning to Mobile Devices via Offloading Deep learning has the potential to make Augmented Reality (AR) devices smarter, but few AR apps use such technology today because it is compute-intensive, and front-end devices cannot deliver sufficient compute power. We propose a distributed framework that ties together front-end devices with more powerful back-end "helpers" that allow deep learning to be executed locally or to be offloaded. This framework should be able to intelligently use current estimates of network conditions and back-end server loads, in conjunction with the application's requirements, to determine an optimal strategy. This work reports our preliminary investigation in implementing such a framework, in which the front-end is assumed to be smartphones. Our specific contributions include: (1) development of an Android application that performs real-time object detection, either locally on the smartphone or remotely on a server; and (2) characterization of the tradeoffs between object detection accuracy, latency, and battery drain, based on the system parameters of video resolution, deep learning model size, and offloading decision.
Accurate Self-Localization in RFID Tag Information Grids Using FIR Filtering Grid navigation spaces nested with the radio-frequency identification (RFID) tags are promising for industrial and other needs, because each tag can deliver information about a local two-dimensional or three-dimensional surrounding. The approach, however, requires high accuracy in vehicle self-localization. Otherwise, errors may lead to collisions; possibly even fatal. We propose a new extended finite impulse response (EFIR) filtering algorithm and show that it meets this need. The EFIR filter requires an optimal averaging interval, but does not involve the noise statistics which are often not well known to the engineer. It is more accurate than the extended Kalman filter (EKF) under real operation conditions and its iterative algorithm has the Kalman form. Better performance of the proposed EFIR filter is demonstrated based on extensive simulations in a comparison to EKF, which is widely used in RFID tag grids. We also show that errors in noise covariances may provoke divergence in EKF, whereas the EFIR filter remains stable and is thus more robust.
A Privacy-Preserving and Copy-Deterrence Content-Based Image Retrieval Scheme in Cloud Computing. With the increasing importance of images in people’s daily life, content-based image retrieval (CBIR) has been widely studied. Compared with text documents, images consume much more storage space. Hence, its maintenance is considered to be a typical example for cloud storage outsourcing. For privacy-preserving purposes, sensitive images, such as medical and personal images, need to be encrypted before outsourcing, which makes the CBIR technologies in plaintext domain to be unusable. In this paper, we propose a scheme that supports CBIR over encrypted images without leaking the sensitive information to the cloud server. First, feature vectors are extracted to represent the corresponding images. After that, the pre-filter tables are constructed by locality-sensitive hashing to increase search efficiency. Moreover, the feature vectors are protected by the secure kNN algorithm, and image pixels are encrypted by a standard stream cipher. In addition, considering the case that the authorized query users may illegally copy and distribute the retrieved images to someone unauthorized, we propose a watermark-based protocol to deter such illegal distributions. In our watermark-based protocol, a unique watermark is directly embedded into the encrypted images by the cloud server before images are sent to the query user. Hence, when image copy is found, the unlawful query user who distributed the image can be traced by the watermark extraction. The security analysis and the experiments show the security and efficiency of the proposed scheme.
Grey Wolf Optimizer. This work proposes a new meta-heuristic called Grey Wolf Optimizer (GWO) inspired by grey wolves (Canis lupus). The GWO algorithm mimics the leadership hierarchy and hunting mechanism of grey wolves in nature. Four types of grey wolves such as alpha, beta, delta, and omega are employed for simulating the leadership hierarchy. In addition, the three main steps of hunting (searching for prey, encircling prey, and attacking prey) are implemented. The algorithm is then benchmarked on 29 well-known test functions, and the results are verified by a comparative study with Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), Differential Evolution (DE), Evolutionary Programming (EP), and Evolution Strategy (ES). The results show that the GWO algorithm is able to provide very competitive results compared to these well-known meta-heuristics. The paper also considers solving three classical engineering design problems (tension/compression spring, welded beam, and pressure vessel designs) and presents a real application of the proposed method in the field of optical engineering. The results of the classical engineering design problems and real application prove that the proposed algorithm is applicable to challenging problems with unknown search spaces.
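A compact implementation of the hunting/encircling update summarized above looks as follows; the sphere objective, bounds, and population settings are illustrative, and this sketch omits the engineering-design constraint handling.

```python
import numpy as np

# Minimal Grey Wolf Optimizer sketch: positions are pulled toward the three
# best wolves (alpha, beta, delta) while the coefficient a decays linearly
# from 2 to 0. Objective and parameter values are illustrative assumptions.
def gwo(obj, dim=10, n_wolves=20, iters=500, lb=-10.0, ub=10.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(obj, 1, X)
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 * (1 - t / iters)                    # linearly decreasing coefficient
        new_X = np.empty_like(X)
        for i in range(n_wolves):
            pulls = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a
                C = 2 * r2
                D = np.abs(C * leader - X[i])        # encircling distance
                pulls.append(leader - A * D)
            new_X[i] = np.mean(pulls, axis=0)        # average of the three pulls
        X = np.clip(new_X, lb, ub)
    fitness = np.apply_along_axis(obj, 1, X)
    best = int(np.argmin(fitness))
    return X[best], fitness[best]

sol, val = gwo(lambda x: float(np.sum(x ** 2)))       # sphere test function
print(val)
```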
J-RoC: A Joint Routing and Charging scheme to prolong sensor network lifetime The emerging wireless charging technology creates a controllable and perpetual energy source to provide wireless power over distance. Schemes have been proposed to make use of wireless charging to prolong the sensor network lifetime. Unfortunately, existing schemes only passively replenish sensors that are deficient in energy supply, and cannot fully leverage the strengths of this technology. To address the limitation, we propose J-RoC - a practical and efficient Joint Routing and Charging scheme. Through proactively guiding the routing activities in the network and delivering energy to where it is needed, J-RoC not only replenishes energy into the network but also effectively improves the network energy utilization, thus prolonging the network lifetime. To evaluate the performance of the J-RoC scheme, we conduct experiments in a small-scale testbed and simulations in large-scale networks. Evaluation results demonstrate that J-RoC significantly elongates the network lifetime compared to existing wireless charging based schemes.
An Improved RSA Based User Authentication and Session Key Agreement Protocol Usable in TMIS. Recently, Giri et al. proposed an RSA cryptosystem based remote user authentication scheme for telecare medical information systems and claimed that the protocol is secure against all relevant security attacks. However, we have scrutinized Giri et al.'s protocol and shown that it is not secure against off-line password guessing and privileged insider attacks, and that it also suffers from an anonymity problem. Moreover, extending the password guessing attack exposes further security weaknesses. Therefore, this protocol needs improvement in terms of security before being implemented in real-life applications. To fix the mentioned security pitfalls, this paper proposes an improved scheme over Giri et al.'s scheme, which preserves the user anonymity property. We have then simulated the proposed protocol using the widely accepted AVISPA tool, which reports the protocol as SAFE under the OFMC and CL-AtSe models, meaning that the protocol is secure against active and passive attacks, including replay and man-in-the-middle attacks. An informal cryptanalysis is also presented, confirming that the proposed protocol offers strong protection against the relevant security attacks. The performance analysis section compares the proposed protocol with other existing protocols in terms of security, and it is observed that the protocol provides more security and achieves additional functionalities such as user anonymity and session key verification.
Applications of Deep Reinforcement Learning in Communications and Networking: A Survey. This paper presents a comprehensive literature review on applications of deep reinforcement learning (DRL) in communications and networking. Modern networks, e.g., Internet of Things (IoT) and unmanned aerial vehicle (UAV) networks, become more decentralized and autonomous. In such networks, network entities need to make decisions locally to maximize the network performance under uncertainty of network environment. Reinforcement learning has been efficiently used to enable the network entities to obtain the optimal policy including, e.g., decisions or actions, given their states when the state and action spaces are small. However, in complex and large-scale networks, the state and action spaces are usually large, and the reinforcement learning may not be able to find the optimal policy in reasonable time. Therefore, DRL, a combination of reinforcement learning with deep learning, has been developed to overcome the shortcomings. In this survey, we first give a tutorial of DRL from fundamental concepts to advanced models. Then, we review DRL approaches proposed to address emerging issues in communications and networking. The issues include dynamic network access, data rate control, wireless caching, data offloading, network security, and connectivity preservation which are all important to next generation networks, such as 5G and beyond. Furthermore, we present applications of DRL for traffic routing, resource sharing, and data collection. Finally, we highlight important challenges, open issues, and future research directions of applying DRL.
Energy harvesting algorithm considering max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with poor energy always have a bad effect on the data rate or max flow. These nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we assume an energy harvesting WSN environment and investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy harvesting WSN environments, sensor nodes can obtain energy replenishment from MCs or collect energy from nature by themselves. In our research, we use MCs to improve the energy of the sensor nodes by performing multiple rounds of unified scheduling, and finally achieve the purpose of increasing the max flow at the sinks. Firstly, we model this problem as a Linear Program (LP) to search for the max flow in a round of charging scheduling and prove that the problem is NP-hard. In order to solve the problem, we propose a heuristic approach: deploying MCs in units of paths, with priority given to the lowest-energy nodes. To reduce the energy consumption of MCs and increase the charging efficiency, we also take the optimization of MCs’ moving distance into consideration. Finally, we extend the method to multiple rounds of scheduling, called BottleNeck. Simulation results show that BottleNeck performs well at increasing the max flow.
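The quantity that the charging rounds aim to increase is the max flow to the sink; a tiny networkx sketch shows how a low-capacity (bottleneck) link limits it. The topology and capacities are illustrative assumptions, not the paper's instance.

```python
import networkx as nx

# Small illustration of the quantity being maximized above: the max flow from
# the sensor field to the sink. Edge capacities loosely stand in for the
# energy-limited forwarding ability of sensor nodes; numbers are illustrative.
G = nx.DiGraph()
edges = [("s1", "relay", 5), ("s2", "relay", 3), ("s1", "sink", 1),
         ("relay", "sink", 4), ("s2", "sink", 2)]
G.add_weighted_edges_from(edges, weight="capacity")
G.add_edge("source", "s1", capacity=10)     # virtual super-source feeding the sensors
G.add_edge("source", "s2", capacity=10)

flow_value, flow_dict = nx.maximum_flow(G, "source", "sink", capacity="capacity")
print(flow_value)   # the low-capacity relay->sink link caps the achievable max flow
```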
1.1
0.1
0.1
0.1
0.1
0.1
0.05
0
0
0
0
0
0
0
An Adaptive Fuzzy Recurrent Neural Network for Solving the Nonrepetitive Motion Problem of Redundant Robot Manipulators. In order to effectively decrease the joint-angular drifts and end-effector position accumulation errors, a novel adaptive fuzzy recurrent neural network (AFRNN) is proposed and exploited to solve the nonrepetitive motion problem of redundant robot manipulators in this paper. First, a quadratic programming (QP)-based repetitive motion scheme is designed according to the kinematics constraint of red...
Pseudoinverse-type bi-criteria minimization scheme for redundancy resolution of robot manipulators In this paper, a pseudoinverse-type bi-criteria minimization scheme is proposed and investigated for the redundancy resolution of robot manipulators at the joint-acceleration level. Such a bi-criteria minimization scheme combines the weighted minimum acceleration norm solution and the minimum velocity norm solution via a weighting factor. The resultant bi-criteria minimization scheme, formulated as the pseudoinverse-type solution, not only avoids the high joint-velocity and joint-acceleration phenomena but also causes the joint velocity to be near zero at the end of motion. Computer simulation results based on a 4-Degree-of-Freedom planar robot manipulator comprising revolute joints further verify the efficacy and flexibility of the proposed bi-criteria minimization scheme on robotic redundancy resolution.
Kinematic Control of Continuum Manipulators Using a Fuzzy-Model-Based Approach. Continuum manipulators are a rapidly emerging class of robots. However, due to the complexity of their mathematical models and modeling inaccuracies, the development of effective control systems is a particularly challenging task. This paper presents the first attempt on kinematic control of continuum manipulators using a fuzzy-model-based approach. A fuzzy controller is proposed for autonomous ex...
Multilateral Teleoperation With New Cooperative Structure Based on Reconfigurable Robots and Type-2 Fuzzy Logic. This paper develops an innovative multilateral teleoperation system with two haptic devices on the master side and a newly designed reconfigurable multi-fingered robot on the slave side. A novel nonsingular fast terminal sliding-mode algorithm, together with varying dominance factors for cooperation, is proposed to offer this system's fast position and force tracking, as well as an integrated perc...
Compatible Convex-Nonconvex Constrained QP-Based Dual Neural Networks for Motion Planning of Redundant Robot Manipulators Redundant robot manipulators possess huge potential of applications because of their superior flexibility and outstanding accuracy, but their real-time control is a challenging problem. In this brief, a novel compatible convex-nonconvex constrained quadratic programming (CCNC-QP)-based dual neural network (DNN) scheme is proposed for motion planning of redundant robot manipulators. The proposed CC...
Pose Characterization and Analysis of Soft Continuum Robots With Modeling Uncertainties Based on Interval Arithmetic This paper introduces a systematical interval-based framework of inherent uncertainties and pose evaluation for a class of soft continuum robots driven by flexible shafts. A more general model of continuum robots driven by shaft tendons is extended from prior kinematic models. On top of the proposed model, the interval-based analysis is presented to analyze and characterize the pose of continuum robots considering uncertainties in kinematic parameters and joint inputs. A 3-D printed bending actuator driven by a flexible shaft is evaluated for case study based on the proposed interval-valued framework. This paper investigates and compares a couple of refinement methods and proposes a new way of sensitivity analysis of model parameters based on interval arithmetic. The kinematic and mechanics parameters are measured and identified experimentally with a representation of intervals. The in-plane motion experiment validates that the computed bounds can enclose all the measured tip positions with consideration of the measurement uncertainty. The method is also validated when external loading is exerted. Note to Practitioners: With the fast development of soft robotics, the increasing number of soft robotic manipulators show great potentials of application in industries such as agriculture, biomedicine, home automation, manufacturing, logistics, and domestic service. Thanks to the mechanical compliance, soft manipulators demonstrate environmental adaptability at the cost of very precise positioning. However, the guaranteed bounds of the reaching range are valuable in order to provide users a good knowledge of the product performance, which is the motivation of this paper. This paper provides not only a method and framework based on interval analysis to evaluate the pose of soft actuators, but also a case study of uncertainty interval characterization procedure which is domain-specific for soft actuators. The presented method framework can be implemented for continuum manipulators and soft actuators as well as any other robotic devices featured by such flexible components.
On the security of public key protocols Recently the use of public key encryption to provide secure network communication has received considerable attention. Such public key systems are usually effective against passive eavesdroppers, who merely tap the lines and try to decipher the message. It has been pointed out, however, that an improperly designed protocol could be vulnerable to an active saboteur, one who may impersonate another user or alter the message being transmitted. Several models are formulated in which the security of protocols can be discussed precisely. Algorithms and characterizations that can be used to determine protocol security in these models are given.
Theory and Applications of Robust Optimization In this paper we survey the primary research, both theoretical and applied, in the area of robust optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
Revenue-optimal task scheduling and resource management for IoT batch jobs in mobile edge computing With the growing prevalence of Internet of Things (IoT) devices and technology, a burgeoning computing paradigm, namely mobile edge computing (MEC), has been proposed and designed to accommodate the application requirements of IoT scenarios. In this paper, we focus on the problems of dynamic task scheduling and resource management in the MEC environment, with the specific objective of maximizing the revenue earned by edge service providers. While the majority of task scheduling and resource management problems are formulated as integer programs (IP) and solved in an undesirable NP-hard manner, we investigate the problem structure and identify a favorable property, namely totally unimodular constraints. The totally unimodular property further helps to design an equivalent linear programming (LP) problem which can be solved efficiently and elegantly at polynomial computational complexity. In order to evaluate our proposed approach, we conduct simulations based on a real-life IoT dataset to verify the effectiveness and efficiency of our approach.
Efficient k-out-of-n oblivious transfer schemes with adaptive and non-adaptive queries In this paper we propose efficient two-round k-out-of-n oblivious transfer schemes, in which R sends O(k) messages to S, and S sends O(n) messages back to R. The computation cost of R and S is reasonable. The choices of R are unconditionally secure. For the basic scheme, the secrecy of unchosen messages is guaranteed if the Decisional Diffie-Hellman problem is hard. When k=1, our basic scheme is as efficient as the most efficient 1-out-of-n oblivious transfer scheme. Our schemes have the nice property of universal parameters, that is each pair of R and S need neither hold any secret key nor perform any prior setup (initialization). The system parameters can be used by all senders and receivers without any trapdoor specification. Our k-out-of-n oblivious transfer schemes are the most efficient ones in terms of the communication cost, in both rounds and the number of messages. Moreover, one of our schemes can be extended in a straightforward way to an adaptive k-out-of-n oblivious transfer scheme, which allows the receiver R to choose the messages one by one adaptively. In our adaptive-query scheme, S sends O(n) messages to R in one round in the commitment phase. For each query of R, only O(1) messages are exchanged and O(1) operations are performed. In fact, the number k of queries need not be pre-fixed or known beforehand. This makes our scheme highly flexible.
Minimum acceleration criterion with constraints implies bang-bang control as an underlying principle for optimal trajectories of arm reaching movements. Rapid arm-reaching movements serve as an excellent test bed for any theory about trajectory formation. How are these movements planned? A minimum acceleration criterion has been examined in the past, and the solution obtained, based on the Euler-Poisson equation, failed to predict that the hand would begin and end the movement at rest (i.e., with zero acceleration). Therefore, this criterion was rejected in favor of the minimum jerk, which was proved to be successful in describing many features of human movements. This letter follows an alternative approach and solves the minimum acceleration problem with constraints using Pontryagin's minimum principle. We use the minimum principle to obtain minimum acceleration trajectories and use the jerk as a control signal. In order to find a solution that does not include nonphysiological impulse functions, constraints on the maximum and minimum jerk values are assumed. The analytical solution provides a three-phase piecewise constant jerk signal (bang-bang control) where the magnitude of the jerk and the two switching times depend on the magnitude of the maximum and minimum available jerk values. This result fits the observed trajectories of reaching movements and takes into account both the extrinsic coordinates and the muscle limitations in a single framework. The minimum acceleration with constraints principle is discussed as a unifying approach for many observations about the neural control of movements.
An Automatic Screening Approach for Obstructive Sleep Apnea Diagnosis Based on Single-Lead Electrocardiogram Traditional approaches for obstructive sleep apnea (OSA) diagnosis are apt to using multiple channels of physiological signals to detect apnea events by dividing the signals into equal-length segments, which may lead to incorrect apnea event detection and weaken the performance of OSA diagnosis. This paper proposes an automatic-segmentation-based screening approach with the single channel of Electrocardiogram (ECG) signal for OSA subject diagnosis, and the main work of the proposed approach lies in three aspects: (i) an automatic signal segmentation algorithm is adopted for signal segmentation instead of the equal-length segmentation rule; (ii) a local median filter is improved for reduction of the unexpected RR intervals before signal segmentation; (iii) the designed OSA severity index and additional admission information of OSA suspects are plugged into support vector machine (SVM) for OSA subject diagnosis. A real clinical example from PhysioNet database is provided to validate the proposed approach and an average accuracy of 97.41% for subject diagnosis is obtained which demonstrates the effectiveness for OSA diagnosis.
Multiple switching-time-dependent discretized Lyapunov functions/functionals methods for stability analysis of switched time-delay stochastic systems. This paper presents novel approaches for stability analysis of switched linear time-delay stochastic systems under dwell time constraint. Instead of using comparison principle, piecewise switching-time-dependent discretized Lyapunov functions/functionals are introduced to analyze the stability of switched stochastic systems with constant or time-varying delays. These Lyapunov functions/functionals are decreasing during the dwell time and non-increasing at switching instants, which lead to two mode-dependent dwell-time-based delay-independent stability criteria for the switched systems without restricting the stability of the subsystems. Comparison and numerical examples are provided to show the efficiency of the proposed results.
Intention-detection strategies for upper limb exosuits: model-based myoelectric vs dynamic-based control The cognitive human-robot interaction between an exosuit and its wearer plays a key role in determining both the biomechanical effects of the device on movements and its perceived effectiveness. There is a lack of evidence, however, on the comparative performance of different control methods, implemented on the same device. Here, we compare two different control approaches on the same robotic suit: a model-based myoelectric control (myoprocessor), which estimates the joint torque from the activation of target muscles, and a dynamic-based control that provides support against gravity using an inverse dynamic model. Tested on a cohort of four healthy participants, assistance from the exosuit results in a marked reduction in the effort of muscles working against gravity with both control approaches (peak reduction of 68.6±18.8%, for the dynamic arm model and 62.4±25.1% for the myoprocessor), when compared to an unpowered condition. Neither of the two controllers had an effect on the performance of their users in a joint-angle tracking task (peak errors of 15.4° and 16.4° for the dynamic arm model and myoprocessor, respectively, compared to 13.1° in the unpowered condition). However, our results highlight the remarkable ability of the myoprocessor to seamlessly adapt to changing external dynamics.
1.2
0.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
Minimizing the Age of Information in Wireless Networks with Stochastic Arrivals We consider a wireless network with a base station serving multiple traffic streams to different destinations. Packets from each stream arrive to the base station according to a stochastic process and are enqueued in a separate (per stream) queue. The queueing discipline controls which packet within each queue is available for transmission. The base station decides, at every time t, which stream to serve to the corresponding destination. The goal of scheduling decisions is to keep the information at the destinations fresh. Information freshness is captured by the Age of Information (AoI) metric. In this paper, we derive a lower bound on the AoI performance achievable by any given network operating under any queueing discipline. Then, we consider three common queueing disciplines and develop both an Optimal Stationary Randomized policy and a Max-Weight policy under each discipline. Our approach allows us to evaluate the combined impact of the stochastic arrivals, queueing discipline and scheduling policy on AoI. We evaluate the AoI performance both analytically and using simulations. Numerical results show that the performance of the Max-Weight policy is close to the analytical lower bound.
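As an aside, the Max-Weight scheduling idea described in the abstract above can be illustrated with a short Python sketch. This is a toy simulation under stated assumptions, not the paper's policy or lower bound: it assumes a keep-freshest single-buffer queueing discipline per stream, error-free transmissions, and a weighted age-reduction index of the form alpha_i*(h_i - a_i); the function name simulate_maxweight_aoi and all parameter values are hypothetical.

import random

def simulate_maxweight_aoi(num_streams=3, arrival_probs=(0.3, 0.3, 0.3),
                           weights=(1.0, 1.0, 1.0), horizon=10_000, seed=0):
    """Toy Max-Weight-style AoI scheduler (illustrative sketch only).

    Assumptions (not from the paper): single-buffer-per-stream queue where a
    new arrival replaces any older waiting packet, and the base station serves
    the stream whose delivery yields the largest weighted AoI reduction.
    """
    rng = random.Random(seed)
    aoi = [1] * num_streams          # AoI at each destination
    pkt_age = [None] * num_streams   # age of the freshest waiting packet
    total_aoi = 0.0

    for _ in range(horizon):
        # arrivals: keep only the freshest packet per stream
        for i in range(num_streams):
            if rng.random() < arrival_probs[i]:
                pkt_age[i] = 0

        # Max-Weight-style decision: weighted AoI reduction if served now
        best, best_gain = None, 0.0
        for i in range(num_streams):
            if pkt_age[i] is not None:
                gain = weights[i] * (aoi[i] - pkt_age[i])
                if gain > best_gain:
                    best, best_gain = i, gain

        # serve the chosen stream (successful delivery assumed)
        if best is not None:
            aoi[best] = pkt_age[best]
            pkt_age[best] = None

        # ages grow by one slot
        for i in range(num_streams):
            aoi[i] += 1
            if pkt_age[i] is not None:
                pkt_age[i] += 1

        total_aoi += sum(w * a for w, a in zip(weights, aoi))

    return total_aoi / horizon

if __name__ == "__main__":
    print("average weighted AoI:", simulate_maxweight_aoi())

In a study like the one summarized above, the time-average weighted AoI produced by such an index policy would be compared against an analytical lower bound and against other queueing disciplines.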
A Low-Complexity Analytical Modeling for Cross-Layer Adaptive Error Protection in Video Over WLAN We find a low-complexity and accurate model to solve the problem of optimizing MAC-layer transmission of real-time video over wireless local area networks (WLANs) using cross-layer techniques. The objective in this problem is to obtain the optimal MAC retry limit in order to minimize the total packet loss rate. First, the accuracy of Fluid and M/M/1/K analytical models is examined. Then we derive a closed-form expression for service time in WLAN MAC transmission, and will use this in mathematical formulation of our optimization problem based on M/G/1 model. Subsequently we introduce an approximate and simple formula for MAC-layer service time, which leads to the M/M/1 model. Compared with M/G/1, we particularly show that our M/M/1-based model provides a low-complexity and yet quite accurate means for analyzing MAC transmission process in WLAN. Using our M/M/1 model-based analysis, we derive closed-form formulas for the packet overflow drop rate and optimum retry-limit. These closed-form expressions can be effectively invoked for analyzing adaptive retry-limit algorithms. Simulation results (network simulator-2) will verify the accuracy of our analytical models.
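For context, the following LaTeX block gives the standard textbook M/M/1 and M/M/1/K expressions of the kind such closed-form queueing analyses build on; these are not the paper's specific retry-limit or overflow-drop formulas, and the symbols (arrival rate lambda, service rate mu, system capacity K) are the usual textbook ones.

% Illustrative textbook results only (assumption: not the paper's formulas).
% Arrival rate \lambda, service rate \mu, load \rho=\lambda/\mu,
% system capacity K (packets in queue plus in service).
\[
  P_{\text{drop}}^{\,M/M/1/K} \;=\; \frac{(1-\rho)\,\rho^{K}}{1-\rho^{K+1}}
  \quad (\rho \neq 1),
  \qquad
  \mathbb{E}[T]_{M/M/1} \;=\; \frac{1}{\mu-\lambda}
  \quad (\rho < 1).
\]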
Poisson Arrivals See Time Averages In many stochastic models, particularly in queueing theory, Poisson arrivals both observe (see) a stochastic process and interact with it. In particular cases and/or under restrictive assumptions it ...
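For readers unfamiliar with the property, the standard textbook statement of PASTA is reproduced below in LaTeX; this is general background under the usual lack-of-anticipation assumption, not a quotation from the paper.

% Textbook statement (assumption: standard form, not quoted from the paper).
% For a Poisson arrival process {t_k} that does not anticipate the future of
% the observed process {X(t)}, arrival averages equal time averages:
\[
  \lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}\mathbf{1}\{X(t_k^-)\in B\}
  \;=\;
  \lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\mathbf{1}\{X(t)\in B\}\,dt
  \quad \text{almost surely, for any state set } B.
\]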
Data Aggregation and Packet Bundling of Uplink Small Packets for Monitoring Applications in LTE. In cellular massive machine-type communications, a device can transmit directly to the BS or through an aggregator (intermediate node). While direct device-BS communication has recently been the focus of 5G/3GPP research and standardization efforts, the use of aggregators remains a less explored topic. In this article we analyze the deployment scenarios in which aggregators can perform cellular ac...
Reliable Transmission of Short Packets through Queues and Noisy Channels under Latency and Peak-Age Violation Guarantees. This paper investigates the probability that the delay and the peak-age of information exceed a desired threshold in a point-to-point communication system with short information packets. The packets are generated according to a stationary memoryless Bernoulli process, placed in a single-server queue and then transmitted over a wireless channel. A variable-length stop-feedback coding scheme—a general strategy that encompasses simple automatic repetition request (ARQ) and more sophisticated hybrid ARQ techniques as special cases—is used by the transmitter to convey the information packets to the receiver. By leveraging finite-blocklength results, the delay violation and the peak-age violation probabilities are characterized without resorting to approximations based on large-deviation theory as in previous literature. Numerical results illuminate the dependence of delay and peak-age violation probability on system parameters such as the frame size and the undetected error probability, and on the chosen packet-management policy. The guidelines provided by our analysis are particularly useful for the design of low-latency ultra-reliable communication systems.
Age of information in a decentralized network of parallel queues with routing and packets losses The paper deals with age of information (AoI) in a network of multiple sources and parallel queues with buffering capabilities, preemption in service and losses in served packets. The queues do not communicate between each other and the packets are dispatched through the queues according to a predefined probabilistic routing. By making use of the stochastic hybrid system (SHS) method, we provide a...
Probabilistic encryption A new probabilistic model of data encryption is introduced. For this model, under suitable complexity assumptions, it is proved that extracting any information about the cleartext from the cyphertext is hard on the average for an adversary with polynomially bounded computational resources. The proof holds for any message space with any probability distribution. The first implementation of this model is presented. The security of this implementation is proved under the intractability assumption of deciding Quadratic Residuosity modulo composite numbers whose factorization is unknown.
Theory and Applications of Robust Optimization In this paper we survey the primary research, both theoretical and applied, in the area of robust optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
Revenue-optimal task scheduling and resource management for IoT batch jobs in mobile edge computing With the growing prevalence of Internet of Things (IoT) devices and technology, a burgeoning computing paradigm namely mobile edge computing (MEC) is delicately proposed and designed to accommodate the application requirements of IoT scenario. In this paper, we focus on the problems of dynamic task scheduling and resource management in MEC environment, with the specific objective of achieving the optimal revenue earned by edge service providers. While the majority of task scheduling and resource management algorithms are formulated by an integer programming (IP) problem and solved in a dispreferred NP-hard manner, we innovatively investigate the problem structure and identify a favorable property namely totally unimodular constraints. The totally unimodular property further helps to design an equivalent linear programming (LP) problem which can be efficiently and elegantly solved at polynomial computational complexity. In order to evaluate our proposed approach, we conduct simulations based on real-life IoT dataset to verify the effectiveness and efficiency of our approach.
Space-time super-resolution. We propose a method for constructing a video sequence of high space-time resolution by combining information from multiple low-resolution video sequences of the same dynamic scene. Super-resolution is performed simultaneously in time and in space. By "temporal super-resolution," we mean recovering rapid dynamic events that occur faster than regular frame-rate. Such dynamic events are not visible (or else are observed incorrectly) in any of the input sequences, even if these are played in "slow-motion." The spatial and temporal dimensions are very different in nature, yet are interrelated. This leads to interesting visual trade-offs in time and space and to new video applications. These include: 1) treatment of spatial artifacts (e.g., motion-blur) by increasing the temporal resolution and 2) combination of input sequences of different space-time resolutions (e.g., NTSC, PAL, and even high quality still images) to generate a high quality video sequence. We further analyze and compare characteristics of temporal super-resolution to those of spatial super-resolution. These include: How many video cameras are needed to obtain increased resolution? What is the upper bound on resolution improvement via super-resolution? What is the temporal analogue to the spatial "ringing" effect?
Data-Driven Intelligent Transportation Systems: A Survey For the last two decades, intelligent transportation systems (ITS) have emerged as an efficient way of improving the performance of transportation systems, enhancing travel security, and providing more choices to travelers. A significant change in ITS in recent years is that much more data are collected from a variety of sources and can be processed into various forms for different stakeholders. The availability of a large amount of data can potentially lead to a revolution in ITS development, changing an ITS from a conventional technology-driven system into a more powerful multifunctional data-driven intelligent transportation system (D2ITS): a system that is vision, multisource, and learning algorithm driven to optimize its performance. Furthermore, D2ITS is trending to become a privacy-aware, people-centric, more intelligent system. In this paper, we provide a survey on the development of D2ITS, discussing the functionality of its key components and some deployment issues associated with D2ITS. Future research directions for the development of D2ITS are also presented.
Online Prediction of Driver Distraction Based on Brain Activity Patterns This paper presents a new computational framework for early detection of driver distractions (map viewing) using brain activity measured by electroencephalographic (EEG) signals. Compared with most studies in the literature, which are mainly focused on the classification of distracted and nondistracted periods, this study proposes a new framework to prospectively predict the start and end of a distraction period, defined by map viewing. The proposed prediction algorithm was tested on a data set of continuous EEG signals recorded from 24 subjects. During the EEG recordings, the subjects were asked to drive from an initial position to a destination using a city map in a simulated driving environment. The overall accuracy values for the prediction of the start and the end of map viewing were 81% and 70%, respectively. The experimental results demonstrated that the proposed algorithm can predict the start and end of map viewing with relatively high accuracy and can be generalized to individual subjects. The outcome of this study has a high potential to improve the design of future intelligent navigation systems. Prediction of the start of map viewing can be used to provide route information based on a driver's needs and consequently avoid map-viewing activities. Prediction of the end of map viewing can be used to provide warnings for potential long map-viewing durations. Further development of the proposed framework and its applications in driver-distraction predictions are also discussed.
A robust medical image watermarking against salt and pepper noise for brain MRI images. The ever-growing numbers of medical digital images and the need to share them among specialists and hospitals for better and more accurate diagnosis require that patients' privacy be protected. During the transmission of medical images between hospitals or specialists through the network, the main priority is to protect a patient's documents against any act of tampering by unauthorised individuals. Because of this, there is a need for medical image authentication scheme to enable proper diagnosis on patient. In addition, medical images are also susceptible to salt and pepper impulse noise through the transmission in communication channels. This noise may also be intentionally used by the invaders to corrupt the embedded watermarks inside the medical images. A common drawback of existing watermarking methods is their weakness against salt and pepper noise. The research carried out in this work addresses the issue of designing a new watermarking method that can withstand high density of salt and pepper noise for brain MRI images. For this purpose, combination of a spatial domain watermarking method, channel coding and noise filtering schemes are used. The region of non-interest (RONI) of MRI images from five different databases are used as embedding area and electronic patient record (EPR) is considered as embedded data. The quality of watermarked image is evaluated using Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), and the accuracy of the extracted watermark is assessed in terms of Bit Error Rate (BER).
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education is highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets, to compensate for robot’s limitations, however, this study applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. Robot’s behavioral style (social or neutral) hardly differed overall, however, seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
1.2
0.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
Model-Free Radio Map Estimation in Massive MIMO Systems via Semi-Parametric Gaussian Regression Accurate radio maps will be very much needed to provide environmental awareness and effectively manage future wireless networks. Most of the research so far has focused on developing power mapping algorithms for single and omnidirectional antenna systems. In this letter, we investigate the construction of crowdsourcing-based radio maps for 5G cellular systems with massive directional antenna arrays (spatial multiplexing), proposing an original technique based on semi-parametric Gaussian regression. The proposed method is model-free and provides highly accurate estimates of the radio maps, outperforming fully parametric and non-parametric solutions.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended to Bob, and non-repudiation of receipt evidences destined to Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with trusted third party (TTP). For the latter ones we discuss the evolution of the TTP's involvement and, among others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probabilistically. Because there is no crossover rate or mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than the conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results over the conventional one and other methods.
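A minimal Python sketch of the idea of conditional genetic operators on a toy unit-cost set-covering instance follows. The specific trigger conditions used here (crossover only when parents differ enough, mutation only when the child duplicates a parent), the greedy repair step, and all function names are assumptions chosen for illustration; they are not necessarily the conditions proposed in the paper.

import random

def greedy_repair(chrom, subsets, universe):
    """Ensure feasibility: add subsets until every element is covered."""
    covered = set()
    for i, bit in enumerate(chrom):
        if bit:
            covered |= subsets[i]
    for i in range(len(chrom)):
        if covered >= universe:
            break
        if not chrom[i]:
            chrom[i] = 1
            covered |= subsets[i]
    return chrom

def cost(chrom):
    return sum(chrom)  # unit-cost set covering: minimise number of subsets used

def conditional_ga(subsets, universe, pop_size=30, generations=200, seed=1):
    # Illustrative sketch: trigger conditions below are assumptions, not the paper's.
    rng = random.Random(seed)
    n = len(subsets)
    pop = [greedy_repair([rng.randint(0, 1) for _ in range(n)], subsets, universe)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        new_pop = pop[:2]                      # elitism
        while len(new_pop) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)
            # conditional crossover: only if parents are sufficiently different
            if sum(a != b for a, b in zip(p1, p2)) > n // 10:
                cut = rng.randrange(1, n)
                child = p1[:cut] + p2[cut:]
            else:
                child = list(p1)
            # conditional mutation: only if the child duplicates a parent
            if child == p1 or child == p2:
                j = rng.randrange(n)
                child[j] ^= 1
            new_pop.append(greedy_repair(list(child), subsets, universe))
        pop = new_pop
    return min(pop, key=cost)

if __name__ == "__main__":
    universe = set(range(12))
    subsets = [set(random.Random(i).sample(range(12), 4)) for i in range(20)]
    best = conditional_ga(subsets, universe)
    print("subsets used:", cost(best))

The point of the sketch is only that applying operators when a condition is met removes the need to tune crossover and mutation rates, which is the design choice the abstract emphasizes.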
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Accelerated Evaluation of Automated Vehicles in Car-Following Maneuvers. The safety of automated vehicles (AVs) must be assured before their release and deployment. The current approach to evaluation relies primarily on 1) testing AVs on public roads or 2) track testing with scenarios defined in a test matrix. These two methods have completely opposing drawbacks: the former, while offering realistic scenarios, takes too much time to execute and the latter, though it ca...
Adaptive generation of challenging scenarios for testing and evaluation of autonomous vehicles. •A novel framework for generating test cases for autonomous vehicles is proposed.•Adaptive sampling significantly reduces the number of simulations required.•Adjacency clustering identifies performance boundaries of the system.•Approach successfully applied to complex unmanned underwater vehicle missions.
Requirements-driven Test Generation for Autonomous Vehicles with Machine Learning Components Autonomous vehicles are complex systems that are challenging to test and debug. A requirements-driven approach to the development process can decrease the resources required to design and test these systems, while simultaneously increasing the reliability. We present a testing framework that uses signal temporal logic (STL), which is a precise and unambiguous requirements language. Our framework e...
Using Ontology-Based Traffic Models for More Efficient Decision Making of Autonomous Vehicles The paper describes how a high-level abstract world model can be used to support the decision-making process of an autonomous driving system. The approach uses a hierarchical world model and distinguishes between a low-level model for the trajectory planning and a high-level model for solving the traffic coordination problem. The abstract world model used in the CyberCars-2 project is presented. It is based on a topological lane segmentation and introduces relations to represent the semantic context of the traffic scenario. This makes it much easier to realize a consistent and complete driving control system, and to analyze, evaluate and simulate such a system.
Extracting Traffic Primitives Directly from Naturalistically Logged Data for Self-Driving Applications. Developing an automated vehicle, that can handle complicated driving scenarios and appropriately interact with other road users, requires the ability to semantically learn and understand driving environment, oftentimes, based on analyzing massive amounts of naturalistic driving data. An important paradigm that allows automated vehicles to both learn from human drivers and gain insights is understa...
Using Ontologies for Test Suites Generation for Automated and Autonomous Driving Functions In this paper, we outline a general automated testing approach to be applied for verification and validation of automated and autonomous driving functions. The approach makes use of ontologies of environment the system under test is interacting with. Ontologies are automatically converted into input models for combinatorial testing, which are used to generate test cases. The obtained abstract test cases are used to generate concrete test scenarios that provide the basis for simulation used to verify the functionality of the system under test. We discuss the general approach including its potential for automation in the automotive domain where there is growing need for sophisticated verification based on simulation in case of automated and autonomous vehicles.
Adversarial Evaluation of Autonomous Vehicles in Lane-Change Scenarios Autonomous vehicles must be comprehensively evaluated before deployed in cities and highways. However, most existing evaluation approaches for autonomous vehicles are static and lack adaptability, so they are usually inefficient in generating challenging scenarios for tested vehicles. In this paper, we propose an adaptive evaluation framework to efficiently evaluate autonomous vehicles in adversarial environments generated by deep reinforcement learning. Considering the multimodal nature of dangerous scenarios, we use ensemble models to represent different local optimums for diversity. We then utilize a nonparametric Bayesian method to cluster the adversarial policies. The proposed method is validated in a typical lane-change scenario that involves frequent interactions between the ego vehicle and the surrounding vehicles. Results show that the adversarial scenarios generated by our method significantly degrade the performance of the tested vehicles. We also illustrate different patterns of generated adversarial environments, which can be used to infer the weaknesses of the tested vehicles.
Human-Like Decision Making for Autonomous Driving: A Noncooperative Game Theoretic Approach Considering that human-driven vehicles and autonomous vehicles (AVs) will coexist on roads in the future for a long time, how to merge AVs into human drivers' traffic ecology and minimize the effect of AVs and their misfit with human drivers, are issues worthy of consideration. Moreover, different passengers have different needs for AVs, thus, how to provide personalized choices for different passengers is another issue for AVs. Therefore, a human-like decision making framework is designed for AVs in this paper. Different driving styles and social interaction characteristics are formulated for AVs regarding driving safety, ride comfort and travel efficiency, which are considered in the modeling process of decision making. Then, Nash equilibrium and Stackelberg game theory are applied to the noncooperative decision making. In addition, potential field method and model predictive control (MPC) are combined to deal with the motion prediction and planning for AVs, which provides predicted motion information for the decision-making module. Finally, two typical testing scenarios of lane change, i.e., merging and overtaking, are carried out to evaluate the feasibility and effectiveness of the proposed decision-making framework considering different human-like behaviors. Testing results indicate that both the two game theoretic approaches can provide reasonable human-like decision making for AVs. Compared with the Nash equilibrium approach, under the normal driving style, the cost value of decision making using the Stackelberg game theoretic approach is reduced by over 20%.
Predictive Haptic Feedback for Obstacle Avoidance Based on Model Predictive Control New sensing and steering technologies enable safety systems that work with the driver to ensure a safe and collision-free vehicle trajectory using a shared control approach. These shared control systems must constantly balance the sometimes competing objectives of following the driver’s command and maintaining a feasible trajectory for the vehicle. This paper presents a novel technique for creating haptic steering feedback based on a prediction of the system’s need to intervene in the future. This feedback mirrors the tension between the two controller objectives of following the driver and maintaining a feasible path. The paper uses simulation and experiment to investigate the impact of varying the prediction horizon on system performance. A novel in-vehicle driver study based on decoupling visual and haptic cues demonstrates that this feedback provides a statistically significant improvement in response time and reduced time to collision (TTC) in an obstacle avoidance task.
Constraint-handling in nature-inspired numerical optimization: Past, present and future. In their original versions, nature-inspired search algorithms such as evolutionary algorithms and those based on swarm intelligence, lack a mechanism to deal with the constraints of a numerical optimization problem. Nowadays, however, there exists a considerable amount of research devoted to design techniques for handling constraints within a nature-inspired algorithm. This paper presents an analysis of the most relevant types of constraint-handling techniques that have been adopted with nature-inspired algorithms. From them, the most popular approaches are analyzed in more detail. For each of them, some representative instantiations are further discussed. In the last part of the paper, some of the future trends in the area, which have been only scarcely explored, are briefly discussed and then the conclusions of this paper are presented.
Multiresolution Spatial and Temporal Coding in a Wireless Sensor Network for Long-Term Monitoring Applications In many WSN (wireless sensor network) applications, such as [1], [2], [3], the targets are to provide long-term monitoring of environments. In such applications, energy is a primary concern because sensor nodes have to regularly report data to the sink and need to continuously work for a very long time so that users may periodically request a rough overview of the monitored environment. On the other hand, users may occasionally query more in-depth data of certain areas to analyze abnormal events. These requirements motivate us to propose a multiresolution compression and query (MRCQ) framework to support in-network data compression and data storage in WSNs from both space and time domains. Our MRCQ framework can organize sensor nodes hierarchically and establish multiresolution summaries of sensing data inside the network, through spatial and temporal compressions. In the space domain, only lower resolution summaries are sent to the sink; the other higher resolution summaries are stored in the network and can be obtained via queries. In the time domain, historical data stored in sensor nodes exhibit a finer resolution for more recent data, and a coarser resolution for older data. Our methods consider the hardware limitations of sensor nodes. So, the result is expected to save sensors' energy significantly, and thus, can support long-term monitoring WSN applications. A prototyping system is developed to verify its feasibility. Simulation results also show the efficiency of MRCQ compared to existing work.
Development of Recurrent Neural Network Considering Temporal-Spatial Input Dynamics for Freeway Travel Time Modeling The artificial neural network (ANN) is one advanced approach to freeway travel time prediction. Various studies using different inputs have come to no consensus on the effects of input selections. In addition, very little discussion has been made on the temporal-spatial aspect of the ANN travel time prediction process. In this study, we employ an ANN ensemble technique to analyze the effects of various input settings on the ANN prediction performances. Volume, occupancy, and speed are used as inputs to predict travel times. The predictions are then compared against the travel times collected from the toll collection system in Houston. The results show speed or occupancy measured at the segment of interest may be used as sole input to produce acceptable predictions, but all three variables together tend to yield the best prediction results. The inclusion of inputs from both upstream and downstream segments is statistically better than using only the inputs from current segment. It also appears that the magnitude of prevailing segment travel time can be used as a guideline to set up temporal input delays for better prediction accuracies. The evaluation of spatiotemporal input interactions reveals that past information on downstream and current segments is useful in improving prediction accuracy whereas past inputs from the upstream location do not provide as much constructive information. Finally, a variant of the state-space neural network (SSNN), namely the time-delayed state-space neural network (TDSSNN), is proposed and compared against other popular ANN models. The comparison shows that the TDSSNN outperforms other networks and remains very comparable with the SSNN. Future research is needed to analyze TDSSNN's ability in corridor prediction settings.
Computing Urban Traffic Congestions by Incorporating Sparse GPS Probe Data and Social Media Data Estimating urban traffic conditions of an arterial network with GPS probe data is a practically important while substantially challenging problem, and has attracted increasing research interests recently. Although GPS probe data is becoming a ubiquitous data source for various traffic related applications currently, they are usually insufficient for fully estimating traffic conditions of a large arterial network due to the low sampling frequency. To explore other data sources for more effectively computing urban traffic conditions, we propose to collect various traffic events such as traffic accident and jam from social media as complementary information. In addition, to further explore other factors that might affect traffic conditions, we also extract rich auxiliary information including social events, road features, Point of Interest (POI), and weather. With the enriched traffic data and auxiliary information collected from different sources, we first study the traffic co-congestion pattern mining problem with the aim of discovering which road segments geographically close to each other are likely to co-occur traffic congestion. A search tree based approach is proposed to efficiently discover the co-congestion patterns. These patterns are then used to help estimate traffic congestions and detect anomalies in a transportation network. To fuse the multisourced data, we finally propose a coupled matrix and tensor factorization model named TCE_R to more accurately complete the sparse traffic congestion matrix by collaboratively factorizing it with other matrices and tensors formed by other data. We evaluate the proposed model on the arterial network of downtown Chicago with 1,257 road segments whose total length is nearly 700 miles. The results demonstrate the superior performance of TCE_R by comprehensive comparison with existing approaches.
Enhanced Coordinated Operations of Electric Power and Transportation Networks via EV Charging Services Electric power and transportation networks become increasingly coupled through electric vehicles (EV) charging station (EVCS) as the penetration of EVs continues to grow. In this paper, we propose a holistic framework to enhance the operation of coordinated electric power distribution network (PDN) and urban transportation network (UTN) via EV charging services. Under this framework, a bi-level mo...
1.07558
0.075556
0.075556
0.066667
0.066667
0.066667
0.066667
0.033333
0.000111
0
0
0
0
0
Actuator Failure Compensation-Based Adaptive Control of Active Suspension Systems with Prescribed Performance In this article, we study the control problem of the vehicle active suspension systems (ASSs) subject to actuator failure. An adaptive control scheme is presented to stabilize the vertical displacement of the car-body. Meanwhile, the ride comfort, road holding, and suspension space limitation can be guaranteed. In order to overcome the uncertainty, the neural network is developed to approximate th...
Network-Induced Constraints in Networked Control Systems—A Survey Networked control systems (NCSs) have, in recent years, brought many innovative impacts to control systems. However, great challenges are also met due to the network-induced imperfections. Such network-induced imperfections are handled as various constraints, which should appropriately be considered in the analysis and design of NCSs. In this paper, the main methodologies suggested in the literature to cope with typical network-induced constraints, namely time delays, packet losses and disorder, time-varying transmission intervals, competition of multiple nodes accessing networks, and data quantization are surveyed; the constraints suggested in the literature on the first two types of constraints are updated in different categorizing ways; and those on the latter three types of constraints are extended.
Gateway Framework for In-Vehicle Networks based on CAN, FlexRay and Ethernet This paper proposes a gateway framework for in-vehicle networks based on CAN, FlexRay, and Ethernet. The proposed gateway framework is designed to be easy to reuse and verify, in order to reduce development costs and time. The gateway framework can be configured, and its verification environment is automatically generated by a program with a dedicated graphical user interface. The gateway framework provides state of the art functionalities that include parallel reprogramming, diagnostic routing, network management, dynamic routing update, multiple routing configuration, and security. The proposed gateway framework was developed, and its performance was analyzed and evaluated.
Adaptive Parameter Estimation and Control Design for Robot Manipulators With Finite-Time Convergence. For parameter identifications of robot systems, most existing works have focused on the estimation veracity, but few works of literature are concerned with the convergence speed. In this paper, we developed a robot control/identification scheme to identify the unknown robot kinematic and dynamic parameters with enhanced convergence rate. Superior to the traditional methods, the information of para...
Finite-Time H∞ Estimator Design for Switched Discrete-Time Delayed Neural Networks With Event-Triggered Strategy This article is concerned with the event-triggered finite-time H∞ estimator design for a class of discrete-time switched neural networks (SNNs) with mixed time delays and packet dropouts. To further reduce the data transmission, both the measured information of system outputs and switching signal of the SNNs are on...
An Overview of Recent Advances in Event-Triggered Consensus of Multiagent Systems. Event-triggered consensus of multiagent systems (MASs) has attracted tremendous attention from both theoretical and practical perspectives due to the fact that it enables all agents eventually to reach an agreement upon a common quantity of interest while significantly alleviating utilization of communication and computation resources. This paper aims to provide an overview of recent advances in e...
On the security of public key protocols Recently the use of public key encryption to provide secure network communication has received considerable attention. Such public key systems are usually effective against passive eavesdroppers, who merely tap the lines and try to decipher the message. It has been pointed out, however, that an improperly designed protocol could be vulnerable to an active saboteur, one who may impersonate another user or alter the message being transmitted. Several models are formulated in which the security of protocols can be discussed precisely. Algorithms and characterizations that can be used to determine protocol security in these models are given.
Theory and Applications of Robust Optimization In this paper we survey the primary research, both theoretical and applied, in the area of robust optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
Revenue-optimal task scheduling and resource management for IoT batch jobs in mobile edge computing With the growing prevalence of Internet of Things (IoT) devices and technology, a burgeoning computing paradigm namely mobile edge computing (MEC) is delicately proposed and designed to accommodate the application requirements of IoT scenario. In this paper, we focus on the problems of dynamic task scheduling and resource management in MEC environment, with the specific objective of achieving the optimal revenue earned by edge service providers. While the majority of task scheduling and resource management algorithms are formulated by an integer programming (IP) problem and solved in a dispreferred NP-hard manner, we innovatively investigate the problem structure and identify a favorable property namely totally unimodular constraints. The totally unimodular property further helps to design an equivalent linear programming (LP) problem which can be efficiently and elegantly solved at polynomial computational complexity. In order to evaluate our proposed approach, we conduct simulations based on real-life IoT dataset to verify the effectiveness and efficiency of our approach.
Efficient k-out-of-n oblivious transfer schemes with adaptive and non-adaptive queries In this paper we propose efficient two-round k-out-of-n oblivious transfer schemes, in which R sends O(k) messages to S, and S sends O(n) messages back to R. The computation cost of R and S is reasonable. The choices of R are unconditionally secure. For the basic scheme, the secrecy of unchosen messages is guaranteed if the Decisional Diffie-Hellman problem is hard. When k=1, our basic scheme is as efficient as the most efficient 1-out-of-n oblivious transfer scheme. Our schemes have the nice property of universal parameters, that is, each pair of R and S need neither hold any secret key nor perform any prior setup (initialization). The system parameters can be used by all senders and receivers without any trapdoor specification. Our k-out-of-n oblivious transfer schemes are the most efficient ones in terms of the communication cost, in both rounds and the number of messages. Moreover, one of our schemes can be extended in a straightforward way to an adaptive k-out-of-n oblivious transfer scheme, which allows the receiver R to choose the messages one by one adaptively. In our adaptive-query scheme, S sends O(n) messages to R in one round in the commitment phase. For each query of R, only O(1) messages are exchanged and O(1) operations are performed. In fact, the number k of queries need not be pre-fixed or known beforehand. This makes our scheme highly flexible.
The concept of flow in collaborative game-based learning Generally, high-school students have been characterized as bored and disengaged from the learning process. However, certain educational designs promote excitement and engagement. Game-based learning is assumed to be such a design. In this study, the concept of flow is used as a framework to investigate student engagement in the process of gaming and to explain effects on game performance and student learning outcome. Frequency 1550, a game about medieval Amsterdam merging digital and urban play spaces, has been examined as an exemplar of game-based learning. This 1-day game was played in teams by 216 students of three schools for secondary education in Amsterdam. Generally, these students show flow with their game activities, although they were distracted by solving problems in technology and navigation. Flow was shown to have an effect on their game performance, but not on their learning outcome. Distractive activities and being occupied with competition between teams did show an effect on the learning outcome of students: the fewer students were distracted from the game and the more they were engaged in group competition, the more students learned about the medieval history of Amsterdam. Consequences for the design of game-based learning in secondary education are discussed.
Completely Pinpointing the Missing RFID Tags in a Time-Efficient Way Radio Frequency Identification (RFID) technology has been widely used in inventory management in many scenarios, e.g., warehouses, retail stores, hospitals, etc. This paper investigates a challenging problem of complete identification of missing tags in large-scale RFID systems. Although this problem has attracted extensive attention from academy and industry, the existing work can hardly satisfy the stringent real-time requirements. In this paper, a Slot Filter-based Missing Tag Identification (SFMTI) protocol is proposed to reconcile some expected collision slots into singleton slots and filter out the expected empty slots as well as the unreconcilable collision slots, thereby achieving the improved time-efficiency. The theoretical analysis is conducted to minimize the execution time of the proposed SFMTI. We then propose a cost-effective method to extend SFMTI to the multi-reader scenarios. The extensive simulation experiments and performance results demonstrate that the proposed SFMTI protocol outperforms the most promising Iterative ID-free Protocol (IIP) by reducing nearly 45% of the required execution time, and is just within a factor of 1.18 from the lower bound of the minimum execution time.
A blind medical image watermarking: DWT-SVD based robust and secure approach for telemedicine applications. In this paper, a blind image watermarking scheme based on discrete wavelet transform (DWT) and singular value decomposition (SVD) is proposed. In this scheme, DWT is applied on ROI (region of interest) of the medical image to get different frequency subbands of its wavelet decomposition. On the low frequency subband LL of the ROI, block-SVD is applied to get different singular matrices. A pair of elements with similar values is identified from the left singular value matrix of these selected blocks. The values of these pairs are modified using certain threshold to embed a bit of watermark content. Appropriate threshold is chosen to achieve the imperceptibility and robustness of medical image and watermark contents respectively. For authentication and identification of original medical image, one watermark image (logo) and other text watermark have been used. The watermark image provides authentication whereas the text data represents electronic patient record (EPR) for identification. At receiving end, blind recovery of both watermark contents is performed by a similar comparison scheme used during the embedding process. The proposed algorithm is applied on various groups of medical images like X-ray, CT scan and mammography. This scheme offers better visibility of watermarked image and recovery of watermark content due to DWT-SVD combination. Moreover, use of Hamming error correcting code (ECC) on EPR text bits reduces the BER and thus provides better recovery of EPR. The performance of proposed algorithm with EPR data coding by Hamming code is compared with the BCH error correcting code and it is found that later one perform better. A result analysis shows that imperceptibility of watermarked image is better as PSNR is above 43 dB and WPSNR is above 52 dB for all set of images. In addition, robustness of the scheme is better than existing scheme for similar set of medical images in terms of normalized correlation coefficient (NCC) and bit-error-rate (BER). An analysis is also carried out to verify the performance of the proposed scheme for different size of watermark contents (image and EPR data). It is observed from analysis that the proposed scheme is also appropriate for watermarking of color image. Using proposed scheme, watermark contents are extracted successfully under various noise attacks like JPEG compression, filtering, Gaussian noise, Salt and pepper noise, cropping, filtering and rotation. Performance comparison of proposed scheme with existing schemes shows proposed scheme has better robustness against different types of attacks. Moreover, the proposed scheme is also robust under set of benchmark attacks known as checkmark attacks.
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education is highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets, to compensate for robot’s limitations, however, this study applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. Robot’s behavioral style (social or neutral) hardly differed overall, however, seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
Scores: 1.1, 0.1, 0.1, 0.1, 0.1, 0.025, 0, 0, 0, 0, 0, 0, 0, 0
Short-Term Estimation and Prediction of Pedestrian Density in Urban Hot Spots Based on Mobile Phone Data Short-term estimation and prediction of pedestrian density in urban hot spots (e.g., railway station, shopping mall, etc.) is an important topic for traffic management and control in densely populated areas. In this paper, we propose a short-term pedestrian density estimation and prediction method based on mobile phone data. Firstly, pedestrian density in hot spots is estimated using mobile phone data. To decrease the positioning errors of mobile phone data, a modified particle filter method, which considers the movements of pedestrians, is applied for pre-processing the data. An efficient spatial access method (i.e., Hilbert R-tree) is adopted to construct pedestrians’ position indexes for realizing the short-term estimation. Secondly, based on the estimation results, the spatiotemporal extended Kalman filter (SEKF) is proposed for the short-term prediction of pedestrian density. A massive mobile phone dataset collected in Nanjing, China is used in the case study. The estimated pedestrian density from Monday to Thursday is used for pedestrian density prediction on Friday. The results show that the proposed method can estimate and predict pedestrian density in hot spots, especially in small-scale sites of hot spots efficiently in a short time. Comparing with classical prediction methods, the proposed SEKF method predicts short-term pedestrian density in urban hot spots more accurately.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
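As a rough illustration of the scoring idea in the BLEU entry above, the sketch below computes clipped n-gram precision combined with a brevity penalty. It is a simplified, unsmoothed toy version for intuition only, not the reference implementation.

```python
# Toy BLEU-style score: clipped (modified) n-gram precision plus a brevity
# penalty. Unsmoothed and simplified; real implementations add smoothing and
# careful tokenization.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, references, max_n=4):
    cand = candidate.split()
    refs = [r.split() for r in references]
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand_counts = ngrams(cand, n)
        if not cand_counts:
            return 0.0
        # Clip each candidate n-gram by its maximum count in any single reference.
        max_ref = Counter()
        for ref in refs:
            for gram, cnt in ngrams(ref, n).items():
                max_ref[gram] = max(max_ref[gram], cnt)
        clipped = sum(min(cnt, max_ref[gram]) for gram, cnt in cand_counts.items())
        p_n = clipped / sum(cand_counts.values())
        if p_n == 0.0:
            return 0.0
        log_prec += math.log(p_n)
    # Brevity penalty: penalize candidates shorter than the closest reference.
    c = len(cand)
    r = min((abs(len(ref) - c), len(ref)) for ref in refs)[1]
    bp = 1.0 if c > r else math.exp(1.0 - r / c)
    return bp * math.exp(log_prec / max_n)

refs = ["the cat is on the mat", "there is a cat on the mat"]
print(round(bleu("the cat sat on the mat", refs, max_n=2), 3))  # bigram BLEU, about 0.707
```

The demo uses bigram BLEU only so that the toy candidate scores nonzero; with 4-grams and no smoothing a short sentence with no 4-gram match would score zero.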
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers, all of them capable of stabilizing a specific LTI process, in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in the positive and negative time directions. The structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported.
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended to Bob, and non-repudiation of receipt evidences destined to Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with trusted third party (TTP). For the later ones we discuss the evolution of the TTP's involvement and, between others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probability. Because there are no crossover rate and mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than the conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results over the conventional one and other methods.
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal to develop interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which rise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in specific, in this paper, we enfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidate it based on our comprehensive empirical experience with research in kindergarten settings, both laboratory and real-world contexts, and (iii) provide a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
CISSKA-LSB: color image steganography using stego key-directed adaptive LSB substitution method. Information hiding is an active area of research where secret information is embedded in innocent-looking carriers such as images and videos for hiding its existence while maintaining their visual quality. Researchers have presented various image steganographic techniques since the last decade, focusing on payload and image quality. However, there is a trade-off between these two metrics and keeping a better balance between them is still a challenging issue. In addition, the existing methods fail to achieve better security due to direct embedding of secret data inside images without encryption consideration, making data extraction relatively easy for adversaries. Therefore, in this work, we propose a secure image steganographic framework based on stego key-directed adaptive least significant bit (SKA-LSB) substitution method and multi-level cryptography. In the proposed scheme, stego key is encrypted using a two-level encryption algorithm (TLEA); secret data is encrypted using a multi-level encryption algorithm (MLEA), and the encrypted information is then embedded in the host image using an adaptive LSB substitution method, depending on secret key, red channel, MLEA, and sensitive contents. The quantitative and qualitative experimental results indicate that the proposed framework maintains a better balance between image quality and security, achieving a reasonable payload with relatively less computational complexity, which confirms its effectiveness compared to other state-of-the-art techniques.
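The entry above builds on LSB substitution. The sketch below shows only the plain LSB baseline, not the proposed SKA-LSB scheme (no stego-key-directed channel selection, adaptivity, or encryption layers), and the payload string is purely illustrative.

```python
# Plain least-significant-bit (LSB) embedding baseline -- the generic idea the
# SKA-LSB scheme above builds on (its key-directed selection and multi-level
# encryption are NOT reproduced here).
import numpy as np

def embed_lsb(cover: np.ndarray, message: bytes) -> np.ndarray:
    """Hide `message` in the least significant bits of a uint8 image."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("message too large for this cover image")
    stego = flat.copy()
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return stego.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bytes: int) -> bytes:
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
stego = embed_lsb(cover, b"illustrative secret payload")
print(extract_lsb(stego, 27))  # b'illustrative secret payload'
```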
Geometric attacks on image watermarking systems Synchronization errors can lead to significant performance loss in image watermarking methods, as the geometric attacks in the Stirmark benchmark software show. The authors describe the most common types of geometric attacks and survey proposed solutions.
Genetic Optimization Of Radial Basis Probabilistic Neural Networks This paper discusses using genetic algorithms (GA) to optimize the structure of radial basis probabilistic neural networks (RBPNN), including how to select hidden centers of the first hidden layer and how to determine the controlling parameter of the Gaussian kernel functions. In the process of constructing the genetic algorithm, a novel encoding method is proposed for optimizing the RBPNN structure. This encoding method can not only make the selected hidden centers sufficiently reflect the key distribution characteristics of the training samples set and reduce the number of hidden centers as far as possible, but also simultaneously determine the optimum controlling parameters of the Gaussian kernel functions matching the selected hidden centers. Additionally, we also propose a new fitness function so as to make the designed RBPNN as simple as possible in its network structure without losing network performance. Finally, we take two benchmark problems, discriminating the two-spiral data and classifying the iris data, as examples to test and evaluate the designed GA. The experimental results illustrate that our designed GA can significantly reduce the required number of hidden centers, compared with the recursive orthogonal least square algorithm (ROLSA) and the modified K-means algorithm (MKA). In particular, by means of statistical experiments it was proved that the RBPNN optimized by our designed GA still has a better generalization performance than the ones obtained by the ROLSA and the MKA, in spite of the network scale having been greatly reduced. Additionally, our experimental results also demonstrate that our designed GA is also suitable for optimizing radial basis function neural networks (RBFNN).
Current status and key issues in image steganography: A survey. Steganography and steganalysis are the prominent research fields in information hiding paradigm. Steganography is the science of invisible communication while steganalysis is the detection of steganography. Steganography means “covered writing” that hides the existence of the message itself. Digital steganography provides potential for private and secure communication that has become the necessity of most of the applications in today’s world. Various multimedia carriers such as audio, text, video, image can act as cover media to carry secret information. In this paper, we have focused only on image steganography. This article provides a review of fundamental concepts, evaluation measures and security aspects of steganography system, various spatial and transform domain embedding schemes. In addition, image quality metrics that can be used for evaluation of stego images and cover selection measures that provide additional security to embedding scheme are also highlighted. Current research trends and directions to improve on existing methods are suggested.
Hybrid local and global descriptor enhanced with colour information. Feature extraction is one of the most important steps in computer vision tasks such as object recognition, image retrieval and image classification. It describes an image by a set of descriptors where the best one gives a high quality description and a low computation. In this study, the authors propose a novel descriptor called histogram of local and global features using speeded up robust featur...
Secure visual cryptography for medical image using modified cuckoo search. Optimal secure visual cryptography for brain MRI medical image is proposed in this paper. Initially, the brain MRI images are selected and then discrete wavelet transform is applied to the brain MRI image for partitioning the image into blocks. Then Gaussian based cuckoo search algorithm is utilized to select the optimal position for every block. Next the proposed technique creates the dual shares from the secret image. Then the secret shares are embedded in the corresponding positions of the blocks. After embedding, the extraction operation is carried out. Here visual cryptographic design is used for the purpose of image authentication and verification. The extracted secret image has dual shares, based on that the receiver views the input image. The authentication and verification of medical image are assisted with the help of target database. All the secret images are registered previously in the target database. The performance of the proposed method is estimated by Peak Signal to Noise Ratio (PSNR), Mean square error (MSE) and normalized correlation. The implementation is done by MATLAB platform.
Digital watermarking techniques for image security: a review Multimedia technology usages is increasing day by day and to provide authorized data and protecting the secret information from unauthorized use is highly difficult and involves a complex process. By using the watermarking technique, only authorized user can use the data. Digital watermarking is a widely used technology for the protection of digital data. Digital watermarking deals with the embedding of secret data into actual information. Digital watermarking techniques are classified into three major categories, and they were based on domain, type of document (text, image, music or video) and human perception. Performance of the watermarked images is analysed using Peak signal to noise ratio, mean square error and bit error rate. Watermarking of images has been researched profoundly for its specialized and modern achievability in all media applications such as copyrights protection, medical reports (MRI scan and X-ray), annotation and privacy control. This paper reviews the watermarking technique and its merits and demerits.
Orthogonal moments based on exponent functions: Exponent-Fourier moments. In this paper, we propose a new set of orthogonal moments based on exponent functions, named Exponent-Fourier moments (EFMs), which are suitable for image analysis and rotation invariant pattern recognition. Compared with Zernike polynomials of the same degree, the new radial functions have more zeros, and these zeros are evenly distributed; this property gives EFMs a strong ability to describe images. Unlike Zernike moments, the kernel computation of EFMs is extremely simple. Theoretical and experimental results show that Exponent-Fourier moments perform very well in terms of image reconstruction capability and invariant recognition accuracy under noise-free, noisy and smooth distortion conditions. The Exponent-Fourier moments can be thought of as generalized orthogonal complex moments.
On the ratio of optimal integral and fractional covers It is shown that the ratio of optimal integral and fractional covers of a hypergraph does not exceed 1 + log d , where d is the maximum degree. This theorem may replace probabilistic methods in certain circumstances. Several applications are shown.
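The covering-ratio result above is closely tied to the standard greedy covering heuristic, whose classical analysis is where logarithmic-in-degree bounds of this kind arise. The sketch below shows that generic greedy heuristic on a made-up instance; it is an illustration, not the paper's proof technique.

```python
# Standard greedy set-cover heuristic on a toy instance; the data are made up.
def greedy_cover(universe, sets):
    """Return indices of sets chosen greedily until `universe` is covered."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the set covering the most still-uncovered elements.
        best = max(range(len(sets)), key=lambda i: len(uncovered & sets[i]))
        if not uncovered & sets[best]:
            raise ValueError("universe cannot be covered by the given sets")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

universe = range(1, 11)
sets = [{1, 2, 3, 8}, {1, 2, 3, 4, 5}, {4, 5, 7}, {5, 6, 7}, {6, 7, 8, 9, 10}]
print(greedy_cover(universe, sets))  # [1, 4] -- covers 1..10 with 2 sets
```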
Interference Alignment and Degrees of Freedom of the K-User Interference Channel For the fully connected K user wireless interference channel where the channel coefficients are time-varying and are drawn from a continuous distribution, the sum capacity is characterized as C(SNR)=K/2log(SNR)+o(log(SNR)) . Thus, the K user time-varying interference channel almost surely has K/2 degrees of freedom. Achievability is based on the idea of interference alignment. Examples are also pr...
Load Scheduling and Dispatch for Aggregators of Plug-In Electric Vehicles This paper proposes an operating framework for aggregators of plug-in electric vehicles (PEVs). First, a minimum-cost load scheduling algorithm is designed, which determines the purchase of energy in the day-ahead market based on the forecast electricity price and PEV power demands. The same algorithm is applicable for negotiating bilateral contracts. Second, a dynamic dispatch algorithm is developed, used for distributing the purchased energy to PEVs on the operating day. Simulation results are used to evaluate the proposed algorithms, and to demonstrate the potential impact of an aggregated PEV fleet on the power system.
Online Coordinated Charging Decision Algorithm for Electric Vehicles Without Future Information The large-scale integration of plug-in electric vehicles (PEVs) to the power grid spurs the need for efficient charging coordination mechanisms. It can be shown that the optimal charging schedule smooths out the energy consumption over time so as to minimize the total energy cost. In practice, however, it is hard to smooth out the energy consumption perfectly, because the future PEV charging demand is unknown at the moment when the charging rate of an existing PEV needs to be determined. In this paper, we propose an online coordinated charging decision (ORCHARD) algorithm, which minimizes the energy cost without knowing the future information. Through rigorous proof, we show that ORCHARD is strictly feasible in the sense that it guarantees to fulfill all charging demands before due time. Meanwhile, it achieves the best known competitive ratio of 2.39. By exploiting the problem structure, we propose a novel reduced-complexity algorithm to replace the standard convex optimization techniques used in ORCHARD. Through extensive simulations, we show that the average performance gap between ORCHARD and the offline optimal solution, which utilizes the complete future information, is as small as 6.5%. By setting a proper speeding factor, the average performance gap can be further reduced to 5%.
Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations. In order to solve general time-varying linear matrix equations (LMEs) more efficiently, this paper proposes two nonlinear recurrent neural networks based on two nonlinear activation functions. According to Lyapunov theory, such two nonlinear recurrent neural networks are proved to be convergent within finite-time. Besides, by solving differential equation, the upper bounds of the finite convergence time are determined analytically. Compared with existing recurrent neural networks, the proposed two nonlinear recurrent neural networks have a better convergence property (i.e., the upper bound is lower), and thus the accurate solutions of general time-varying LMEs can be obtained with less time. At last, various different situations have been considered by setting different coefficient matrices of general time-varying LMEs and a great variety of computer simulations (including the application to robot manipulators) have been conducted to validate the better finite-time convergence of the proposed two nonlinear recurrent neural networks.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores: 1.057167, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.018575, 0, 0, 0, 0, 0, 0
Energy-Efficient UAV Communication With Trajectory Optimization. Wireless communication with unmanned aerial vehicles (UAVs) is a promising technology for future communication systems. In this paper, assuming that the UAV flies horizontally with a fixed altitude, we study energy-efficient UAV communication with a ground terminal via optimizing the UAV&#39;s trajectory, a new design paradigm that jointly considers both the communication throughput and the UAV&#39;s ener...
QoE-Driven Edge Caching in Vehicle Networks Based on Deep Reinforcement Learning The Internet of vehicles (IoV) is a large information interaction network that collects information on vehicles, roads and pedestrians. One of the important uses of vehicle networks is to meet the entertainment needs of driving users through communication between vehicles and roadside units (RSUs). Due to the limited storage space of RSUs, determining the content cached in each RSU is a key challenge. With the development of 5G and video editing technology, short video systems have become increasingly popular. Current widely used cache update methods, such as partial file precaching and content popularity- and user interest-based determination, are inefficient for such systems. To solve this problem, this paper proposes a QoE-driven edge caching method for the IoV based on deep reinforcement learning. First, a class-based user interest model is established. Compared with the traditional file popularity- and user interest distribution-based cache update methods, the proposed method is more suitable for systems with a large number of small files. Second, a quality of experience (QoE)-driven RSU cache model is established based on the proposed class-based user interest model. Third, a deep reinforcement learning method is designed to address the QoE-driven RSU cache update issue effectively. The experimental results verify the effectiveness of the proposed algorithm.
Multi-Hop Cooperative Computation Offloading for Industrial IoT–Edge–Cloud Computing Environments The concept of the industrial Internet of things (IIoT) is being widely applied to service provisioning in many domains, including smart healthcare, intelligent transportation, autopilot, and the smart grid. However, because of the IIoT devices’ limited onboard resources, supporting resource-intensive applications, such as 3D sensing, navigation, AI processing, and big-data analytics, remains a challenging task. In this paper, we study the multi-hop computation-offloading problem for the IIoT–edge–cloud computing model and adopt a game-theoretic approach to achieving Quality of service (QoS)-aware computation offloading in a distributed manner. First, we study the computation-offloading and communication-routing problems with the goal of minimizing each task's computation time and energy consumption, formulating the joint problem as a potential game in which the IIoT devices determine their computation-offloading strategies. Second, we apply a free–bound mechanism that can ensure a finite improvement path to a Nash equilibrium. Third, we propose a multi-hop cooperative-messaging mechanism and develop two QoS-aware distributed algorithms that can achieve the Nash equilibrium. Our simulation results show that our algorithms offer a stable performance gain for IIoT in various scenarios and scale well as the device size increases.
Artificial fish swarm algorithm: a survey of the state-of-the-art, hybridization, combinatorial and indicative applications AFSA (artificial fish swarm algorithm) is one of the best optimization methods among the swarm intelligence algorithms. This algorithm is inspired by the collective movement of fish and their various social behaviors. Based on a series of instinctive behaviors, the fish always try to maintain their colonies and accordingly demonstrate intelligent behaviors. Searching for food, immigration and dealing with dangers all happen in a social form, and interactions between all fish in a group result in an intelligent social behavior. This algorithm has many advantages, including high convergence speed, flexibility, fault tolerance and high accuracy. This paper is a review of the AFSA algorithm and describes the evolution of this algorithm along with all improvements, its combination with various methods as well as its applications. There are many optimization methods which have an affinity with this method, and such combinations improve its performance. Its disadvantages include high time complexity, lack of balance between global and local search, and failure to benefit from the experiences of group members for subsequent movements.
Placing Virtual Machines to Optimize Cloud Gaming Experience Optimizing cloud gaming experience is no easy task due to the complex tradeoff between gamer quality of experience (QoE) and provider net profit. We tackle the challenge and study an optimization problem to maximize the cloud gaming provider's total profit while achieving just-good-enough QoE. We conduct measurement studies to derive the QoE and performance models. We formulate and optimally solve the problem. The optimization problem has exponential running time, and we develop an efficient heuristic algorithm. We also present an alternative formulation and algorithms for closed cloud gaming services with dedicated infrastructures, where the profit is not a concern and overall gaming QoE needs to be maximized. We present a prototype system and testbed using off-the-shelf virtualization software, to demonstrate the practicality and efficiency of our algorithms. Our experience on realizing the testbed sheds some lights on how cloud gaming providers may build up their own profitable services. Last, we conduct extensive trace-driven simulations to evaluate our proposed algorithms. The simulation results show that the proposed heuristic algorithms: (i) produce close-to-optimal solutions, (ii) scale to large cloud gaming services with 20,000 servers and 40,000 gamers, and (iii) outperform the state-of-the-art placement heuristic, e.g., by up to 3.5 times in terms of net profits.
Coverage Protocols for Wireless Sensor Networks: Review and Future Directions. The coverage problem in wireless sensor networks (WSNs) can be generally defined as a measure of how effectively a network field is monitored by its sensor nodes. This problem has attracted a lot of interest over the years and as a result, many coverage protocols were proposed. In this survey, we first propose a taxonomy for classifying coverage protocols in WSNs. Then, we classify the coverage protocols into three categories (i.e., coverage-aware deployment protocols, sleep scheduling protocols for flat networks, and cluster-based sleep scheduling protocols) based on the network stage where the coverage is optimized. For each category, relevant protocols are thoroughly reviewed and classified based on the adopted coverage techniques. Finally, we discuss open issues (and recommend future directions to resolve them) associated with the design of realistic coverage protocols. Issues such as realistic sensing models, realistic energy consumption models, realistic connectivity models and sensor localization are covered.
TD-EUA - Task-Decomposable Edge User Allocation with QoE Optimization.
Optimal Edge User Allocation in Edge Computing with Variable Sized Vector Bin Packing. In mobile edge computing, edge servers are geographically distributed around base stations placed near end-users to provide highly accessible and efficient computing capacities and services. In the mobile edge computing environment, a service provider can deploy its service on hired edge servers to reduce end-to-end service delays experienced by its end-users allocated to those edge servers. An optimal deployment must maximize the number of allocated end-users and minimize the number of hired edge servers while ensuring the required quality of service for end-users. In this paper, we model the edge user allocation (EUA) problem as a bin packing problem, and introduce a novel, optimal approach to solving the EUA problem based on the Lexicographic Goal Programming technique. We have conducted three series of experiments to evaluate the proposed approach against two representative baseline approaches. Experimental results show that our approach significantly outperforms the other two approaches.
A vector-perturbation technique for near-capacity multiantenna multiuser communication-part I: channel inversion and regularization Recent theoretical results describing the sum capacity when using multiple antennas to communicate with multiple users in a known rich scattering environment have not yet been followed with practical transmission schemes that achieve this capacity. We introduce a simple encoding algorithm that achieves near-capacity at sum rates of tens of bits/channel use. The algorithm is a variation on channel inversion that regularizes the inverse and uses a "sphere encoder" to perturb the data to reduce the power of the transmitted signal. This work is comprised of two parts. In this first part, we show that while the sum capacity grows linearly with the minimum of the number of antennas and users, the sum rate of channel inversion does not. This poor performance is due to the large spread in the singular values of the channel matrix. We introduce regularization to improve the condition of the inverse and maximize the signal-to-interference-plus-noise ratio at the receivers. Regularization enables linear growth and works especially well at low signal-to-noise ratios (SNRs), but as we show in the second part, an additional step is needed to achieve near-capacity performance at all SNRs.
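A minimal numerical sketch of regularized channel inversion as summarized above (part I only; the sphere-encoder perturbation of part II is omitted). The regularization value K/SNR follows the commonly cited choice for this precoder; all numbers and the QPSK constellation are illustrative.

```python
# Regularized channel inversion precoding for K single-antenna users served by
# K transmit antennas. Part II's data perturbation step is omitted.
import numpy as np

rng = np.random.default_rng(0)
K, snr = 4, 10.0                       # users/antennas, per-user SNR (linear scale)
H = (rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))) / np.sqrt(2)
s = rng.choice([-1, 1], K) + 1j * rng.choice([-1, 1], K)   # QPSK data symbols

alpha = K / snr                        # commonly cited regularization choice
x = H.conj().T @ np.linalg.solve(H @ H.conj().T + alpha * np.eye(K), s)
x /= np.linalg.norm(x) / np.sqrt(K)    # normalize total transmit power

y = H @ x                              # noiseless received signal at each user
print(np.round(y / s, 2))              # per-user effective gains; alpha > 0 leaves
                                       # a small amount of residual interference
```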
A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms The interest in nonparametric statistical analysis has grown recently in the field of computational intelligence. In many experimental studies, the lack of the required properties for a proper application of parametric procedures–independence, normality, and homoscedasticity–yields to nonparametric ones the task of performing a rigorous comparison among algorithms.
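To illustrate the workflow advocated above, a short sketch using SciPy: a Friedman test across three algorithms over many problem instances, followed by a pairwise Wilcoxon signed-rank test. The score arrays are synthetic.

```python
# Nonparametric comparison of several algorithms over many problem instances:
# Friedman test first, then a pairwise Wilcoxon signed-rank post-hoc test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_problems = 30
alg_a = rng.normal(0.82, 0.05, n_problems)            # e.g., accuracy of algorithm A
alg_b = alg_a + rng.normal(0.02, 0.03, n_problems)    # slightly better on average
alg_c = rng.normal(0.80, 0.06, n_problems)

stat, p = stats.friedmanchisquare(alg_a, alg_b, alg_c)
print(f"Friedman: chi2={stat:.2f}, p={p:.4f}")        # do the algorithms differ at all?

w, p_ab = stats.wilcoxon(alg_a, alg_b)                # paired post-hoc comparison
print(f"Wilcoxon A vs B: W={w:.1f}, p={p_ab:.4f}")
```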
Connectedness Preserving Distributed Swarm Aggregation for Multiple Kinematic Robots A distributed swarm aggregation algorithm is developed for a team of multiple kinematic agents. Specifically, each agent is assigned a control law, which is the sum of two elements: a repulsive potential field, which is responsible for the collision avoidance objective, and an attractive potential field, which forces the agents to converge to a configuration where they are close to each other. Furthermore, the attractive potential field forces the agents that are initially located within the sensing radius of an agent to remain within this area for all time. In this way, the connectivity properties of the initially formed communication graph are rendered invariant for the trajectories of the closed-loop system. It is shown that under the proposed control law, agents converge to a configuration where each agent is located at a bounded distance from each of its neighbors. The results are also extended to the case of nonholonomic kinematic unicycle-type agents and to the case of dynamic edge addition. In the latter case, we derive a smaller bound in the swarm size than in the static case.
Joint Optimization of Source Precoding and Relay Beamforming in Wireless MIMO Relay Networks. This paper considers joint linear processing at multi-antenna sources and one multiple-input multiple-output (MIMO) relay station for both one-way and two-way relay-assisted wireless communications. One-way relaying is applicable in the scenario of downlink transmission by a multi-antenna base station to multiple single-antenna users with the help of one MIMO relay. In such a scenario, the objective of joint linear processing is to maximize the information throughput to the users. The design problem is equivalently formulated as the maximization of the worst signal-to-interference-plus-noise ratio (SINR) among all users subject to various transmission power constraints. Such a program of nonconvex objective optimization under nonconvex constraints is transformed into a canonical d.c. (difference of convex functions/sets) program of d.c. function optimization under convex constraints through nonconvex duality with zero duality gap. An efficient iterative algorithm is then applied to solve this canonical d.c. program. For the scenario of using one MIMO relay to assist two sources exchanging their information in a two-way relaying manner, the joint linear processing aims at either minimizing the maximum mean square error (MSE) or maximizing the total information throughput of the two sources. By applying tractable optimization for the linear minimum MSE estimator and d.c. programming, an iterative algorithm is developed to solve these two optimization problems. Extensive simulation results demonstrate that the proposed methods substantially outperform previously known joint optimization methods.
Explanations and Expectations: Trust Building in Automated Vehicles. Trust is a vital determinant of acceptance of automated vehicles (AVs) and expectations and explanations are often at the heart of any trusting relationship. Once expectations have been violated, explanations are needed to mitigate the damage. This study introduces the importance of timing of explanations in promoting trust in AVs. We present the preliminary results of a within-subjects experimental study involving eight participants exposed to four AV driving conditions (i.e. 32 data points). Preliminary results show a pattern that suggests that explanations provided before the AV takes actions promote more trust than explanations provided afterward.
Higher Order Tensor Decomposition For Proportional Myoelectric Control Based On Muscle Synergies Muscle synergies have recently been utilised in myoelectric control systems. Thus far, all proposed synergy-based systems rely on matrix factorisation methods. However, this is limited in terms of task-dimensionality. Here, the potential application of higher-order tensor decomposition as a framework for proportional myoelectric control is demonstrated. A novel constrained Tucker decomposition (consTD) technique of synergy extraction is proposed for a synergy-based myoelectric control model and compared with state-of-the-art matrix factorisation models. The extracted synergies were used to estimate control signals for the wrist's Degrees of Freedom (DoF) through direct projection. The consTD model was able to estimate the control signals for each DoF by utilising all data in one 3rd-order tensor. This is in contrast with matrix factorisation models, where data are segmented for each DoF and the synergies often have to be realigned. Moreover, the consTD method offers more information by providing additional shared synergies, unlike matrix factorisation methods. The extracted control signals were fed to a ridge regression to estimate the wrist's kinematics based on real glove data. The coefficient of determination (R^2) for the reconstructed wrist position was higher for the proposed consTD than for the matrix factorisation methods. In sum, this study provides the first proof of concept for the use of higher-order tensor decomposition in proportional myoelectric control, and it highlights the potential of tensors to provide an objective and direct approach to identify synergies.
Scores: 1.02121, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.01, 0.000006, 0, 0, 0, 0, 0
Adaptive Backstepping Control of Nonlinear Uncertain Systems with Quantized States This paper investigates the stabilization problem for uncertain nonlinear systems with quantized states. All states in the system are quantized by a static bounded quantizer, including uniform quantizer, hysteresis-uniform quantizer, and logarithmic-uniform quantizer as examples. An adaptive backstepping-based control algorithm, which can handle discontinuity, resulted from the state quantization ...
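For concreteness, a tiny sketch of a static bounded uniform quantizer of the kind the entry above assumes for the states; the step size and saturation level are illustrative choices, not taken from the paper.

```python
# Static, bounded uniform quantizer: nearest multiple of `delta`, saturated at +/- M.
import numpy as np

def uniform_quantizer(x, delta=0.1, M=5.0):
    """Map x to the nearest multiple of `delta`, clipped to the bounded range [-M, M]."""
    return np.clip(delta * np.round(np.asarray(x) / delta), -M, M)

print(uniform_quantizer([0.04, 0.26, -7.3]))   # [ 0.   0.3 -5. ]
```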
Adaptive Neural Quantized Control for a Class of MIMO Switched Nonlinear Systems With Asymmetric Actuator Dead-Zone. This paper concentrates on the adaptive state-feedback quantized control problem for a class of multiple-input-multiple-output (MIMO) switched nonlinear systems with unknown asymmetric actuator dead-zone. In this study, we employ different quantizers for different subsystem inputs. The main challenge of this study is to deal with the coupling between the quantizers and the dead-zone nonlinearities...
Observer-based Fuzzy Adaptive Inverse Optimal Output Feedback Control for Uncertain Nonlinear Systems In this article, an observer-based fuzzy adaptive inverse optimal output feedback control problem is studied for a class of nonlinear systems in strict-feedback form. The considered nonlinear systems contain unknown nonlinear dynamics and their states are not measured directly. Fuzzy logic systems are applied to identify the unknown nonlinear dynamics and an auxiliary nonlinear system is construct...
Control of nonlinear systems under dynamic constraints: A unified barrier function-based approach. Although there are fruitful results on adaptive control of constrained parametric/nonparametric strict-feedback nonlinear systems, most of them are contingent upon “feasibility conditions”, and/or are only applicable to constant and symmetric constraints. In this work, we present a robust adaptive control solution free from “feasibility conditions” and capable of accommodating much more general dynamic constraints. In our design, instead of employing the commonly used piecewise Barrier Lyapunov Function (BLF), we build a unified barrier function upon the constrained states, with which we convert the original constrained nonlinear system into an equivalent “non-constrained” one. Then by stabilizing the “unconstrained” system, the asymmetric state constraints imposed dynamically are handled gracefully. By blending a new coordinate transformation into the backstepping design, we develop a control strategy completely obviating the “feasibility conditions” for the system. It is worth noting that the requirement on the constraints to be obeyed herein is much less restrictive as compared to those imposed in most existing methods, rendering the resultant control less demanding in design and more user-friendly in implementation. Both theoretical analysis and numerical simulation verify the effectiveness and benefits of the proposed method.
Adaptive Asymptotic Tracking With Global Performance for Nonlinear Systems With Unknown Control Directions This article presents a global adaptive asymptotic tracking control method, capable of guaranteeing prescribed transient behavior for uncertain strict-feedback nonlinear systems with arbitrary relative degree and unknown control directions. Unlike most existing funnel controls that are built upon time-varying feedback gains, the proposed method is derived from a tracking error-dependent normalized...
Event-Triggered Fuzzy Bipartite Tracking Control for Network Systems Based on Distributed Reduced-Order Observers This article studies the distributed observer-based event-triggered bipartite tracking control problem for stochastic nonlinear multiagent systems with input saturation. First, different from conventional observers, we construct a novel distributed reduced-order observer to estimate unknown states for the stochastic nonlinear systems. Then, an event-triggered mechanism with relative threshold is i...
Hamming Embedding and Weak Geometric Consistency for Large Scale Image Search This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy.
Microsoft COCO: Common Objects In Context We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
The Whale Optimization Algorithm. The Whale Optimization Algorithm inspired by humpback whales is proposed. The WOA algorithm is benchmarked on 29 well-known test functions. The results on the unimodal functions show the superior exploitation of WOA. The exploration ability of WOA is confirmed by the results on multimodal functions. The results on structural design problems confirm the performance of WOA in practice. This paper proposes a novel nature-inspired meta-heuristic optimization algorithm, called Whale Optimization Algorithm (WOA), which mimics the social behavior of humpback whales. The algorithm is inspired by the bubble-net hunting strategy. WOA is tested with 29 mathematical optimization problems and 6 structural design problems. Optimization results prove that the WOA algorithm is very competitive compared to state-of-the-art meta-heuristic algorithms as well as conventional methods. The source codes of the WOA algorithm are publicly available at http://www.alimirjalili.com/WOA.html
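The sketch below implements the canonical WOA update rules as commonly described (encircling the best solution, random search, and the spiral bubble-net move), minimizing the sphere function. Population size, iteration count, and bounds are illustrative and untuned.

```python
# Compact sketch of the canonical WOA update rules on the sphere function.
import numpy as np

def woa(f, dim=5, n_whales=20, iters=200, lb=-10.0, ub=10.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_whales, dim))
    fitness = np.apply_along_axis(f, 1, X)
    best, best_f = X[np.argmin(fitness)].copy(), fitness.min()
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters                 # linearly decreases from 2 to 0
        for i in range(n_whales):
            r1, r2 = rng.random(), rng.random()
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                if abs(A) < 1:                    # encircle the current best whale
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                             # explore around a random whale
                    rand = X[rng.integers(n_whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                                 # logarithmic spiral (bubble net)
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
            fi = f(X[i])
            if fi < best_f:
                best_f, best = fi, X[i].copy()
    return best, best_f

best, best_f = woa(lambda x: np.sum(x ** 2))
print(best_f)   # should be close to 0 for the sphere function
```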
Collaborative privacy management The landscape of the World Wide Web with all its versatile services heavily relies on the disclosure of private user information. Unfortunately, the growing amount of personal data collected by service providers poses a significant privacy threat for Internet users. Targeting growing privacy concerns of users, privacy-enhancing technologies emerged. One goal of these technologies is the provision of tools that facilitate a more informative decision about personal data disclosures. A famous PET representative is the PRIME project that aims for a holistic privacy-enhancing identity management system. However, approaches like the PRIME privacy architecture require service providers to change their server infrastructure and add specific privacy-enhancing components. In the near future, service providers are not expected to alter internal processes. Addressing the dependency on service providers, this paper introduces a user-centric privacy architecture that enables the provider-independent protection of personal data. A central component of the proposed privacy infrastructure is an online privacy community, which facilitates the open exchange of privacy-related information about service providers. We characterize the benefits and the potentials of our proposed solution and evaluate a prototypical implementation.
On controller initialization in multivariable switching systems We consider a class of switched systems which consists of a linear MIMO and possibly unstable process in feedback interconnection with a multicontroller whose dynamics switch. It is shown how one can achieve significantly better transient performance by selecting the initial condition for every controller when it is inserted into the feedback loop. This initialization is obtained by performing the minimization of a quadratic cost function of the tracking error, controlled output, and control signal. We guarantee input-to-state stability of the closed-loop system when the average number of switches per unit of time is smaller than a specific value. If this is not the case then stability can still be achieved by adding a mild constraint to the optimization. We illustrate the use of our results in the control of a flexible beam actuated in torque. This system is unstable with two poles at the origin and contains several lightly damped modes, which can be easily excited by controller switching.
Completely Pinpointing the Missing RFID Tags in a Time-Efficient Way Radio Frequency Identification (RFID) technology has been widely used in inventory management in many scenarios, e.g., warehouses, retail stores, hospitals, etc. This paper investigates a challenging problem of complete identification of missing tags in large-scale RFID systems. Although this problem has attracted extensive attention from academy and industry, the existing work can hardly satisfy the stringent real-time requirements. In this paper, a Slot Filter-based Missing Tag Identification (SFMTI) protocol is proposed to reconcile some expected collision slots into singleton slots and filter out the expected empty slots as well as the unreconcilable collision slots, thereby achieving the improved time-efficiency. The theoretical analysis is conducted to minimize the execution time of the proposed SFMTI. We then propose a cost-effective method to extend SFMTI to the multi-reader scenarios. The extensive simulation experiments and performance results demonstrate that the proposed SFMTI protocol outperforms the most promising Iterative ID-free Protocol (IIP) by reducing nearly 45% of the required execution time, and is just within a factor of 1.18 from the lower bound of the minimum execution time.
A robust medical image watermarking against salt and pepper noise for brain MRI images. The ever-growing numbers of medical digital images and the need to share them among specialists and hospitals for better and more accurate diagnosis require that patients' privacy be protected. During the transmission of medical images between hospitals or specialists through the network, the main priority is to protect a patient's documents against any act of tampering by unauthorised individuals. Because of this, there is a need for medical image authentication scheme to enable proper diagnosis on patient. In addition, medical images are also susceptible to salt and pepper impulse noise through the transmission in communication channels. This noise may also be intentionally used by the invaders to corrupt the embedded watermarks inside the medical images. A common drawback of existing watermarking methods is their weakness against salt and pepper noise. The research carried out in this work addresses the issue of designing a new watermarking method that can withstand high density of salt and pepper noise for brain MRI images. For this purpose, combination of a spatial domain watermarking method, channel coding and noise filtering schemes are used. The region of non-interest (RONI) of MRI images from five different databases are used as embedding area and electronic patient record (EPR) is considered as embedded data. The quality of watermarked image is evaluated using Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), and the accuracy of the extracted watermark is assessed in terms of Bit Error Rate (BER).
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
score_0 to score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.028571, 0, 0, 0, 0, 0, 0, 0, 0
Multi-stream CNN: Learning representations based on human-related regions for action recognition. •Presenting a multi-stream CNN architecture to incorporate multiple complementary features trained in appearance and motion networks.•Demonstrating that using full-frame, human body, and motion-salient body part regions together is effective to improve recognition performance.•Proposing methods to detect the actor and motion-salient body part precisely.•Verifying that high-quality flow is critically important to learn accurate video representations for action recognition.
Analysing user physiological responses for affective video summarisation. Video summarisation techniques aim to abstract the most significant content from a video stream. This is typically achieved by processing low-level image, audio and text features which are still quite disparate from the high-level semantics that end users identify with (the ‘semantic gap’). Physiological responses are potentially rich indicators of memorable or emotionally engaging video content for a given user. Consequently, we investigate whether they may serve as a suitable basis for a video summarisation technique by analysing a range of user physiological response measures, specifically electro-dermal response (EDR), respiration amplitude (RA), respiration rate (RR), blood volume pulse (BVP) and heart rate (HR), in response to a range of video content in a variety of genres including horror, comedy, drama, sci-fi and action. We present an analysis framework for processing the user responses to specific sub-segments within a video stream based on percent rank value normalisation. The application of the analysis framework reveals that users respond significantly to the most entertaining video sub-segments in a range of content domains. Specifically, horror content seems to elicit significant EDR, RA, RR and BVP responses, and comedy content elicits comparatively lower levels of EDR, but does seem to elicit significant RA, RR, BVP and HR responses. Drama content seems to elicit less significant physiological responses in general, and both sci-fi and action content seem to elicit significant EDR responses. We discuss the implications this may have for future affective video summarisation approaches.
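The percent-rank normalisation mentioned in the abstract above can be illustrated with a short sketch. This is a minimal, hypothetical example (the signal values, the segment boundaries and the exact normalisation formula are assumptions for illustration), not the authors' pipeline:

```python
import numpy as np
from scipy.stats import rankdata

def percent_rank(values):
    """Map each sample to its percentile rank in [0, 1] within the recording."""
    ranks = rankdata(values)              # average ranks, 1..n
    return (ranks - 1) / (len(values) - 1)

# Toy electro-dermal response (EDR) trace and per-segment scoring
edr = np.array([0.31, 0.35, 0.90, 1.20, 0.40, 0.38, 1.10, 0.33])
pr = percent_rank(edr)

# Score a sub-segment (here: samples 2..4) by its mean percent rank
segment_score = pr[2:5].mean()
print(pr.round(2), round(float(segment_score), 2))
```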
On the roles of eye gaze and head dynamics in predicting driver's intent to change lanes Driver behavioral cues may present a rich source of information and feedback for future intelligent advanced driver-assistance systems (ADASs). With the design of a simple and robust ADAS in mind, we are interested in determining the most important driver cues for distinguishing driver intent. Eye gaze may provide a more accurate proxy than head movement for determining driver attention, whereas the measurement of head motion is less cumbersome and more reliable in harsh driving conditions. We use a lane-change intent-prediction system (McCall et al., 2007) to determine the relative usefulness of each cue for determining intent. Various combinations of input data are presented to a discriminative classifier, which is trained to output a prediction of probable lane-change maneuver at a particular point in the future. Quantitative results from a naturalistic driving study are presented and show that head motion, when combined with lane position and vehicle dynamics, is a reliable cue for lane-change intent prediction. The addition of eye gaze does not improve performance as much as simpler head dynamics cues. The advantage of head data over eye data is shown to be statistically significant (p
Detection of Driver Fatigue Caused by Sleep Deprivation This paper aims to provide reliable indications of driver drowsiness based on the characteristics of driver-vehicle interaction. A test bed was built under a simulated driving environment, and a total of 12 subjects participated in two experiment sessions requiring different levels of sleep (partial sleep-deprivation versus no sleep-deprivation) before the experiment. The performance of the subjects was analyzed in a series of stimulus-response and routine driving tasks, which revealed the performance differences of drivers under different sleep-deprivation levels. The experiments further demonstrated that sleep deprivation had greater effect on rule-based than on skill-based cognitive functions: when drivers were sleep-deprived, their performance of responding to unexpected disturbances degraded, while they were robust enough to continue the routine driving tasks such as lane tracking, vehicle following, and lane changing. In addition, we presented both qualitative and quantitative guidelines for designing drowsy-driver detection systems in a probabilistic framework based on the paradigm of Bayesian networks. Temporal aspects of drowsiness and individual differences of subjects were addressed in the framework.
Online Prediction of Driver Distraction Based on Brain Activity Patterns This paper presents a new computational framework for early detection of driver distractions (map viewing) using brain activity measured by electroencephalographic (EEG) signals. Compared with most studies in the literature, which are mainly focused on the classification of distracted and nondistracted periods, this study proposes a new framework to prospectively predict the start and end of a distraction period, defined by map viewing. The proposed prediction algorithm was tested on a data set of continuous EEG signals recorded from 24 subjects. During the EEG recordings, the subjects were asked to drive from an initial position to a destination using a city map in a simulated driving environment. The overall accuracy values for the prediction of the start and the end of map viewing were 81% and 70%, respectively. The experimental results demonstrated that the proposed algorithm can predict the start and end of map viewing with relatively high accuracy and can be generalized to individual subjects. The outcome of this study has a high potential to improve the design of future intelligent navigation systems. Prediction of the start of map viewing can be used to provide route information based on a driver's needs and consequently avoid map-viewing activities. Prediction of the end of map viewing can be used to provide warnings for potential long map-viewing durations. Further development of the proposed framework and its applications in driver-distraction predictions are also discussed.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
A CRNN module for hand pose estimation. •The input is no longer a single frame, but a sequence of several adjacent frames.•A CRNN module is proposed, which is basically the same as the standard RNN, except that it uses convolutional connection.•When the difference in the feature image of a certain layer is large, it is better to add CRNN / RNN after this layer.•Our method has the lowest error of output compared to the current state-of-the-art methods.
Deep convolutional neural network-based Bernoulli heatmap for head pose estimation Head pose estimation is a crucial problem for many tasks, such as driver attention, fatigue detection, and human behaviour analysis. It is well known that neural networks are better at handling classification problems than regression problems. It is an extremely nonlinear process to let the network output the angle value directly for optimization learning, and the weight constraint of the loss function will be relatively weak. This paper proposes a novel Bernoulli heatmap for head pose estimation from a single RGB image. Our method can achieve the positioning of the head area while estimating the angles of the head. The Bernoulli heatmap makes it possible to construct fully convolutional neural networks without fully connected layers and provides a new idea for the output form of head pose estimation. A deep convolutional neural network (CNN) structure with multiscale representations is adopted to maintain high-resolution information and low-resolution information in parallel. This kind of structure can maintain rich, high-resolution representations. In addition, channelwise fusion is adopted to make the fusion weights learnable instead of simple addition with equal weights. As a result, the estimation is spatially more precise and potentially more accurate. The effectiveness of the proposed method is empirically demonstrated by comparing it with other state-of-the-art methods on public datasets.
Reinforcement learning based data fusion method for multi-sensors In order to improve detection system robustness and reliability, multi-sensor fusion is used in modern air combat. In this paper, a data fusion method based on reinforcement learning is developed for multi-sensor systems. Initially, cubic B-spline interpolation is used to solve the time-alignment problem of multisource data. Then, the reinforcement learning based data fusion (RLBDF) method is proposed to obtain the fusion results. When prior knowledge of the target is available, fusion accuracy is reinforced using the error between the fused value and the actual value. When prior knowledge cannot be obtained, the Fisher information is used as the reward instead. Simulation results verify that the developed method is feasible and effective for multi-sensor data fusion in air combat.
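As a rough illustration of the time-alignment step described above, the following sketch resamples two unaligned sensor streams onto a common time grid with cubic B-splines via SciPy. The timestamps, values and the simple averaging at the end are invented for the example; this is not the RLBDF method itself:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Two sensors report the same quantity on different, unaligned time stamps.
t_a = np.array([0.00, 0.11, 0.19, 0.31, 0.42, 0.50])
x_a = np.array([1.0, 1.2, 1.1, 1.6, 1.9, 2.1])
t_b = np.array([0.05, 0.15, 0.25, 0.35, 0.45])
x_b = np.array([0.9, 1.0, 1.3, 1.7, 2.0])

# Cubic B-spline models of each stream, resampled on a common time grid.
t_common = np.linspace(0.05, 0.45, 9)
x_a_aligned = make_interp_spline(t_a, x_a, k=3)(t_common)
x_b_aligned = make_interp_spline(t_b, x_b, k=3)(t_common)

# The aligned samples can now be fused (here: a plain average as a stand-in).
fused = 0.5 * (x_a_aligned + x_b_aligned)
print(np.column_stack([t_common, fused]).round(3))
```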
Mobile Edge Computing Enabled 5G Health Monitoring for Internet of Medical Things: A Decentralized Game Theoretic Approach The prompt evolution of Internet of Medical Things (IoMT) promotes pervasive in-home health monitoring networks. However, excessive requirements of patients result in insufficient spectrum resources and communication overload. Mobile Edge Computing (MEC) enabled 5G health monitoring is conceived as a favorable paradigm to tackle such an obstacle. In this paper, we construct a cost-efficient in-home health monitoring system for IoMT by dividing it into two sub-networks, i.e., intra-Wireless Body Area Networks (WBANs) and beyond-WBANs. Highlighting the characteristics of IoMT, the cost of patients depends on medical criticality, Age of Information (AoI) and energy consumption. For intra-WBANs, a cooperative game is formulated to allocate the wireless channel resources. While for beyond-WBANs, considering the individual rationality and potential selfishness, a decentralized non-cooperative game is proposed to minimize the system-wide cost in IoMT. We prove that the proposed algorithm can reach a Nash equilibrium. In addition, the upper bound of the algorithm time complexity and the number of patients benefiting from MEC is theoretically derived. Performance evaluations demonstrate the effectiveness of our proposed algorithm with respect to the system-wide cost and the number of patients benefiting from MEC.
Artificial fish swarm algorithm: a survey of the state-of-the-art, hybridization, combinatorial and indicative applications AFSA (artificial fish-swarm algorithm) is one of the best optimization methods among the swarm intelligence algorithms. This algorithm is inspired by the collective movement of fish and their various social behaviors. Based on a series of instinctive behaviors, the fish always try to maintain their colonies and accordingly demonstrate intelligent behaviors. Searching for food, migration and dealing with dangers all happen in a social form, and interactions between all fish in a group result in intelligent social behavior. This algorithm has many advantages, including high convergence speed, flexibility, fault tolerance and high accuracy. This paper is a review of the AFSA algorithm and describes the evolution of this algorithm along with all its improvements, its combination with various methods, as well as its applications. Many optimization methods have an affinity with this method, and combining them can improve its performance. Its disadvantages include high time complexity, a lack of balance between global and local search, and a failure to benefit from the experiences of group members for subsequent movements.
Short-Term Traffic Flow Forecasting: An Experimental Comparison of Time-Series Analysis and Supervised Learning The literature on short-term traffic flow forecasting has undergone great development recently. Many works, describing a wide variety of different approaches, which very often share similar features and ideas, have been published. However, publications presenting new prediction algorithms usually employ different settings, data sets, and performance measurements, making it difficult to infer a clear picture of the advantages and limitations of each model. The aim of this paper is twofold. First, we review existing approaches to short-term traffic flow forecasting methods under the common view of probabilistic graphical models, presenting an extensive experimental comparison, which proposes a common baseline for their performance analysis and provides the infrastructure to operate on a publicly available data set. Second, we present two new support vector regression models, which are specifically devised to benefit from typical traffic flow seasonality and are shown to represent an interesting compromise between prediction accuracy and computational efficiency. The SARIMA model coupled with a Kalman filter is the most accurate model; however, the proposed seasonal support vector regressor turns out to be highly competitive when performing forecasts during the most congested periods.
TSCA: A Temporal-Spatial Real-Time Charging Scheduling Algorithm for On-Demand Architecture in Wireless Rechargeable Sensor Networks. The collaborative charging issue in Wireless Rechargeable Sensor Networks (WRSNs) is a popular research problem. With the help of wireless power transfer technology, electrical energy can be transferred from wireless charging vehicles (WCVs) to sensors, providing a new paradigm to prolong network lifetime. Existing techniques on collaborative charging usually take the periodical and deterministic approach, but neglect influences of non-deterministic factors such as topological changes and node failures, making them unsuitable for large-scale WRSNs. In this paper, we develop a temporal-spatial charging scheduling algorithm, namely TSCA, for the on-demand charging architecture. We aim to minimize the number of dead nodes while maximizing energy efficiency to prolong network lifetime. First, after gathering charging requests, a WCV will compute a feasible movement solution. A basic path planning algorithm is then introduced to adjust the charging order for better efficiency. Furthermore, optimizations are made in a global level. Then, a node deletion algorithm is developed to remove low efficient charging nodes. Lastly, a node insertion algorithm is executed to avoid the death of abandoned nodes. Extensive simulations show that, compared with state-of-the-art charging scheduling algorithms, our scheme can achieve promising performance in charging throughput, charging efficiency, and other performance metrics.
A novel adaptive dynamic programming based on tracking error for nonlinear discrete-time systems In this paper, to eliminate the tracking error when using adaptive dynamic programming (ADP) algorithms, a novel formulation of the value function is presented for the optimal tracking problem (TP) of nonlinear discrete-time systems. Unlike existing ADP methods, this formulation introduces the control input into the tracking error and directly ignores the quadratic form of the control input, which makes the boundedness and convergence of the value function independent of the discount factor. Based on the proposed value function, the optimal control policy can be deduced without considering the reference control input. Value iteration (VI) and policy iteration (PI) methods are applied to prove the optimality of the obtained control policy, and the monotonicity and convergence of the iterative value function are derived. Simulation examples realized with neural networks and the actor–critic structure are provided to verify the effectiveness of the proposed ADP algorithm.
score_0 to score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0
Fuzzy Rough Sets And Fuzzy Rough Neural Networks For Feature Selection: A Review Feature selection aims to select a feature subset from an original feature set based on a certain evaluation criterion. Since feature selection can achieve efficient feature reduction, it has become a key method for data preprocessing in many data mining tasks. Recently, many feature selection strategies have been developed since in most cases it is infeasible to obtain an optimal/reduced feature subset by using exhaustive search. Among these strategies, fuzzy rough set theory has proved to be an ideal candidate for dealing with uncertain information. This article provides a comprehensive review on the fuzzy rough set theory and two fuzzy rough set theory based feature selection methods, that is, fuzzy rough set based feature selection methods and fuzzy rough neural network based feature selection methods. We review the publications related to the fuzzy rough theory and its applications in feature selection. In addition, the challenges in the two types of feature selection methods are also discussed.This article is categorized under:Technologies > Machine Learning
Fast learning neural networks using Cartesian genetic programming A fast learning neuroevolutionary algorithm for both feedforward and recurrent networks is proposed. The method is inspired by the well known and highly effective Cartesian genetic programming (CGP) technique. The proposed method is called the CGP-based Artificial Neural Network (CGPANN). The basic idea is to replace each computational node in CGP with an artificial neuron, thus producing an artificial neural network. The capabilities of CGPANN are tested in two diverse problem domains. Firstly, it has been tested on a standard benchmark control problem: single and double pole for both Markovian and non-Markovian cases. Results demonstrate that the method can generate effective neural architectures in substantially fewer evaluations in comparison to previously published neuroevolutionary techniques. In addition, the evolved networks show improved generalization and robustness in comparison with other techniques. Secondly, we have explored the capabilities of CGPANNs for the diagnosis of Breast Cancer from the FNA (Finite Needle Aspiration) data samples. The results demonstrate that the proposed algorithm gives 99.5% accurate results, thus making it an excellent choice for pattern recognitions in medical diagnosis, owing to its properties of fast learning and accuracy. The power of a CGP based ANN is its representation which leads to an efficient evolutionary search of suitable topologies. This opens new avenues for applying the proposed technique to other linear/non-linear and Markovian/non-Markovian control and pattern recognition problems.
Designing adaptive humanoid robots through the FARSA open-source framework We introduce FARSA, an open-source Framework for Autonomous Robotics Simulation and Analysis, that allows us to easily set up and carry on adaptive experiments involving complex robot/environmental models. Moreover, we show how a simulated iCub robot can be trained, through an evolutionary algorithm, to display reaching and integrated reaching and grasping behaviours. The results demonstrate how the use of an implicit selection criterion, estimating the extent to which the robot is able to produce the expected outcome without specifying the manner through which the action should be realized, is sufficient to develop the required capabilities despite the complexity of the robot and of the task.
A Hierarchical Fused Fuzzy Deep Neural Network for Data Classification. Deep learning (DL) is an emerging and powerful paradigm that allows large-scale task-driven feature learning from big data. However, typical DL is a fully deterministic model that sheds no light on data uncertainty reductions. In this paper, we show how to introduce the concepts of fuzzy learning into DL to overcome the shortcomings of fixed representation. The bulk of the proposed fuzzy system is...
Deep Learning With Edge Computing: A Review Deep learning is currently widely used in a variety of applications, including computer vision and natural language processing. End devices, such as smartphones and Internet-of-Things sensors, are generating data that need to be analyzed in real time using deep learning or used to train deep learning models. However, deep learning inference and training require substantial computation resources to run quickly. Edge computing, where a fine mesh of compute nodes are placed close to end devices, is a viable way to meet the high computation and low-latency requirements of deep learning on edge devices and also provides additional benefits in terms of privacy, bandwidth efficiency, and scalability. This paper aims to provide a comprehensive review of the current state of the art at the intersection of deep learning and edge computing. Specifically, it will provide an overview of applications where deep learning is used at the network edge, discuss various approaches for quickly executing deep learning inference across a combination of end devices, edge servers, and the cloud, and describe the methods for training deep learning models across multiple edge devices. It will also discuss open challenges in terms of systems performance, network technologies and management, benchmarks, and privacy. The reader will take away the following concepts from this paper: understanding scenarios where deep learning at the network edge can be useful, understanding common techniques for speeding up deep learning inference and performing distributed training on edge devices, and understanding recent trends and opportunities.
Weighted Rendezvous Planning on Q-Learning Based Adaptive Zone Partition with PSO Based Optimal Path Selection The wireless sensor network (WSN) has become a highly active research area. Various approaches have been proposed for reducing sensor nodes' energy consumption with a mobile sink in WSNs. However, such approaches depend on the path selected by the mobile sink, since all sensed data should be gathered within the given time constraint. Therefore, in this article, the problem of optimal path selection is solved when multiple mobile sinks are considered in a WSN. In the initial stage, a Q-learning based Adaptive Zone Partition method is applied to split the network into smaller zones. In each zone, the location and residual energy of nodes are transmitted to the mobile sinks through a Mobile Anchor. Moreover, Weighted Rendezvous Planning is proposed to assign a weight to every node according to its hop distance. The collected data packets are transmitted to the mobile sink node within the given delay bound by means of a designated set of rendezvous points (RPs). Then, an optimal path from the RPs to the mobile sink is selected using the particle swarm optimization algorithm applied during the routing process. Experimental results demonstrate the effectiveness of the proposed approach, where network lifetime is increased by reducing energy consumption in multihop transmission.
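The "Q-learning based" part of schemes like the one above rests on the standard tabular update rule. The sketch below shows only that generic rule with made-up states, actions and rewards; it is not the paper's zone-partition logic:

```python
import numpy as np

# Tabular Q-learning update, the generic rule behind "Q-learning based" schemes.
n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9   # learning rate and discount factor (illustrative values)

def q_update(s, a, r, s_next):
    """One update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

# A hypothetical transition: in state 0, action 1 yields reward 1.0 and leads to state 2.
q_update(s=0, a=1, r=1.0, s_next=2)
print(Q)
```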
Distinctive Image Features from Scale-Invariant Keypoints This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
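The keypoint detection and matching pipeline summarized above is exposed by OpenCV. A minimal sketch using SIFT detection and Lowe's ratio test follows; it assumes opencv-python 4.4 or later, and the two image file names are placeholders, not files referenced by the paper:

```python
import cv2

# Detect SIFT keypoints in two views and keep matches passing Lowe's ratio test.
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher()                  # L2 norm by default, suitable for SIFT
candidates = matcher.knnMatch(des1, des2, k=2)

# Keep a match only if the best candidate is clearly better than the second best.
good = [pair[0] for pair in candidates
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
print(len(kp1), len(kp2), len(good))
```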
An introduction to ROC analysis Receiver operating characteristics (ROC) graphs are useful for organizing classifiers and visualizing their performance. ROC graphs are commonly used in medical decision making, and in recent years have been used increasingly in machine learning and data mining research. Although ROC graphs are apparently simple, there are some common misconceptions and pitfalls when using them in practice. The purpose of this article is to serve as an introduction to ROC graphs and as a guide for using them in research.
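A minimal sketch of how an ROC curve and its area can be computed from raw classifier scores (assuming binary labels and, for simplicity, no tied scores):

```python
import numpy as np

def roc_points(scores, labels):
    """Sweep a threshold over classifier scores and return (FPR, TPR) pairs."""
    order = np.argsort(-np.asarray(scores))            # descending score
    labels = np.asarray(labels)[order]
    P = labels.sum()
    N = len(labels) - P
    tpr = np.concatenate([[0.0], np.cumsum(labels) / P])
    fpr = np.concatenate([[0.0], np.cumsum(1 - labels) / N])
    return fpr, tpr

def auc(fpr, tpr):
    """Area under the ROC curve via the trapezoidal rule."""
    return float(np.trapz(tpr, fpr))

scores = [0.9, 0.8, 0.7, 0.55, 0.4, 0.3]   # toy scores
labels = [1, 1, 0, 1, 0, 0]                # 1 = positive class
fpr, tpr = roc_points(scores, labels)
print(round(auc(fpr, tpr), 3))             # 0.889 for this toy data
```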
A Comprehensive Survey on Internet of Things (IoT) Toward 5G Wireless Systems Recently, wireless technologies have been growing actively all around the world. In the context of wireless technology, fifth-generation (5G) technology has become a most challenging and interesting topic in wireless research. This article provides an overview of the Internet of Things (IoT) in 5G wireless systems. IoT in the 5G system will be a game changer in the future generation. It will open a door for new wireless architecture and smart services. Recent cellular network LTE (4G) will not be sufficient and efficient to meet the demands of multiple device connectivity and high data rate, more bandwidth, low-latency quality of service (QoS), and low interference. To address these challenges, we consider 5G as the most promising technology. We provide a detailed overview of challenges and vision of various communication industries in 5G IoT systems. The different layers in 5G IoT systems are discussed in detail. This article provides a comprehensive review on emerging and enabling technologies related to the 5G system that enables IoT. We consider the technology drivers for 5G wireless technology, such as 5G new radio (NR), multiple-input–multiple-output antenna with the beamformation technology, mm-wave commutation technology, heterogeneous networks (HetNets), the role of augmented reality (AR) in IoT, which are discussed in detail. We also provide a review on low-power wide-area networks (LPWANs), security challenges, and its control measure in the 5G IoT scenario. This article introduces the role of AR in the 5G IoT scenario. This article also discusses the research gaps and future directions. The focus is also on application areas of IoT in 5G systems. We, therefore, outline some of the important research directions in 5G IoT.
Space-time super-resolution. We propose a method for constructing a video sequence of high space-time resolution by combining information from multiple low-resolution video sequences of the same dynamic scene. Super-resolution is performed simultaneously in time and in space. By "temporal super-resolution," we mean recovering rapid dynamic events that occur faster than regular frame-rate. Such dynamic events are not visible (or else are observed incorrectly) in any of the input sequences, even if these are played in "slow-motion." The spatial and temporal dimensions are very different in nature, yet are interrelated. This leads to interesting visual trade-offs in time and space and to new video applications. These include: 1) treatment of spatial artifacts (e.g., motion-blur) by increasing the temporal resolution and 2) combination of input sequences of different space-time resolutions (e.g., NTSC, PAL, and even high quality still images) to generate a high quality video sequence. We further analyze and compare characteristics of temporal super-resolution to those of spatial super-resolution. These include: How many video cameras are needed to obtain increased resolution? What is the upper bound on resolution improvement via super-resolution? What is the temporal analogue to the spatial "ringing" effect?
Data-Driven Intelligent Transportation Systems: A Survey For the last two decades, intelligent transportation systems (ITS) have emerged as an efficient way of improving the performance of transportation systems, enhancing travel security, and providing more choices to travelers. A significant change in ITS in recent years is that much more data are collected from a variety of sources and can be processed into various forms for different stakeholders. The availability of a large amount of data can potentially lead to a revolution in ITS development, changing an ITS from a conventional technology-driven system into a more powerful multifunctional data-driven intelligent transportation system (D2ITS) : a system that is vision, multisource, and learning algorithm driven to optimize its performance. Furthermore, D2ITS is trending to become a privacy-aware people-centric more intelligent system. In this paper, we provide a survey on the development of D2ITS, discussing the functionality of its key components and some deployment issues associated with D2ITS Future research directions for the development of D2ITS is also presented.
Online Prediction of Driver Distraction Based on Brain Activity Patterns This paper presents a new computational framework for early detection of driver distractions (map viewing) using brain activity measured by electroencephalographic (EEG) signals. Compared with most studies in the literature, which are mainly focused on the classification of distracted and nondistracted periods, this study proposes a new framework to prospectively predict the start and end of a distraction period, defined by map viewing. The proposed prediction algorithm was tested on a data set of continuous EEG signals recorded from 24 subjects. During the EEG recordings, the subjects were asked to drive from an initial position to a destination using a city map in a simulated driving environment. The overall accuracy values for the prediction of the start and the end of map viewing were 81% and 70%, respectively. The experimental results demonstrated that the proposed algorithm can predict the start and end of map viewing with relatively high accuracy and can be generalized to individual subjects. The outcome of this study has a high potential to improve the design of future intelligent navigation systems. Prediction of the start of map viewing can be used to provide route information based on a driver's needs and consequently avoid map-viewing activities. Prediction of the end of map viewing can be used to provide warnings for potential long map-viewing durations. Further development of the proposed framework and its applications in driver-distraction predictions are also discussed.
An indoor localization solution using Bluetooth RSSI and multiple sensors on a smartphone. In this paper, we propose an indoor positioning system using a Bluetooth receiver, an accelerometer, a magnetic field sensor, and a barometer on a smartphone. The Bluetooth receiver is used to estimate distances from beacons. The accelerometer and magnetic field sensor are used to trace the movement of moving people in the given space. The horizontal location of the person is determined by received signal strength indications (RSSIs) and the traced movement. The barometer is used to measure the vertical position where a person is located. By combining RSSIs, the traced movement, and the vertical position, the proposed system estimates the indoor position of moving people. In experiments, the proposed approach showed excellent performance in localization with an overall error of 4.8%.
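Distance estimation from Bluetooth RSSI is commonly done with the log-distance path-loss model. The sketch below uses that standard model with illustrative calibration constants; the constants and the use of this particular model are assumptions, not the paper's exact estimator:

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Log-distance path-loss model: RSSI = TxPower - 10*n*log10(d), solved for d (metres).
    tx_power_dbm is the expected RSSI at 1 m; both constants are environment-dependent
    and would normally be calibrated per beacon."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

for rssi in (-59, -65, -75):
    print(rssi, "dBm ->", round(rssi_to_distance(rssi), 2), "m")
```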
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
score_0 to score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0
Distributed Full Synchronized System for Global Health Monitoring Based on FLSA In modern medicine, smart wireless connected devices are gaining an increasingly important role in aiding doctors’ job of monitoring patients. More and more complex systems, with a high density of sensors capable of monitoring many biological signals, are arising. Merging the data offers a great opportunity for increasing the reliability of diagnosis. However, a huge problem is constituted by synchronization. Multi-board wireless-connected monitoring systems are a typical example of distributed systems and synchronization has always been a challenging issue. In this paper, we present a distributed full synchronized system for monitoring patients’ health capable of heartbeat rate, oxygen saturation, gait and posture analysis, and muscle activity measurements. The time synchronization is guaranteed thanks to the Fractional Low-power Synchronization Algorithm (FLSA).
Experimental and Numerical Study of Electroporation Induced by Long Monopolar and Short Bipolar Pulses on Realistic 3D Irregularly Shaped Cells In this article, the reversible electroporation induced by rectangular long unipolar and short bipolar voltage pulses on 3D cells is studied. The cell geometry was reconstructed from 3D images of real cells obtained using the confocal microscopy technique. A numerical model based on the Maxwell and the asymptotic Smoluchowski equations has been developed to calculate the induced transmembrane voltage and pore density on the plasma membrane of real cells exposed to the pulsed electric field. Moreover, in the case of the high-frequency pulses, the dielectric dispersion of plasma membranes has been taken into account using the second-order Debye-based relationship. Several numerical simulations were performed and we obtained suitable agreement between the numerical and experimental results.
Design of a surface acoustic wave resonator for sensing platforms Acoustic wave devices are an attractive technology for use in sensors, since acoustic waves present high sensitivities to external parameters in terms of phase velocity and damping. This technology is very interesting in environments such as the Internet of Things, where low power consumption is a central requirement. The main drawback of this technology concerns the application and the reliability of the RF signal powering the sensor, which makes the use of a virtual network analyser or spectrum analyser necessary. Although examples of acoustic wave devices used with external antennas have been discussed in the literature, to the best of our knowledge acoustic devices integrated with an antenna have not yet been reported. This paper discusses the possibility of realizing a wearable, compact, totally passive remote sensor fully integrated with the antenna. The wearable sensor is based on a Surface Acoustic Wave (SAW) resonator designed to operate at 800 MHz and realized on a flexible and biocompatible polymeric substrate made of polyethylene naphthalate (PEN). The SAW resonator consists of a pair of reflecting gratings, defining the acoustic cavity, and an interdigital transducer (IDT) placed at the centre of the cavity. The distributed feedback cavity shows a high Q-factor, Q ≈ 2×10^5, when 200 reflectors are considered.
Clock Synchronization of Wireless Sensor Networks Clock synchronization is a critical component in the operation of wireless sensor networks (WSNs), as it provides a common time frame to different nodes. It supports functions such as fusing voice and video data from different sensor nodes, time-based channel sharing, and coordinated sleep wake-up node scheduling mechanisms. Early studies on clock synchronization for WSNs mainly focused on protocol design. However, the clock synchronization problem is inherently related to parameter estimation, and, recently, studies on clock synchronization began to emerge by adopting a statistical signal processing framework. In this article, a survey on the latest advances in the field of clock synchronization of WSNs is provided by following a signal processing viewpoint. This article illustrates that many of the proposed clock synchronization protocols can be interpreted and their performance assessed using common statistical signal processing methods. It is also shown that advanced signal processing techniques enable the derivation of optimal clock synchronization algorithms under challenging scenarios.
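The pairwise two-way message exchange analyzed in such surveys reduces to a simple estimator for clock offset and propagation delay, assuming a symmetric link. A minimal sketch with invented timestamps:

```python
def two_way_offset_delay(t1, t2, t3, t4):
    """Two-way message exchange (NTP/TPSN-style): node A sends at t1 (A's clock),
    B receives at t2 and replies at t3 (B's clock), A receives at t4 (A's clock).
    Assuming a symmetric link, B's offset relative to A and the one-way delay are:"""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = ((t4 - t1) - (t3 - t2)) / 2.0
    return offset, delay

# Toy timestamps (seconds): B's clock runs ~0.5 s ahead, propagation ~0.01 s.
print(two_way_offset_delay(t1=10.000, t2=10.510, t3=10.530, t4=10.040))
```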
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
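A simplified single-reference version of the metric described above can be sketched as follows. Real BLEU aggregates modified n-gram precisions up to 4-grams over a whole corpus and usually applies smoothing; the two toy sentences and the choice of max_n=2 here are only for illustration:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified single-reference sentence BLEU: geometric mean of modified
    n-gram precisions times a brevity penalty (no smoothing)."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum(min(c, ref[g]) for g, c in cand.items())  # clipped counts
        total = max(sum(cand.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    bp = 1.0 if len(candidate) > len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(log_avg)

ref = "the cat is on the mat".split()
hyp = "the cat sat on the mat".split()
print(round(bleu(hyp, ref, max_n=2), 3))   # 0.707 for this toy pair
```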
On the security of public key protocols Recently the use of public key encryption to provide secure network communication has received considerable attention. Such public key systems are usually effective against passive eavesdroppers, who merely tap the lines and try to decipher the message. It has been pointed out, however, that an improperly designed protocol could be vulnerable to an active saboteur, one who may impersonate another user or alter the message being transmitted. Several models are formulated in which the security of protocols can be discussed precisely. Algorithms and characterizations that can be used to determine protocol security in these models are given.
A Tutorial On Visual Servo Control This article provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines, our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed in detail. Since any visual servo system must be capable of tracking image features in a sequence of images, we also include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.
On Multi-Access Edge Computing: A Survey of the Emerging 5G Network Edge Cloud Architecture and Orchestration. Multi-access edge computing (MEC) is an emerging ecosystem, which aims at converging telecommunication and IT services, providing a cloud computing platform at the edge of the radio access network. MEC offers storage and computational resources at the edge, reducing latency for mobile end users and utilizing more efficiently the mobile backhaul and core networks. This paper introduces a survey on ...
Precomputing Oblivious Transfer Alice and Bob are too untrusting of computer scientists to let their privacy depend on unproven assumptions such as the existence of one-way functions. Firm believers in Schrödinger and Heisenberg, they might accept a quantum OT device, but IBM’s prototype is not yet portable. Instead, as part of their prenuptial agreement, they decide to visit IBM and perform some OT’s in advance, so that any later divorces, coin-flipping or other important interactions can be done more conveniently, without needing expensive third parties. Unfortunately, OT can’t be done in advance in a direct way, because even though Bob might not know what bit Alice will later send (even if she first sends a random bit and later corrects it, for example), he would already know which bit or bits he will receive. We address the problem of precomputing oblivious transfer and show that OT can be precomputed at a cost of Θ(κ) prior transfers (a tight bound). In contrast, we show that variants of OT, such as one-out-of-two OT, can be precomputed using only one prior transfer. Finally, we show that all variants can be reduced to a single precomputed one-out-of-two oblivious transfer.
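The reduction from a precomputed random OT to a one-out-of-two OT on real inputs can be sketched with plain XOR masking (a Beaver-style correction). This is a conceptual, single-process illustration, not a secure or networked implementation; the message contents and pad length are arbitrary:

```python
import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# --- Offline phase: one random OT is precomputed and stored ---
r0, r1 = secrets.token_bytes(16), secrets.token_bytes(16)  # sender keeps both random pads
c = secrets.randbelow(2)                                   # receiver's random choice bit
rc = (r0, r1)[c]                                           # receiver learns only r_c

# --- Online phase: the stored random OT is corrected to the real inputs ---
m0, m1 = b"0123456789abcdef", b"fedcba9876543210"          # sender's real messages
b = 1                                                      # receiver's actual choice bit

d = b ^ c                               # receiver -> sender (reveals nothing, c is random)
y0 = xor(m0, (r0, r1)[d])               # sender -> receiver
y1 = xor(m1, (r0, r1)[1 - d])

m_b = xor((y0, y1)[b], rc)              # receiver unmasks only the chosen message
print(m_b == (m0, m1)[b])               # True; the other message stays masked by an unknown pad
```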
Paraphrasing for automatic evaluation This paper studies the impact of paraphrases on the accuracy of automatic evaluation. Given a reference sentence and a machine-generated sentence, we seek to find a paraphrase of the reference sentence that is closer in wording to the machine output than the original reference. We apply our paraphrasing method in the context of machine translation evaluation. Our experiments show that the use of a paraphrased synthetic reference refines the accuracy of automatic evaluation. We also found a strong connection between the quality of automatic paraphrases as judged by humans and their contribution to automatic evaluation.
High delivery rate position-based routing algorithms for 3D ad hoc networks Position-based routing algorithms use the geographic position of the nodes in a network to make the forwarding decisions. Recent research in this field primarily addresses such routing algorithms in two dimensional (2D) space. However, in real applications, nodes may be distributed in three dimensional (3D) environments. In this paper, we propose several randomized position-based routing algorithms and their combination with restricted directional flooding-based algorithms for routing in 3D environments. The first group of algorithms AB3D are extensions of previous randomized routing algorithms from 2D space to 3D space. The second group ABLAR chooses m neighbors according to a space-partition heuristic and forwards the message to all these nodes. The third group T-ABLAR-T uses progress-based routing until a local minimum is reached. The algorithm then switches to ABLAR for one step after which the algorithm switches back to the progress-based algorithm again. The fourth group AB3D-ABLAR uses an algorithm from the AB3D group until a threshold is passed in terms of number of hops. The algorithm then switches to an ABLAR algorithm. The algorithms are evaluated and compared with current routing algorithms. The simulation results on unit disk graphs (UDG) show a significant improvement in delivery rate (up to 99%) and a large reduction of the traffic.
Achievable Rates of Full-Duplex MIMO Radios in Fast Fading Channels With Imperfect Channel Estimation We study the theoretical performance of two full-duplex multiple-input multiple-output (MIMO) radio systems: a full-duplex bi-directional communication system and a full-duplex relay system. We focus on the effect of a (digitally manageable) residual self-interference due to imperfect channel estimation (with independent and identically distributed (i.i.d.) Gaussian channel estimation error) and transmitter noise. We assume that the instantaneous channel state information (CSI) is not available at the transmitters. To maximize the system ergodic mutual information, which is a nonconvex function of the power allocation vectors at the nodes, a gradient projection algorithm is developed to optimize the power allocation vectors. This algorithm exploits both the spatial and temporal freedom of the source covariance matrices of the MIMO links between transmitters and receivers to achieve higher sum ergodic mutual information. It is observed through simulations that the full-duplex mode is optimal when the nominal self-interference is low, and the half-duplex mode is optimal when the nominal self-interference is high. In addition to an exact closed-form ergodic mutual information expression, we introduce a much simpler asymptotic closed-form ergodic mutual information expression, which in turn simplifies the computation of the power allocation vectors.
Surrogate-assisted hierarchical particle swarm optimization. Meta-heuristic algorithms, which require a large number of fitness evaluations before locating the global optimum, are often prevented from being applied to computationally expensive real-world problems where one fitness evaluation may take from minutes to hours, or even days. Although many surrogate-assisted meta-heuristic optimization algorithms have been proposed, most of them were developed for solving expensive problems up to 30 dimensions. In this paper, we propose a surrogate-assisted hierarchical particle swarm optimizer for high-dimensional problems consisting of a standard particle swarm optimization (PSO) algorithm and a social learning particle swarm optimization algorithm (SL-PSO), where the PSO and SL-PSO work together to explore and exploit the search space, and simultaneously enhance the global and local performance of the surrogate model. Our experimental results on seven benchmark functions of dimensions 30, 50 and 100 demonstrate that the proposed method is competitive compared with the state-of-the-art algorithms under a limited computational budget.
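For context, the particle swarm optimizer underlying such surrogate-assisted methods can be written compactly. The sketch below is plain global-best PSO on a standard benchmark function; it deliberately omits the surrogate model and the hierarchical SL-PSO layer, and all hyperparameter values are illustrative:

```python
import numpy as np

def pso(f, dim=10, n_particles=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Plain global-best PSO (no surrogate, no hierarchy)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

sphere = lambda z: float(np.sum(z ** 2))           # a standard benchmark function
best_x, best_val = pso(sphere, dim=10)
print(round(best_val, 4))
```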
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
score_0 to score_13: 1.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
State-of-the-Art Clustering Schemes in Mobile Ad Hoc Networks: Objectives, Challenges, and Future Directions. Mobile ad hoc networks (MANETs) are self-organized networks without any fixed infrastructure. The topology changes are very frequent in MANETs due to nodes' mobility. The topology maintenance creates an extra overhead, as the mobility information of a single node is shared with all nodes in the network. To address the topology maintenance overhead problem in MANETs, the researchers proposed different cluster-based algorithms to reduce the size of a routing table. The clusters are formed to locally adjust the topology changes within the cluster. If a node wants to communicate with a node outside the cluster, it only communicates with its cluster head (CH). The CH communicates with other CHs to transmit data toward the destination. To efficiently utilize the clustering mechanism in MANETs, stable and balanced clusters are required. To form good quality and optimized clusters, some metrics, such as relative mobility (node speed and direction), node degree, residual energy, communication workload, and neighbor's behavior, are required. In this paper, we present a comprehensive survey of recent CAs in MANETs. We also present the objectives, goals, and contributions of recent research. Similarly, the findings, challenges, and future directions are stated. The validation of each proposed work is analyzed critically in terms of the mobility model, the simulation tool used during simulation, simulation metrics, and the performance metrics used in the validation process.
Multiple QoS Parameters-Based Routing for Civil Aeronautical Ad Hoc Networks. Aeronautical ad hoc network (AANET) can be applied as in-flight communication systems to allow aircraft to communicate with the ground, in complement to other existing communication systems to support Internet of Things. However, the unique features of civil AANETs present a great challenge to provide efficient and reliable data delivery in such environments. In this paper, we propose a multiple q...
Performance Improvement of Cluster-Based Routing Protocol in VANET. Vehicular ad-hoc networks (VANETs) have received considerable attention in recent years due to their unique characteristics, which differ from mobile ad-hoc networks, such as rapid topology change, frequent link failure, and high vehicle mobility. The main drawback of VANETs is network instability, which reduces network efficiency. In this paper, we propose three algorithms: the cluster-based life-time routing (CBLTR) protocol, the intersection dynamic VANET routing (IDVR) protocol, and the control overhead reduction algorithm (CORA). The CBLTR protocol aims to increase the route stability and average throughput in a bidirectional segment scenario. The cluster heads (CHs) are selected based on the maximum lifetime among all vehicles located within each cluster. The IDVR protocol aims to increase route stability and average throughput, and to reduce end-to-end delay in a grid topology. The elected intersection CH receives a set of candidate shortest routes (SCSR) close to the desired destination from the software-defined network. The IDVR protocol selects the optimal route based on its current location, the destination location, and the maximum of the minimum average throughput of the SCSR. Finally, the CORA algorithm aims to reduce the control overhead messages in the clusters by developing a new mechanism to calculate the optimal number of control overhead messages between the cluster members and the CH. We used the SUMO traffic generator and MATLAB to evaluate the performance of our proposed protocols. These protocols significantly outperform many protocols mentioned in the literature in terms of many parameters.
SCOTRES: Secure Routing for IoT and CPS. Wireless ad-hoc networks are becoming popular due to the emergence of the Internet of Things and cyber-physical systems (CPSs). Due to the open wireless medium, secure routing functionality becomes important. However, the current solutions focus on a constrain set of network vulnerabilities and do not provide protection against newer attacks. In this paper, we propose SCOTRES-a trust-based system ...
AQ-Routing: mobility-, stability-aware adaptive routing protocol for data routing in MANET–IoT systems The Internet of Things is an innovative technology which allows the connection of physical things with the digital world through the use of heterogeneous networks and communication technologies. In an IoT system, a major role is played by the wireless sensor network, whose components comprise sensing, data acquisition, heterogeneous connectivity and data processing. Mobile ad-hoc networks (MANETs) are highly self-reconfiguring networks of mobile nodes which communicate through wireless links. In such a network, each node acts as both a router and a host at the same time. The interaction between MANETs and the Internet of Things opens new ways for service provision in smart environments and raises challenging issues in their networking aspects. One of the main issues in MANET–IoT systems is the mobility of the network nodes: the routing protocol must react effectively to topological changes in its algorithm design. We describe the design and implementation of AQ-Routing and analyze its performance using both simulations and measurements based on our implementation. In general, the networking of such a system is very challenging with regard to routing, as it is related to system mobility and limited sensor resources. Building on this observation, this article proposes an adaptive routing protocol (AQ-Routing) based on Reinforcement Learning (RL) techniques, which can detect the level of mobility at different points in time so that each individual node can update its routing metric accordingly. The proposed protocol introduces: (i) a new model, developed via the Q-learning technique, to detect the level of mobility at each node in the network; (ii) a new metric, called Q_metric, which accounts for the static and dynamic routing metrics and which is combined and updated with respect to the changing network topology. The protocol can efficiently handle network mobility by preemptively adapting its behaviour thanks to the mobility detection model. The presented simulation results show that the approach improves the stability of links in both static and mobile scenarios and, hence, increases the packet delivery ratio in the global MANET–IoT system.
A Network Lifetime Extension-Aware Cooperative MAC Protocol for MANETs With Optimized Power Control. In this paper, a cooperative medium access control (CMAC) protocol, termed network lifetime extension-aware CMAC (LEA-CMAC) for mobile ad-hoc networks (MANETs) is proposed. The main feature of the LEA-CMAC protocol is to enhance the network performance through the cooperative transmission to achieve a multi-objective target orientation. The unpredictable nature of wireless communication links results in the degradation of network performance in terms of throughput, end-to-end delay, energy efficiency, and network lifetime of MANETs. Through cooperative transmission, the network performance of MANETs can be improved, provided a beneficial cooperation is satisfied and design parameters are carefully selected at the MAC layer. To achieve a multi-objective target-oriented CMAC protocol, we formulated an optimization problem to extend the network lifetime of MANETs. The optimization solution led to the investigation of symmetric and asymmetric transmit power policies. We then proposed a distributed relay selection process to select the best retransmitting node among the qualified relays, with consideration on a transmit power, a sufficient residual energy after cooperation, and a high cooperative gain. The simulation results show that the LEA-CMAC protocol can achieve a multi-objective target orientation by exploiting an asymmetric transmit power policy to improve the network performance.
Distinctive Image Features from Scale-Invariant Keypoints This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
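As a hedged illustration of the matching pipeline described above, the sketch below uses OpenCV's SIFT implementation with nearest-neighbor matching, Lowe's ratio test, and RANSAC homography estimation as a practical stand-in for the Hough-transform clustering and least-squares pose verification of the original method; the image file names are placeholders.

```python
# Sketch of SIFT feature matching with OpenCV (cv2). RANSAC homography
# estimation is used here as a common practical substitute for the paper's
# Hough-transform clustering and least-squares pose verification.
import cv2
import numpy as np

img1 = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)   # query object (placeholder path)
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)    # cluttered scene (placeholder path)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Nearest-neighbor matching with Lowe's ratio test to keep distinctive matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # geometric verification
    print(f"{int(mask.sum())} geometrically consistent matches out of {len(good)}")
```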
ImageNet Large Scale Visual Recognition Challenge. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
A Comprehensive Survey on Internet of Things (IoT) Toward 5G Wireless Systems Recently, wireless technologies have been growing actively all around the world. In the context of wireless technology, fifth-generation (5G) technology has become one of the most challenging and interesting topics in wireless research. This article provides an overview of the Internet of Things (IoT) in 5G wireless systems. IoT in the 5G system will be a game changer in the future generation. It will open a door for new wireless architectures and smart services. The current LTE (4G) cellular network will not be sufficient or efficient to meet the demands of multiple-device connectivity, high data rates, more bandwidth, low-latency quality of service (QoS), and low interference. To address these challenges, we consider 5G as the most promising technology. We provide a detailed overview of the challenges and vision of various communication industries in 5G IoT systems. The different layers in 5G IoT systems are discussed in detail. This article provides a comprehensive review of emerging and enabling technologies related to the 5G system that enables IoT. We consider the technology drivers for 5G wireless technology, such as 5G new radio (NR), multiple-input multiple-output antennas with beamforming technology, mm-wave communication technology, heterogeneous networks (HetNets), and the role of augmented reality (AR) in IoT, which are discussed in detail. We also provide a review of low-power wide-area networks (LPWANs), security challenges, and their control measures in the 5G IoT scenario. This article introduces the role of AR in the 5G IoT scenario and also discusses the research gaps and future directions. The focus is also on application areas of IoT in 5G systems. We, therefore, outline some of the important research directions in 5G IoT.
A communication robot in a shopping mall This paper reports our development of a communication robot for use in a shopping mall to provide shopping information, offer route guidance, and build rapport. In the development, the major difficulties included sensing human behaviors, conversation in a noisy daily environment, and the need for unexpected, miscellaneous knowledge in the conversation. We chose a network robot system approach, where a single robot's poor sensing capability and knowledge are supplemented by ubiquitous sensors and a human operator. The developed robot system detects a person with floor sensors to initiate interaction, identifies individuals with radio-frequency identification (RFID) tags, gives shopping information while chatting, and provides route guidance with deictic gestures. The robot was partially teleoperated to avoid the difficulty of speech recognition as well as to furnish a new kind of knowledge that only humans can flexibly provide. The information supplied by a human operator was later used to increase the robot's autonomy. For 25 days in a shopping mall, we conducted a field trial and gathered 2642 interactions. A total of 235 participants signed up to use RFID tags and, later, provided questionnaire responses. The questionnaire results are promising in terms of the visitors' perceived acceptability as well as the encouragement of their shopping activities. The results of the teleoperation analysis revealed that the amount of teleoperation gradually decreased, which is also promising.
Comment on "On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes" Comparison of generative and discriminative classifiers is an ever-lasting topic. As an important contribution to this topic, based on their theoretical and empirical comparisons between the naïve Bayes classifier and linear logistic regression, Ng and Jordan (NIPS 841---848, 2001) claimed that there exist two distinct regimes of performance between the generative and discriminative classifiers with regard to the training-set size. In this paper, our empirical and simulation studies, as a complement of their work, however, suggest that the existence of the two distinct regimes may not be so reliable. In addition, for real world datasets, so far there is no theoretically correct, general criterion for choosing between the discriminative and the generative approaches to classification of an observation x into a class y; the choice depends on the relative confidence we have in the correctness of the specification of either p(y|x) or p(x, y) for the data. This can be to some extent a demonstration of why Efron (J Am Stat Assoc 70(352):892---898, 1975) and O'Neill (J Am Stat Assoc 75(369):154---160, 1980) prefer normal-based linear discriminant analysis (LDA) when no model mis-specification occurs but other empirical studies may prefer linear logistic regression instead. Furthermore, we suggest that pairing of either LDA assuming a common diagonal covariance matrix (LDA-驴) or the naïve Bayes classifier and linear logistic regression may not be perfect, and hence it may not be reliable for any claim that was derived from the comparison between LDA-驴 or the naïve Bayes classifier and linear logistic regression to be generalised to all generative and discriminative classifiers.
Adaptive dynamic programming and optimal control of nonlinear nonaffine systems. In this paper, a novel optimal control design scheme is proposed for continuous-time nonaffine nonlinear dynamic systems with unknown dynamics by adaptive dynamic programming (ADP). The proposed methodology iteratively updates the control policy online by using the state and input information without identifying the system dynamics. An ADP algorithm is developed, and can be applied to a general class of nonlinear control design problems. The convergence analysis for the designed control scheme is presented, along with rigorous stability analysis for the closed-loop system. The effectiveness of this new algorithm is illustrated by two simulation examples.
Adaptive Fuzzy Control With Prescribed Performance for Block-Triangular-Structured Nonlinear Systems. In this paper, an adaptive fuzzy control method with prescribed performance is proposed for multi-input and multioutput block-triangular-structured nonlinear systems with immeasurable states. Fuzzy logic systems are adopted to identify the unknown nonlinear system functions. Adaptive fuzzy state observers are designed to solve the problem of unmeasured states, and a new observer-based output-feedb...
Intention-detection strategies for upper limb exosuits: model-based myoelectric vs dynamic-based control The cognitive human-robot interaction between an exosuit and its wearer plays a key role in determining both the biomechanical effects of the device on movements and its perceived effectiveness. There is a lack of evidence, however, on the comparative performance of different control methods implemented on the same device. Here, we compare two different control approaches on the same robotic suit: a model-based myoelectric control (myoprocessor), which estimates the joint torque from the activation of target muscles, and a dynamic-based control that provides support against gravity using an inverse dynamic model. Tested on a cohort of four healthy participants, assistance from the exosuit results in a marked reduction in the effort of muscles working against gravity with both control approaches (peak reduction of 68.6±18.8% for the dynamic arm model and 62.4±25.1% for the myoprocessor), when compared to an unpowered condition. Neither of the two controllers had an effect on the performance of their users in a joint-angle tracking task (peak errors of 15.4° and 16.4° for the dynamic arm model and myoprocessor, respectively, compared to 13.1° in the unpowered condition). However, our results highlight the remarkable adaptability of the myoprocessor to seamlessly adapt to changing external dynamics.
1.2
0.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
Market Mechanism Design for Profitable On-Demand Transport Services. • A new class of on-demand transport services is investigated. • New agent-based models are introduced for passengers and the service provider. • We propose and analyze a market mechanism to jointly schedule, route, and price passengers. • The profit and efficiency of our mechanism are compared. • We demonstrate our mechanism can outperform standard fixed price-rate approaches.
A Two-Layer Model for Taxi Customer Searching Behaviors Using GPS Trajectory Data. This paper proposes a two-layer decision framework to model taxi drivers' customer-search behaviors within urban areas. The first layer models taxi drivers' pickup location choice decisions, and a Huff model is used to describe the attractiveness of pickup locations. Then, a path size logit (PSL) model is used in the second layer to analyze route choice behaviors considering information such as path size, path distance, travel time, and intersection delay. Global Positioning System data are collected from more than 36 000 taxis in Beijing, China, at the interval of 30 s during six months. The Xidan district with a large shopping center is selected to validate the proposed model. Path travel time is estimated based on probe taxi vehicles on the network. The validation results show that the proposed Huff model achieved high accuracy to estimate drivers' pickup location choices. The PSL outperforms traditional multinomial logit in modeling drivers' route choice behaviors. The findings of this paper can help understand taxi drivers' customer searching decisions and provide strategies to improve the system services.
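For readers unfamiliar with the second-layer model, the sketch below computes path-size terms and path size logit (PSL) choice probabilities on a toy two-route network; the utilities, the path-size coefficient, and the network itself are assumptions made only for illustration.

```python
# Illustrative path size logit (PSL) route-choice sketch on a toy network.
import math

def path_size(paths, lengths):
    """paths: list of sets of link ids; lengths: dict link -> length.
    PS_i = sum over links a in path i of (l_a / L_i) * 1 / (number of paths using a)."""
    usage = {}
    for p in paths:
        for a in p:
            usage[a] = usage.get(a, 0) + 1
    ps = []
    for p in paths:
        L = sum(lengths[a] for a in p)
        ps.append(sum((lengths[a] / L) / usage[a] for a in p))
    return ps

def psl_probabilities(utilities, ps, beta_ps=1.0):
    expo = [math.exp(v + beta_ps * math.log(s)) for v, s in zip(utilities, ps)]
    total = sum(expo)
    return [e / total for e in expo]

# Two partly overlapping routes sharing link "a"; utilities stand in for the
# systematic part built from travel time, distance, and intersection delay.
paths = [{"a", "b"}, {"a", "c"}]
lengths = {"a": 2.0, "b": 1.0, "c": 1.5}
utilities = [-3.0, -3.5]
print(psl_probabilities(utilities, path_size(paths, lengths)))
```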
Modeling taxi driver anticipatory behavior. As part of a wider behavioral agent-based model that simulates taxi drivers' dynamic passenger-finding behavior under uncertainty, we present a model of strategic behavior of taxi drivers in anticipation of substantial time varying demand at locations such as airports and major train stations. The model assumes that, considering a particular decision horizon, a taxi driver decides to transfer to such a destination based on a reward function. The dynamic uncertainty of demand is captured by a time dependent pick-up probability, which is a cumulative distribution function of waiting time. The model allows for information learning by which taxi drivers update their beliefs from past experiences. A simulation on a real road network, applied to test the model, indicates that the formulated model dynamically improves passenger-finding strategies at the airport. Taxi drivers learn when to transfer to the airport in anticipation of the time-varying demand at the airport to minimize their waiting time.
Uncovering Distribution Patterns of High Performance Taxis from Big Trace Data. The unbalanced distribution of taxi passengers in space and time affects taxi driver performance. Existing research has studied taxi driver performance by analyzing taxi driver strategies when the taxi is occupied. However, searching for passengers when vacant is costly for drivers, and it limits operational efficiency and income. Few researchers have taken the costs during vacant status into consideration when evaluating taxi driver performance. In this paper, we quantify taxi driver performance using the taxi's average efficiency. We propose the concept of a high-efficiency single taxi trip and then develop a quantification and evaluation model for taxi driver performance based on single trip efficiency. In a case study, we first divide taxi drivers into top drivers and ordinary drivers, according to their performance as calculated from their GPS traces over a week, and analyze the space-time distribution and operating patterns of the top drivers. Then, we compare the space-time distribution of top drivers to ordinary drivers. The results show that top drivers usually operate far away from downtown areas, and the distribution of top driver operations is highly correlated with traffic conditions. We compare the proposed performance-based method with three other approaches to taxi operation evaluation. The results demonstrate the accuracy and feasibility of the proposed method in evaluating taxi driver performance and ranking taxi drivers. This paper could provide empirical insights for improving taxi driver performance.
Hunting or waiting? Discovering passenger-finding strategies from a large-scale real-world taxi dataset In modern cities, more and more vehicles, such as taxis, have been equipped with GPS devices for localization and navigation. Gathering and analyzing these large-scale real-world digital traces have provided us an unprecedented opportunity to understand the city dynamics and reveal the hidden social and economic “realities”. One innovative pervasive application is to provide correct driving strategies to taxi drivers according to time and location. In this paper, we aim to discover both efficient and inefficient passenger-finding strategies from a large-scale taxi GPS dataset, which was collected from 5350 taxis for one year in a large city of China. By representing the passenger-finding strategies in a Time-Location-Strategy feature triplet and constructing a train/test dataset containing both top- and ordinary-performance taxi features, we adopt a powerful feature selection tool, L1-Norm SVM, to select the most salient feature patterns determining the taxi performance. We find that the selected patterns can well interpret the empirical study results derived from raw data analysis and even reveal interesting hidden “facts”. Moreover, the taxi performance predictor built on the selected features can achieve a prediction accuracy of 85.3% on a new test dataset, and it also outperforms the one based on all the features, which implies that the selected features are indeed the right indicators of the passenger-finding strategies.
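The feature-selection step described above can be illustrated with scikit-learn's L1-penalized linear SVM on synthetic data standing in for the Time-Location-Strategy triplets; the data, the regularization constant C, and the selection threshold are assumptions.

```python
# Sketch of L1-norm SVM feature selection, with synthetic data in place of the
# paper's taxi strategy features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=1000, n_features=50, n_informative=8, random_state=0)

# The L1 penalty drives most coefficients to exactly zero, keeping only the
# most salient feature patterns.
svm = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=10000).fit(X, y)
selected = np.flatnonzero(np.abs(svm.coef_.ravel()) > 1e-6)
print(f"{len(selected)} features selected out of {X.shape[1]}: {selected}")
```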
Completely derandomized self-adaptation in evolution strategies. This paper puts forward two useful methods for self-adaptation of the mutation distribution - the concepts of derandomization and cumulation. Principal shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary, normal mutation distributions is equivalent to applying a general, linear problem encoding. The underlying objective of mutative strategy parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigorously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation - utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions, a speed-up factor of several orders of magnitude is usually observed. On moderately mis-scaled functions a speed-up factor of three to ten can be expected.
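A minimal usage sketch of covariance matrix adaptation is given below via the `cma` (pycma) package's ask-tell interface, assuming that package is installed; the badly scaled ellipsoid objective and the initial mean and step size are illustrative choices, not the paper's test setup.

```python
# Minimal CMA-ES usage sketch with the `cma` package (pycma), assumed installed.
import cma

def ellipsoid(x):
    # Badly scaled objective, the regime where covariance adaptation gives
    # large speed-ups over isotropic evolution strategies.
    return sum((1e3 ** (i / (len(x) - 1)) * xi) ** 2 for i, xi in enumerate(x))

es = cma.CMAEvolutionStrategy(8 * [1.0], 0.5)   # initial mean and step size
while not es.stop():
    solutions = es.ask()                         # sample from the adapted normal distribution
    es.tell(solutions, [ellipsoid(x) for x in solutions])
es.result_pretty()
```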
Hiding Traces of Resampling in Digital Images Resampling detection has become a standard tool for forensic analyses of digital images. This paper presents new variants of image transformation operations which are undetectable by resampling detectors based on periodic variations in the residual signal of local linear predictors in the spatial domain. The effectiveness of the proposed method is supported with evidence from experiments on a large image database for various parameter settings. We benchmark detectability as well as the resulting image quality against conventional linear and bicubic interpolation and interpolation with a sinc kernel. These early findings on “counter-forensic” techniques put into question the reliability of known forensic tools against smart counterfeiters in general, and might serve as benchmarks and motivation for the development of much improved forensic techniques.
Fog computing and its role in the internet of things Fog Computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) Low latency and location awareness; b) Wide-spread geographical distribution; c) Mobility; d) Very large number of nodes; e) Predominant role of wireless access; f) Strong presence of streaming and real-time applications; g) Heterogeneity. In this paper we argue that the above characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely, Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs).
Efficient Signature Generation by Smart Cards We present a new public-key signature scheme and a corresponding authentication scheme that are based on discrete logarithms in a subgroup of units in Z_p where p is a sufficiently large prime, e.g., p = 2^512. A key idea is to use for the base of the discrete logarithm an integer a in Z_p such that the order of a is a sufficiently large prime q, e.g., q = 2^140. In this way we improve the ElGamal signature scheme in the speed of the procedures for the generation and the verification of signatures and also in the bit length of signatures. We present an efficient algorithm that preprocesses the exponentiation of a random residue modulo p.
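The scheme's structure can be illustrated with the toy Schnorr-style signature below; the tiny primes are insecure stand-ins for the orders of magnitude quoted above (p on the order of 2^512, q on the order of 2^140), and the hash choice and message encoding are assumptions made for the sketch.

```python
# Toy Schnorr-style signature over a small prime-order subgroup, for
# illustration only: the tiny parameters below are insecure stand-ins.
import hashlib
import secrets

p, q = 607, 101                       # q divides p - 1 = 606
a = pow(3, (p - 1) // q, p)           # element of order q in Z_p^*
assert a != 1 and pow(a, q, p) == 1

def h(message, r):
    digest = hashlib.sha256(f"{message}|{r}".encode()).hexdigest()
    return int(digest, 16) % q

def keygen():
    x = secrets.randbelow(q - 1) + 1  # private key
    y = pow(a, q - x, p)              # public key y = a^(-x) mod p (a has order q)
    return x, y

def sign(message, x):
    k = secrets.randbelow(q - 1) + 1
    r = pow(a, k, p)
    e = h(message, r)
    s = (k + x * e) % q
    return e, s

def verify(message, signature, y):
    e, s = signature
    r_v = (pow(a, s, p) * pow(y, e, p)) % p   # equals a^k for a valid signature
    return h(message, r_v) == e

x, y = keygen()
sig = sign("hello", x)
print(verify("hello", sig, y), verify("tampered", sig, y))
```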
Stabilizing a linear system by switching control with dwell time The use of networks in control systems to connect controllers and sensors/actuators has become common practice in many applications. This new technology has also posed a theoretical control problem of how to use the limited data rate of the network effectively. We consider a system whose sensor and actuator are connected by a finite data rate channel. A design method to stabilize a continuous-time, linear plant using a switching controller is proposed. In particular, to prevent the actuator from fast switching, or chattering, which can not only increase the necessary data rate but also damage the system, we employ a dwell-time switching scheme. It is shown that a systematic partition of the state-space enables us to reduce the complexity of the design problem.
Empirical Modelling of Genetic Algorithms This paper addresses the problem of reliably setting genetic algorithm parameters for consistent labelling problems. Genetic algorithm parameters are notoriously difficult to determine. This paper proposes a robust empirical framework, based on the analysis of factorial experiments. The use of a graeco-latin square permits an initial study of a wide range of parameter settings. This is followed by fully crossed factorial experiments with narrower ranges, which allow detailed analysis by logistic regression. The empirical models derived can be used to determine optimal algorithm parameters and to shed light on interactions between the parameters and their relative importance. Refined models are produced, which are shown to be robust under extrapolation to up to triple the problem size.
Adaptive Consensus Control for a Class of Nonlinear Multiagent Time-Delay Systems Using Neural Networks Because of the complexity of consensus control of nonlinear multiagent systems with state time-delay, most previous works focused only on linear systems with input time-delay. An adaptive neural network (NN) consensus control method for a class of nonlinear multiagent systems with state time-delay is proposed in this paper. The approximation property of radial basis function neural networks (RBFNNs) is used to neutralize the uncertain nonlinear dynamics in agents. An appropriate Lyapunov-Krasovskii functional, which is obtained from the derivative of an appropriate Lyapunov function, is used to compensate for the uncertainties of unknown time delays. It is proved that our proposed approach guarantees convergence on the basis of Lyapunov stability theory. The simulation results of a nonlinear multiagent time-delay system and a multiple collaborative manipulators system show the effectiveness of the proposed consensus control algorithm.
Robust Sparse Linear Discriminant Analysis Linear discriminant analysis (LDA) is a very popular supervised feature extraction method and has been extended to different variants. However, classical LDA has the following problems: 1) The obtained discriminant projection does not have good interpretability for features. 2) LDA is sensitive to noise. 3) LDA is sensitive to the selection of the number of projection directions. In this paper, a novel feature extraction method called robust sparse linear discriminant analysis (RSLDA) is proposed to solve the above problems. Specifically, RSLDA adaptively selects the most discriminative features for discriminant analysis by introducing the l2,1 norm. An orthogonal matrix and a sparse matrix are also simultaneously introduced to guarantee that the extracted features can hold the main energy of the original data and to enhance robustness to noise, and thus RSLDA has the potential to perform better than other discriminant methods. Extensive experiments on six databases demonstrate that the proposed method achieves competitive performance compared with other state-of-the-art feature extraction methods. Moreover, the proposed method is robust to noisy data.
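To show the mechanism by which the l2,1 norm selects features, the sketch below computes the norm and applies its row-shrinkage proximal operator to a toy projection matrix; it illustrates the regularizer only, not the full RSLDA optimization or its orthogonality and sparse-error terms.

```python
# Small sketch of the l2,1 norm and its proximal (row-shrinkage) operator,
# which drives a projection matrix row-sparse so that only the most
# discriminative features are kept.
import numpy as np

def l21_norm(Q):
    """Sum of the l2 norms of the rows: ||Q||_{2,1} = sum_i ||Q_i||_2."""
    return np.linalg.norm(Q, axis=1).sum()

def prox_l21(Q, t):
    """Row-wise shrinkage: rows with small norm are zeroed, i.e., the
    corresponding features are dropped."""
    norms = np.linalg.norm(Q, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return Q * scale

rng = np.random.default_rng(0)
Q = rng.normal(size=(6, 3)) * np.array([[2.0], [2.0], [0.1], [0.1], [0.1], [2.0]])
Q_sparse = prox_l21(Q, t=0.5)
print("||Q||_2,1 =", round(l21_norm(Q), 3))
print("kept feature rows:", np.flatnonzero(np.linalg.norm(Q_sparse, axis=1) > 0))
```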
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education are highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets to compensate for the robot's limitations; this study, however, applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. The robot's behavioral style (social or neutral) hardly differed overall but seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
1.2
0.2
0.2
0.2
0.05
0
0
0
0
0
0
0
0
0
Wireless Sensor Network Design Methodologies: A Survey Wireless sensor networks (WSNs) have grown considerably in recent years and have a significant potential in different applications including health, environment, and military. Despite their powerful capabilities, the successful development of WSN is still a challenging task. In current real-world WSN deployments, several programming approaches have been proposed, which focus on low-level system issues. In order to simplify the design of the WSN and abstract from technical low-level details, high-level approaches have been recognized and several solutions have been proposed. In particular, the model-driven engineering (MDE) approach is becoming a promising solution. In this paper, we present a survey of existing programming methodologies and model-based approaches for the development of sensor networks. We recall and classify existing related WSN development approaches. The main objective of our research is to investigate the feasibility and the application of high-level-based approaches to ease WSN design. We concentrate on a set of criteria to highlight the shortcomings of the relevant approaches. Finally, we present our future directions to cope with the limits of existing solutions.
IoT Elements, Layered Architectures and Security Issues: A Comprehensive Survey. The use of the Internet is growing in this day and age, and another area that builds on it has developed, called the Internet of Things (IoT). It enables machines and objects to communicate, compute and coordinate with each other. It is an enabler for the intelligence affixed to several essential features of the modern world, such as homes, hospitals, buildings, transport and cities. Security and privacy are among the critical issues related to the wide application of IoT, and these issues prevent its wide adoption. In this paper, we present an overview of the different layered architectures of IoT and of security attacks from the perspective of each layer. In addition, a review of mechanisms that provide solutions to these issues is presented, together with their limitations. Furthermore, we suggest a new secure layered architecture of IoT to overcome these issues.
A Multicharger Cooperative Energy Provision Algorithm Based On Density Clustering In The Industrial Internet Of Things Wireless sensor networks (WSNs) are an important core of the Industrial Internet of Things (IIoT). Wireless rechargeable sensor networks (WRSNs) are sensor networks that are charged by mobile chargers (MCs), and can achieve self-sufficiency. Therefore, the development of WRSNs has begun to attract widespread attention in recent years. Most of the existing energy replenishment algorithms for MCs use one or more MCs to serve the whole network in WRSNs. However, a single MC is not suitable for large-scale network environments, and multiple MCs make the network cost too high. Thus, this paper proposes a collaborative charging algorithm based on network density clustering (CCA-NDC) in WRSNs. This algorithm uses the mean-shift algorithm based on density to cluster, and then the mother wireless charger vehicle (MWCV) carries multiple sub wireless charger vehicles (SWCVs) to charge the nodes in each cluster by using a gradient descent optimization algorithm. The experimental results confirm that the proposed algorithm can effectively replenish the energy of the network and make the network more stable.
Efficient Wireless Charging Pad Deployment in Wireless Rechargeable Sensor Networks. The rapid development of wireless power transfer technology brings forth innovative vehicle energy solutions and breakthroughs utilizing wireless sensor networks (WSNs). In most existing schemes, wireless rechargeable sensor networks (WRSNs) are generally equipped with one or more wireless charging vehicles (vehicles) to serve sensor nodes (SNs). These schemes solve the energy issue to some extent; however, due to off-road and speed limitations of vehicles, some SNs still cannot be charged in time, negatively affecting the lifetime of the networks. Our work proposes a new WRSN model equipped with one wireless charging drone (drone) with a constrained flight distance coupled with several wireless charging pads (pads) deployed to charge the drone when it cannot reach the subsequent stop. Our model solves this charging issue effectively and overcomes the energy capacity limitations of the drone. Thus, a wireless charging pad deployment problem is formulated, which aims to apply the minimum number of pads so that at least one feasible routing path can be established for the drone to reach every SN in a given WRSN. Four feasible heuristics, three based on graph theory and one on geometry, are proposed for this problem. In addition, a novel drone scheduling algorithm, the shortest multi-hop path algorithm, is developed for the drone to serve charging requests with the assistance of pads. We examine the proposed schemes through extensive simulations. The results compare and demonstrate the effectiveness of the proposed schemes in terms of network density, region size and maximum flight distance.
The Challenges of IoT Addressing Security, Ethics, Privacy, and Laws • The greatest threats of IoT arise in the domains of security, privacy, ethics, and laws. • The majority of the research on IoT vulnerabilities is focused only on security while neglecting the equally crucial ethical and privacy aspects. • The common user must be made aware of the security, ethical, and privacy threats imposed by modern IoT devices. • Only a handful of nations have implemented IoT-specific laws. • The Statement of Intent Regarding the Security of the Internet of Things is currently the only legal document pertaining to IoT on a truly international level apart from international standards.
Multi-MC Charging Schedule Algorithm With Time Windows in Wireless Rechargeable Sensor Networks The limited lifespan of traditional Wireless Sensor Networks (WSNs) has always restricted their broad application and development. Current studies have shown that wireless power transmission technology can effectively prolong the lifetime of WSNs. In most present studies on charging schedules, the sensor nodes are charged as soon as they consume any energy, which causes higher cost and lower network utility. It is assumed in this paper that the sensor nodes in Wireless Rechargeable Sensor Networks (WRSNs) will be charged only after their energy falls below a certain value. Each node has a charging time window and is charged within its respective time window. In large-scale wireless sensor networks, a single mobile charger (MC) can hardly ensure that all sensor nodes work properly. Therefore, this paper proposes using multiple MCs to replenish energy for the sensor nodes. When the average energy of all the sensor nodes falls below the upper energy threshold, each MC begins to charge the sensor nodes. The genetic algorithm has a great advantage in solving optimization problems; however, it can easily lead to inadequate search, so it is improved with a 2-opt strategy. A multi-MC charging schedule algorithm with time windows based on the improved genetic algorithm is then proposed and simulated. The simulation results show that the algorithm designed in this paper can replenish energy for each sensor node in a timely manner and minimize the total charging cost.
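As a standalone illustration of the 2-opt improvement step mentioned above, the sketch below repeatedly reverses tour segments of a toy charging tour whenever doing so shortens it; the coordinates and the plain Euclidean cost are assumptions, independent of the paper's genetic algorithm and time-window constraints.

```python
# Standalone 2-opt improvement sketch on a toy tour of charging stops.
import math

def tour_length(tour, coords):
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, coords):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]   # reverse one segment
                if tour_length(candidate, coords) < tour_length(tour, coords) - 1e-9:
                    tour, improved = candidate, True
    return tour

coords = {0: (0, 0), 1: (0, 3), 2: (3, 3), 3: (3, 0), 4: (1, 1)}
print(two_opt(list(coords), coords))
```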
Evaluating the On-Demand Mobile Charging in Wireless Sensor Networks Recently, adopting mobile energy chargers to replenish the energy supply of sensor nodes in wireless sensor networks has gained increasing attention from the research community. Different from energy harvesting systems, the utilization of mobile energy chargers is able to provide more reliable energy supply than the dynamic energy harvested from the surrounding environment. While pioneering works on the mobile recharging problem mainly focus on the optimal offline path planning for the mobile chargers, in this work, we aim to lay the theoretical foundation for the on-demand mobile charging problem, where individual sensor nodes request charging from the mobile charger when their energy runs low. Specifically, in this work we analyze the On-Demand Mobile Charging (DMC) problem using a simple but efficient Nearest-Job-Next with Preemption (NJNP) discipline for the mobile charger, and provide analytical results on the system throughput and charging latency from the perspectives of the mobile charger and individual sensor nodes, respectively. To demonstrate how the actual system design can benefit from our analytical results, we present two examples on determining the essential system parameters such as the optimal remaining energy level for individual sensor nodes to send out their recharging requests and the minimal energy capacity required for the mobile charger. Through extensive simulation with real-world system settings, we verify that our analytical results match the simulation results well and the system designs based on our analysis are effective.
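The NJNP discipline can be illustrated with the compact discrete-time simulation below: the charger always heads for the spatially nearest pending request, and because the target is re-evaluated at every step, a newly arrived closer request preempts the current one. Arrival times, positions, and the unit travel speed are illustrative assumptions.

```python
# Compact discrete-time sketch of Nearest-Job-Next with Preemption (NJNP).
import math

requests = [(0, (5.0, 0.0)), (1, (1.0, 1.0)), (4, (6.0, 0.5))]  # (arrival time, position)

pos, t = (0.0, 0.0), 0
pending, served = [], []
while len(served) < len(requests):
    pending += [loc for arr, loc in requests if arr == t]
    if pending:
        # NJN choice, re-evaluated every step, so a closer new arrival preempts.
        target = min(pending, key=lambda loc: math.dist(pos, loc))
        d = math.dist(pos, target)
        if d <= 1.0:                       # reachable this step: serve the request
            pos = target
            served.append(target)
            pending.remove(target)
        else:                              # travel one distance unit toward the target
            pos = (pos[0] + (target[0] - pos[0]) / d,
                   pos[1] + (target[1] - pos[1]) / d)
    t += 1
print("service order:", served)
```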
Recognizing daily activities with RFID-based sensors We explore a dense sensing approach that uses RFID sensor network technology to recognize human activities. In our setting, everyday objects are instrumented with UHF RFID tags called WISPs that are equipped with accelerometers. RFID readers detect when the objects are used by examining this sensor data, and daily activities are then inferred from the traces of object use via a Hidden Markov Model. In a study of 10 participants performing 14 activities in a model apartment, our approach yielded recognition rates with precision and recall both in the 90% range. This compares well to recognition with a more intrusive short-range RFID bracelet that detects objects in the proximity of the user; this approach saw roughly 95% precision and 60% recall in the same study. We conclude that RFID sensor networks are a promising approach for indoor activity monitoring.
Digital games in the classroom? A contextual approach to teachers' adoption intention of digital games in formal education Interest in using digital games for formal education has steadily increased in the past decades. When it comes to actual use, however, the uptake of games in the classroom remains limited. Using a contextual approach, the possible influence of factors on a school (N=60) and teacher (N=409) level is analyzed. Findings indicate that there is no effect of factors on the school level, whereas on a teacher level a model is tested, explaining 68% of the variance in behavioral intention, in which curriculum-relatedness and previous experience function as crucial determinants of the adoption intention. These findings add to previous research on adoption determinants related to digital games in formal education. Furthermore, they provide insight into the relations between different adoption determinants and their association with behavioral intention.
Are we ready for autonomous driving? The KITTI vision benchmark suite Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.
Piecewise linear mapping functions for image registration A new approach to determination of mapping functions for registration of digital images is presented. Given the coordinates of corresponding control points in two images of the same scene, first the images are divided into triangular regions by triangulating the control points. Then a linear mapping function is obtained by registering each pair of corresponding triangular regions in the images. The overall mapping function is then obtained by piecing together the linear mapping functions.
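A small sketch of this piecewise linear mapping idea is given below: the control points are triangulated with scipy's Delaunay routine, one affine map is fitted per triangle, and a query point is warped with the map of the triangle containing it; the control-point coordinates are toy values.

```python
# Sketch of piecewise linear registration: triangulate control points, fit one
# affine map per triangle, and warp a query point with its triangle's map.
import numpy as np
from scipy.spatial import Delaunay

src = np.array([[0, 0], [10, 0], [10, 10], [0, 10], [5, 4]], dtype=float)   # image A
dst = np.array([[1, 1], [11, 0], [12, 11], [0, 12], [6, 5]], dtype=float)   # image B

tri = Delaunay(src)

def affine_for_triangle(simplex):
    # Solve [x y 1] @ M = [x' y'] for the three vertex correspondences.
    A = np.hstack([src[simplex], np.ones((3, 1))])
    return np.linalg.solve(A, dst[simplex])          # 3x2 affine matrix

def warp(point):
    s = tri.find_simplex(point)
    if s < 0:
        raise ValueError("point outside the triangulated region")
    M = affine_for_triangle(tri.simplices[s])
    return np.append(point, 1.0) @ M

print(warp(np.array([2.0, 3.0])))
```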
RECIFE-MILP: An Effective MILP-Based Heuristic for the Real-Time Railway Traffic Management Problem The real-time railway traffic management problem consists of selecting appropriate train routes and schedules for minimizing the propagation of delay in case of traffic perturbation. In this paper, we tackle this problem by introducing RECIFE-MILP, a heuristic algorithm based on a mixed-integer linear programming model. RECIFE-MILP uses a model that extends one we previously proposed by including additional elements characterizing railway reality. In addition, it implements performance boosting methods selected among several ones through an algorithm configuration tool. We present a thorough experimental analysis that shows that the performances of RECIFE-MILP are better than the ones of the currently implemented traffic management strategy. RECIFE-MILP often finds the optimal solution to instances within the short computation time available in real-time applications. Moreover, RECIFE-MILP is robust to its configuration if an appropriate selection of the combination of boosting methods is performed.
Flymap: Interacting With Maps Projected From A Drone Interactive maps have become ubiquitous in our daily lives, helping us reach destinations and discovering our surroundings. Yet, designing map interactions is not straightforward and depends on the device being used. As mobile devices evolve and become independent from users, such as with robots and drones, how will we interact with the maps they provide? We propose FlyMap as a novel user experience for drone-based interactive maps. We designed and developed three interaction techniques for FlyMap's usage scenarios. In a comprehensive indoor study (N = 16), we show the strengths and weaknesses of two techniques on users' cognition, task load, and satisfaction. FlyMap was then pilot tested with the third technique outdoors in real world conditions with four groups of participants (N = 13). We show that FlyMap's interactivity is exciting to users and opens the space for more direct interactions with drones.
Design and Validation of a Cable-Driven Asymmetric Back Exosuit Lumbar spine injuries caused by repetitive lifting rank as the most prevalent workplace injury in the United States. While these injuries are caused by both symmetric and asymmetric lifting, asymmetric is often more damaging. Many back devices do not address asymmetry, so we present a new system called the Asymmetric Back Exosuit (ABX). The ABX addresses this important gap through unique design geometry and active cable-driven actuation. The suit allows the user to move in a wide range of lumbar trajectories while the “X” pattern cable routing allows variable assistance application for these trajectories. We also conducted a biomechanical analysis in OpenSim to map assistive cable force to effective lumbar torque assistance for a given trajectory, allowing for intuitive controller design in the lumbar joint space over the complex kinematic chain for varying lifting techniques. Human subject experiments illustrated that the ABX reduced lumbar erector spinae muscle activation during symmetric and asymmetric lifting by an average of 37.8% and 16.0%, respectively, compared to lifting without the exosuit. This result indicates the potential for our device to reduce lumbar injury risk.
1.24
0.24
0.24
0.24
0.24
0.12
0.01386
0.00025
0
0
0
0
0
0
A Survey of UAS Technologies for Command, Control, and Communication (C3) The integration of unmanned aircraft systems (UAS) into the National Airspace System (NAS) presents many challenges including airworthiness certification. As an alternative to the time consuming process of modifying the Federal Aviation Regulations (FARs), guidance materials may be generated that apply existing airworthiness regulations toward UAS. This paper discusses research to assist in the development of such guidance material. The results of a technology survey of command, control, and communication (C3) technologies for UAS are presented. Technologies supporting both line-of-sight and beyond line-of-sight UAS operations are examined. For each, data link technologies, flight control, and air traffic control (ATC) coordination are considered. Existing protocols and standards for UAS and aircraft communication technologies are discussed. Finally, future work toward developing the guidance material is discussed.
Positioning of UAVs for throughput maximization in software-defined disaster area UAV communication networks. The throughput of a communication system depends on the data traffic load and the available capacity to support that load. In an unmanned aerial vehicle (UAV)-based communication system, the UAV position is one of the major factor affecting the capacity available to the flows (data sessions) being served. This paper proposes a centralized algorithm for positioning UAVs to maximize the throughput o...
Micro aerial vehicle networks: an experimental analysis of challenges and opportunities The need for aerial networks is growing with the recent advance of micro aerial vehicles, which enable a wide range of civilian applications. Our experimental analysis shows that wireless connectivity among MAVs is challenged by the mobility and heterogeneity of the nodes, lightweight antenna design, body blockage, constrained embedded resources, and limited battery power. However, the movement and location of MAVs are known and may be controlled to establish wireless links with the best transmission opportunities in time and space. This special ecosystem undoubtedly requires a rethinking of wireless communications and calls for novel networking approaches. Supported by empirical results, we identify important research questions, and introduce potential solutions and directions for investigation.
The Internet of Things: A survey This paper addresses the Internet of Things. The main enabling factor of this promising paradigm is the integration of several technologies and communication solutions. Identification and tracking technologies, wired and wireless sensor and actuator networks, enhanced communication protocols (shared with the Next Generation Internet), and distributed intelligence for smart objects are just the most relevant. As one can easily imagine, any serious contribution to the advance of the Internet of Things must necessarily be the result of synergetic activities conducted in different fields of knowledge, such as telecommunications, informatics, electronics and social science. In such a complex scenario, this survey is directed to those who want to approach this complex discipline and contribute to its development. Different visions of this Internet of Things paradigm are reported and enabling technologies reviewed. What emerges is that major issues must still be faced by the research community. The most relevant among them are addressed in detail.
Joint Optimization of UAV 3D Placement and Path Loss Factor for Energy Efficient Maximal Coverage Unmanned aerial vehicle (UAV) is a key enabler for communication systems beyond the fifth generation due to its applications in almost every field, including mobile communications and vertical industries. However, there exist many challenges in 3-D UAV placement, such as resource and power allocation, trajectory optimization, and user association. This problem becomes even more complex as UAV chan...
Unmanned Aerial Vehicle-Aided Communications: Joint Transmit Power and Trajectory Optimization. This letter investigates the transmit power and trajectory optimization problem for unmanned aerial vehicle (UAV)-aided networks. Different from majority of the existing studies with fixed communication infrastructure, a dynamic scenario is considered where a flying UAV provides wireless services for multiple ground nodes simultaneously. To fully exploit the controllable channel variations provide...
Mobile Unmanned Aerial Vehicles (UAVs) for Energy-Efficient Internet of Things Communications. In this paper, the efficient deployment and mobility of multiple unmanned aerial vehicles (UAVs), used as aerial base stations to collect data from ground Internet of Things (IoT) devices, are investigated. In particular, to enable reliable uplink communications for the IoT devices with a minimum total transmit power, a novel framework is proposed for jointly optimizing the 3D placement and the mo...
Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev...
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
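Among the applications listed above, the lasso gives the shortest self-contained illustration of ADMM; the sketch below uses the standard x-update, soft-thresholding z-update, and dual update, with problem sizes, the penalty lam, and the ADMM parameter rho chosen arbitrarily for the example.

```python
# Minimal ADMM sketch for the lasso: minimize (1/2)||Ax - b||^2 + lam * ||x||_1.
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 100
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[:5] = rng.normal(size=5)
b = A @ x_true + 0.01 * rng.normal(size=m)

lam, rho = 0.1, 1.0
x = z = u = np.zeros(n)
AtA, Atb = A.T @ A, A.T @ b
L = np.linalg.cholesky(AtA + rho * np.eye(n))      # factor once, reuse every iteration

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

for _ in range(200):
    x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))  # x-update
    z = soft_threshold(x + u, lam / rho)                               # z-update
    u = u + x - z                                                      # dual update

print("nonzero coefficients found at:", np.flatnonzero(np.abs(z) > 1e-3))
```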
Response time in man-computer conversational transactions The literature concerning man-computer transactions abounds in controversy about the limits of "system response time" to a user's command or inquiry at a terminal. Two major semantic issues prohibit resolving this controversy. One issue centers around the question of "Response time to what?" The implication is that different human purposes and actions will have different acceptable or useful response times.
Ripple effects of an embedded social agent: a field study of a social robot in the workplace Prior research has investigated the effect of interactive social agents presented on computer screens or embodied in robots. Much of this research has been pursued in labs and brief field studies. Comparatively little is known about social agents embedded in the workplace, where employees have repeated interactions with the agent, alone and with others. We designed a social robot snack delivery service for a workplace, and evaluated the service over four months allowing each employee to use it for two months. We report on how employees responded to the robot and the service over repeated encounters. Employees attached different social roles to the robot beyond a delivery person as they incorporated the robot's visit into their workplace routines. Beyond one-on-one interaction, the robot created a ripple effect in the workplace, triggering new behaviors among employees, including politeness, protection of the robot, mimicry, social comparison, and even jealousy. We discuss the implications of these ripple effects for designing services incorporating social agents.
An Improved RSA Based User Authentication and Session Key Agreement Protocol Usable in TMIS. Recently, Giri et al. proposed an RSA cryptosystem based remote user authentication scheme for telecare medical information systems and claimed that the protocol is secure against all the relevant security attacks. However, we have scrutinized Giri et al.'s protocol and point out that it is not secure against off-line password guessing and privileged insider attacks, and that it also suffers from an anonymity problem. Moreover, the extension of the password guessing attack leads to more security weaknesses. Therefore, this protocol needs improvement in terms of security before being implemented in real-life applications. To fix the mentioned security pitfalls, this paper proposes an improved scheme over Giri et al.'s scheme, which preserves the user anonymity property. We have then simulated the proposed protocol using the widely-accepted AVISPA tool, which ensures that the protocol is SAFE under the OFMC and CL-AtSe models, meaning that it is secure against active and passive attacks, including replay and man-in-the-middle attacks. An informal cryptanalysis is also presented, which confirms that the proposed protocol provides strong protection against the relevant security attacks. The performance analysis section compares the proposed protocol with other existing protocols in terms of security, and it is observed that the protocol provides more security and achieves additional functionalities such as user anonymity and session key verification.
Finite-Time Adaptive Fuzzy Tracking Control Design for Nonlinear Systems. This paper addresses the finite-time tracking problem of nonlinear pure-feedback systems. Unlike the literature on traditional finite-time stabilization, in this paper the nonlinear system functions, including the bounding functions, are all totally unknown. Fuzzy logic systems are used to model those unknown functions. To present a finite-time control strategy, a criterion of semiglobal practical...
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
1.06899
0.069136
0.06899
0.066667
0.066667
0.034349
0.016045
0
0
0
0
0
0
0
A Decentralized Electricity Trading Framework (DETF) for Connected EVs: a Blockchain and Machine Learning for Profit Margin Optimization Connected electric vehicles (CEVs) can help cities to reduce road congestion and increase road safety. With the technical improvements made to battery systems in terms of capacity and flexibility, CEVs, as mobile power plants, can be important actors in the electricity markets. In particular, they can trade electricity with each other when supply stations are full or temporarily not available....
Stochastic Optimal Operation of Microgrid Based on Chaotic Binary Particle Swarm Optimization Based on fuzzy mathematics theory, this paper proposes a fuzzy multi-objective optimization model with related constraints to minimize the total economic cost and network loss of a microgrid. Uncontrollable microsources are considered as negative load, and stochastic net load scenarios are generated to take the uncertainty of their output power and of the load into account. Cooperating with storage devices of optimal capacity, controllable microsources are treated as variables in the optimization process, with consideration of their start and stop strategy. A chaos optimization algorithm is introduced into binary particle swarm optimization (BPSO) to propose chaotic BPSO (CBPSO). The search capability of BPSO is improved via the chaotic search approach of the chaos optimization algorithm. Tests of four benchmark functions show that the proposed CBPSO has better convergence performance than BPSO. Simulation results validate the correctness of the proposed model and the effectiveness of CBPSO.
A Technical Approach to the Energy Blockchain in Microgrids. The present paper considers some technical issues related to the “energy blockchain” paradigm applied to microgrids. In particular, what appears from the study is that the superposition of energy transactions in a microgrid creates a variation of the power losses in all the branches of the microgrid. Traditional power losses allocation in distribution systems takes into account only generators whi...
Energy-Aware Green Adversary Model for Cyberphysical Security in Industrial System Adversary models have been fundamental to various cryptographic protocols and methods. However, their use in most branches of computer science research is comparatively restricted, primarily in the case of research in cyberphysical security (e.g., vulnerability studies, position confidentiality). In this article, we propose an energy-aware green adversary model for use in smart industrial environments, aimed at achieving confidentiality. Even though both the hardware and the software parts of cyberphysical systems can be improved to decrease their energy consumption, this article focuses on aspects of conserving position and information confidentiality. On the basis of our findings (assumptions, adversary goals, and capabilities) from the literature, we give some testimonials to help practitioners and researchers working in cyberphysical security. The proposed model runs on real-time anticipatory position-based query scheduling in order to minimize the communication and computation cost for each query, thus facilitating the minimization of energy consumption. Moreover, we calculate the transfer/acceptance slots required for each query to avoid deteriorating slots. The experimental results confirm that the proposed approach can reduce energy consumption by up to five times in comparison with existing approaches.
Energy-Efficient and Trustworthy Data Collection Protocol Based on Mobile Fog Computing in Internet of Things The tremendous growth of interconnected things/devices across the whole world has led to the new paradigm of the Internet of Things (IoT). IoT devices use sensor-based embedded systems to interact with each other, providing a wide range of applications and services to upper-level users. Undoubtedly, the data collected by the underlying IoT devices are the basis of upper-layer decisions and the foundation for all applications, which requires energy-efficient protocols. Moreover, if the collected data are erroneous and untrustworthy, data protection and application become an unrealistic goal, which further leads to unnecessary energy cost. However, traditional methods cannot solve this problem efficiently and trustworthily. To achieve this goal, in this paper we design a novel energy-efficient and trustworthy protocol based on mobile fog computing. By establishing a trust model on fog elements to evaluate the sensor nodes, the mobile data collection path with the largest utility value is generated, which can avoid visiting unnecessary sensors and collecting untrustworthy data. Theoretical analysis and experimental results validate that our proposed architecture and method outperform traditional data collection methods in both energy and delay.
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
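As a quick illustration of the idea behind this metric (clipped n-gram precision combined with a brevity penalty), here is a minimal Python sketch; the uniform n-gram weights, the epsilon smoothing, and the tie-breaking toward the closest reference length are simplifying assumptions, not the exact definition of the original method.

```python
from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def clipped_precision(candidate, references, n):
    """Modified n-gram precision: candidate counts are clipped by the
    maximum count of each n-gram over all references."""
    cand_counts = Counter(ngrams(candidate, n))
    max_ref = Counter()
    for ref in references:
        for g, c in Counter(ngrams(ref, n)).items():
            max_ref[g] = max(max_ref[g], c)
    clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
    return clipped / max(sum(cand_counts.values()), 1)

def bleu_sketch(candidate, references, max_n=4, eps=1e-9):
    # Geometric mean of clipped precisions (eps avoids log(0) in this sketch).
    precisions = [clipped_precision(candidate, references, n) or eps
                  for n in range(1, max_n + 1)]
    geo_mean = exp(sum(log(p) for p in precisions) / max_n)
    # Brevity penalty: penalize candidates shorter than the closest reference.
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c > r else exp(1 - r / max(c, 1))
    return bp * geo_mean

if __name__ == "__main__":
    cand = "the cat is on the mat".split()
    refs = ["the cat sat on the mat".split(), "there is a cat on the mat".split()]
    print(round(bleu_sketch(cand, refs), 4))
```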
Massive MIMO for next generation wireless systems Multi-user MIMO offers big advantages over conventional point-to-point MIMO: it works with cheap single-antenna terminals, a rich scattering environment is not required, and resource allocation is simplified because every active terminal utilizes all of the time-frequency bins. However, multi-user MIMO, as originally envisioned, with roughly equal numbers of service antennas and terminals and frequency-division duplex operation, is not a scalable technology. Massive MIMO (also known as large-scale antenna systems, very large MIMO, hyper MIMO, full-dimension MIMO, and ARGOS) makes a clean break with current practice through the use of a large excess of service antennas over active terminals and time-division duplex operation. Extra antennas help by focusing energy into ever smaller regions of space to bring huge improvements in throughput and radiated energy efficiency. Other benefits of massive MIMO include extensive use of inexpensive low-power components, reduced latency, simplification of the MAC layer, and robustness against intentional jamming. The anticipated throughput depends on the propagation environment providing asymptotically orthogonal channels to the terminals, but so far experiments have not disclosed any limitations in this regard. While massive MIMO renders many traditional research problems irrelevant, it uncovers entirely new problems that urgently need attention: the challenge of making many low-cost low-precision components that work effectively together, acquisition and synchronization for newly joined terminals, the exploitation of extra degrees of freedom provided by the excess of service antennas, reducing internal power consumption to achieve total energy efficiency reductions, and finding new deployment scenarios. This article presents an overview of the massive MIMO concept and contemporary research on the topic.
Deep Residual Learning for Image Recognition Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers - 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
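To make the residual-learning formulation above concrete, the following is a minimal NumPy sketch of a single residual block, y = relu(x + F(x)); the two-layer form of F, the layer sizes, and the random weights are illustrative assumptions, not the architecture evaluated in the abstract.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, w1, b1, w2, b2):
    """y = relu(x + F(x)) with F a small two-layer transform.
    The identity shortcut lets the block fall back to y ~ x when the
    residual F(x) is driven toward zero, which is what eases optimization."""
    f = w2 @ relu(w1 @ x + b1) + b2   # residual branch F(x)
    return relu(x + f)                # identity shortcut + residual

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 8
    x = rng.normal(size=d)
    w1, b1 = rng.normal(scale=0.1, size=(d, d)), np.zeros(d)
    w2, b2 = rng.normal(scale=0.1, size=(d, d)), np.zeros(d)
    print(residual_block(x, w1, b1, w2, b2).shape)  # (8,)
```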
Reaching Agreement in the Presence of Faults The problem addressed here concerns a set of isolated processors, some unknown subset of which may be faulty, that communicate only by means of two-party messages. Each nonfaulty processor has a private value of information that must be communicated to each other nonfaulty processor. Nonfaulty processors always communicate honestly, whereas faulty processors may lie. The problem is to devise an algorithm in which processors communicate their own values and relay values received from others that allows each nonfaulty processor to infer a value for each other processor. The value inferred for a nonfaulty processor must be that processor's private value, and the value inferred for a faulty one must be consistent with the corresponding value inferred by each other nonfaulty processor.It is shown that the problem is solvable for, and only for, n ≥ 3m + 1, where m is the number of faulty processors and n is the total number. It is also shown that if faulty processors can refuse to pass on information but cannot falsely relay information, the problem is solvable for arbitrary n ≥ m ≥ 0. This weaker assumption can be approximated in practice using cryptographic methods.
Reservoir computing approaches to recurrent neural network training Echo State Networks and Liquid State Machines introduced a new paradigm in artificial recurrent neural network (RNN) training, where an RNN (the reservoir) is generated randomly and only a readout is trained. The paradigm, becoming known as reservoir computing, greatly facilitated the practical application of RNNs and outperformed classical fully trained RNNs in many tasks. It has lately become a vivid research field with numerous extensions of the basic idea, including reservoir adaptation, thus broadening the initial paradigm to using different methods for training the reservoir and the readout. This review systematically surveys both current ways of generating/adapting the reservoirs and training different types of readouts. It offers a natural conceptual classification of the techniques, which transcends boundaries of the current “brand-names” of reservoir methods, and thus aims to help in unifying the field and providing the reader with a detailed “map” of it.
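A minimal NumPy sketch of the recipe described above may help: a fixed random reservoir is driven by the input sequence and only a linear readout is trained by ridge regression. The reservoir size, spectral radius, and ridge coefficient below are arbitrary illustrative choices.

```python
import numpy as np

def run_reservoir(u, n_res=100, spectral_radius=0.9, seed=0):
    """Drive a fixed random reservoir with a 1-D input sequence u and
    return the matrix of reservoir states (one column per time step)."""
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-0.5, 0.5, size=n_res)
    w = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
    w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))  # scale toward the echo-state regime
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(w_in * u_t + w @ x)
        states.append(x.copy())
    return np.array(states).T  # shape (n_res, T)

def train_readout(states, targets, ridge=1e-6):
    """Ridge-regression readout: W_out = Y X^T (X X^T + beta I)^{-1}."""
    X, Y = states, targets.reshape(1, -1)
    return Y @ X.T @ np.linalg.inv(X @ X.T + ridge * np.eye(X.shape[0]))

if __name__ == "__main__":
    t = np.arange(500)
    u = np.sin(0.2 * t)            # input signal
    y = np.sin(0.2 * (t + 1))      # one-step-ahead prediction target
    X = run_reservoir(u)
    w_out = train_readout(X[:, :-1], y[:-1])
    pred = w_out @ X[:, :-1]
    print(float(np.mean((pred - y[:-1]) ** 2)))  # small training error expected
```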
Implementing Vehicle Routing Algorithms
Finite-approximation-error-based discrete-time iterative adaptive dynamic programming. In this paper, a new iterative adaptive dynamic programming (ADP) algorithm is developed to solve optimal control problems for infinite horizon discrete-time nonlinear systems with finite approximation errors. First, a new generalized value iteration algorithm of ADP is developed to make the iterative performance index function converge to the solution of the Hamilton-Jacobi-Bellman equation. The ...
An evolutionary programming approach for securing medical images using watermarking scheme in invariant discrete wavelet transformation. •The proposed watermarking scheme utilized improved discrete wavelet transformation (IDWT) to retrieve the invariant wavelet domain.•The entropy mechanism is used to identify the suitable region for insertion of watermark. This will improve the imperceptibility and robustness of the watermarking procedure.•The scaling factors such as PSNR and NC are considered for evaluation of the proposed method and the Particle Swarm Optimization is employed to optimize the scaling factors.
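For context on the two fidelity measures named in the highlights above, here is a small NumPy sketch of PSNR (imperceptibility of the watermarked image) and normalized correlation NC (similarity of the extracted watermark); the 8-bit peak value and the toy arrays are assumptions made for illustration.

```python
import numpy as np

def psnr(original, watermarked, peak=255.0):
    """Peak signal-to-noise ratio in dB between cover and watermarked images."""
    mse = np.mean((original.astype(float) - watermarked.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def normalized_correlation(w_original, w_extracted):
    """NC between the embedded and the extracted watermark patterns."""
    a = w_original.astype(float).ravel()
    b = w_extracted.astype(float).ravel()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cover = rng.integers(0, 256, size=(64, 64))
    marked = np.clip(cover + rng.integers(-2, 3, size=cover.shape), 0, 255)
    wm = rng.integers(0, 2, size=(16, 16))
    wm_noisy = np.where(rng.random(wm.shape) < 0.05, 1 - wm, wm)  # 5% flipped bits
    print(round(psnr(cover, marked), 2), round(normalized_correlation(wm, wm_noisy), 3))
```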
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores (score_0–score_13): 1.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0, 0
Congestion-Aware Multi-Drone Delivery Routing Framework Drones have been attracting the attention of diverse industries thanks to their superior maneuverability. Logistics companies especially keep trying to utilize drones for fast delivery following the growing market size of e-commerce. Accordingly, methods for safely operating multi-drone have been researched, and many researchers have proposed various optimal or near-optimal routing methods. However, such methods have some problems that cause routing failures or huge routing computation time in a drone-dense space due to many collisions. In this paper, we propose a centralized framework that deals with enormous collisions and obtains collision-free paths rapidly. We first build a drone energy consumption model with a data-driven method using flight experiment data of a commercial drone to estimate the drone battery state-of-charge (SoC). Then, we develop a novel routing method that generates collision-free paths by considering both the congestion of the space and the SoC of each drone. The proposed method is inspired by the VLSI circuit routing method that connects all signal nets among thousands of logic components. Through numerous delivery routing simulations, we confirm that the proposed method achieves a maximum of 6 times higher routing success rate with a 10x faster runtime compared with the state-of-the-art optimal method. In addition, we validate that the proposed method is applicable to delivery routing problems with various drone battery capacities.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidence intended for Bob, and non-repudiation of receipt evidence destined for Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with a trusted third party (TTP). For the latter ones we discuss the evolution of the TTP's involvement and, among others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidence.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb for designing the GA operators and selecting the GA parameters; instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probabilistically. Because there are no crossover rate and mutation rate to be selected, the proposed improved GA can be applied to a problem more easily than conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional one and other methods.
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal to develop interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which rise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in specific, in this paper, we enfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidate it based on our comprehensive empirical experience with research in kindergarten settings, both laboratory and real-world contexts, and (iii) provide a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
Scores (score_0–score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Pareto-optimal resilient controller placement in SDN-based core networks With the introduction of Software Defined Networking (SDN), the concept of an external and optionally centralized network control plane, i.e. controller, is drawing the attention of researchers and industry. A particularly important task in the SDN context is the placement of such external resources in the network. In this paper, we discuss important aspects of the controller placement problem with a focus on SDN-based core networks, including different types of resilience and failure tolerance. When several performance and resilience metrics are considered, there is usually no single best controller placement solution, but a trade-off between these metrics. We introduce our framework for resilient Pareto-based Optimal COntroller-placement (POCO) that provides the operator of a network with all Pareto-optimal placements. The ideas and mechanisms are illustrated using the Internet2 OS3E topology and further evaluated on more than 140 topologies of the Topology Zoo. In particular, our findings reveal that for most of the topologies more than 20% of all nodes need to be controllers to assure a continuous connection of all nodes to one of the controllers in any arbitrary double link or node failure scenario.
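Because the framework above returns all Pareto-optimal placements rather than a single best one, a small sketch of the underlying non-dominated filtering step is shown below; the metric names and the convention that both metrics are minimized are assumptions made for illustration.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every metric and strictly better in one.
    All metrics are assumed to be minimized (e.g., latency, imbalance)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(placements):
    """Return the placements whose metric vectors are not dominated by any other."""
    front = []
    for name, metrics in placements:
        if not any(dominates(other, metrics) for _, other in placements if other != metrics):
            front.append((name, metrics))
    return front

if __name__ == "__main__":
    # (placement label, (max node-to-controller latency, controller load imbalance))
    candidates = [
        ("P1", (10.0, 3.0)),
        ("P2", (12.0, 1.0)),
        ("P3", (11.0, 4.0)),   # dominated by P1
        ("P4", (10.0, 3.0)),   # same metrics as P1, also kept on the front
    ]
    print(pareto_front(candidates))
```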
Minimum interference routing of bandwidth guaranteed tunnels with MPLS traffic engineering applications This paper presents new algorithms for dynamic routing of bandwidth guaranteed tunnels, where tunnel routing requests arrive one by one and there is no a priori knowledge regarding future requests. This problem is motivated by the service provider needs for fast deployment of bandwidth guaranteed services. Offline routing algorithms cannot be used since they require a priori knowledge of all tunnel requests that are to be routed. Instead, on-line algorithms that handle requests arriving one by one and that satisfy as many potential future demands as possible are needed. The newly developed algorithms are on-line algorithms and are based on the idea that a newly routed tunnel must follow a route that does not “interfere too much” with a route that may be critical to satisfy a future demand. We show that this problem is NP-hard. We then develop path selection heuristics which are based on the idea of deferred loading of certain “critical” links. These critical links are identified by the algorithm as links that, if heavily loaded, would make it impossible to satisfy future demands between certain ingress-egress pairs. Like min-hop routing, the presented algorithm uses link-state information and some auxiliary capacity information for path selection. Unlike previous algorithms, the proposed algorithm exploits any available knowledge of the network ingress-egress points of potential future demands, even though the demands themselves are unknown. If all nodes are ingress-egress nodes, the algorithm can still be used, particularly to reduce the rejection rate of requests between a specified subset of important ingress-egress pairs. The algorithm performs well in comparison to previously proposed algorithms on several metrics like the number of rejected demands and successful rerouting of demands upon link failure.
The set cover with pairs problem We consider a generalization of the set cover problem, in which elements are covered by pairs of objects, and we are required to find a minimum cost subset of objects that induces a collection of pairs covering all elements. Formally, let U be a ground set of elements and let ${\cal S}$ be a set of objects, where each object i has a non-negative cost $w_i$. For every $\{ i, j \} \subseteq {\cal S}$, let ${\cal C}(i,j)$ be the collection of elements in U covered by the pair { i, j }. The set cover with pairs problem asks to find a subset $A \subseteq {\cal S}$ such that $\bigcup_{ \{ i, j \} \subseteq A } {\cal C}(i,j) = U$ and such that $\sum_{i \in A} w_i$ is minimized. In addition to studying this general problem, we are also concerned with developing polynomial time approximation algorithms for interesting special cases. The problems we consider in this framework arise in the context of domination in metric spaces and separation of point sets.
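To make the problem statement above concrete, here is a tiny exhaustive sketch that examines every subset of objects and keeps the cheapest one whose induced pairs cover all elements; it is only practical for very small instances and is not one of the approximation algorithms developed in the abstract.

```python
from itertools import combinations

def set_cover_with_pairs(universe, costs, coverage):
    """Brute-force exact solver for tiny instances.
    universe: set of elements; costs: {object: weight};
    coverage: {(i, j): set of elements covered by the pair {i, j}}."""
    objects = sorted(costs)
    best_cost, best_set = float("inf"), None
    for r in range(1, len(objects) + 1):
        for subset in combinations(objects, r):
            covered = set()
            for i, j in combinations(subset, 2):
                covered |= coverage.get((i, j), set()) | coverage.get((j, i), set())
            cost = sum(costs[o] for o in subset)
            if covered >= universe and cost < best_cost:
                best_cost, best_set = cost, set(subset)
    return best_cost, best_set

if __name__ == "__main__":
    universe = {1, 2, 3}
    costs = {"a": 1.0, "b": 2.0, "c": 2.5}
    coverage = {("a", "b"): {1, 2}, ("a", "c"): {3}, ("b", "c"): {2, 3}}
    print(set_cover_with_pairs(universe, costs, coverage))  # expects all three objects
```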
Renaissance: A Self-Stabilizing Distributed SDN Control Plane By introducing programmability, automated verification, and innovative debugging tools, Software-Defined Networks (SDNs) are poised to meet the increasingly stringent dependability requirements of today's communication networks. However, the design of fault-tolerant SDNs remains an open challenge. This paper considers the design of dependable SDNs through the lenses of self-stabilization - a very strong notion of fault-tolerance. In particular, we develop algorithms for an in-band and distributed control plane for SDNs, called Renaissance, which tolerates a wide range of (concurrent) controller, link, and communication failures. Our self-stabilizing algorithms ensure that after the occurrence of an arbitrary combination of failures, (i) every non-faulty SDN controller can eventually reach any switch in the network within a bounded communication delay (in the presence of a bounded number of concurrent failures) and (ii) every switch is managed by at least one non-faulty controller. We evaluate Renaissance through a rigorous worst-case analysis as well as a prototype implementation (based on OVS and Floodlight), and we report on our experiments using Mininet.
Enumerating Connected Induced Subgraphs: Improved Delay And Experimental Comparison We consider the problem of enumerating all connected induced subgraphs of order k in an undirected graph G = (V, E). Our main results are two enumeration algorithms with a delay of O(k²Δ), where Δ is the maximum degree in the input graph. This improves upon a previous delay bound (Elbassioni, 2015) for this problem. Moreover, we show that these two algorithms can be adapted to give algorithms for the problem of enumerating all connected induced subgraphs of order at most k with a delay of O(k + Δ). Finally, we perform an experimental comparison of several enumeration algorithms for k ≤ 10 and k ≥ |V| − 3. (C) 2020 Elsevier B.V. All rights reserved.
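For orientation, the following naive baseline enumerates connected induced subgraphs of order k by testing every k-subset of vertices for connectivity; it only illustrates the objects being enumerated and has none of the delay guarantees discussed in the abstract.

```python
from itertools import combinations
from collections import deque

def is_connected(vertices, adj):
    """BFS restricted to the induced subgraph on `vertices`."""
    vertices = set(vertices)
    start = next(iter(vertices))
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w in vertices and w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == vertices

def connected_induced_subgraphs(adj, k):
    """Yield every k-subset of vertices whose induced subgraph is connected."""
    for subset in combinations(sorted(adj), k):
        if is_connected(subset, adj):
            yield subset

if __name__ == "__main__":
    # A 4-cycle 0-1-2-3-0 plus the chord 0-2.
    adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
    print(list(connected_induced_subgraphs(adj, 3)))
```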
A sub-constant error-probability low-degree test, and a sub-constant error-probability PCP characterization of NP
Completely derandomized self-adaptation in evolution strategies. This paper puts forward two useful methods for self-adaptation of the mutation distribution - the concepts of derandomization and cumulation. Principal shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary, normal mutation distributions is equivalent to applying a general, linear problem encoding. The underlying objective of mutative strategy parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigorously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation - utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions usually a speed up factor of several orders of magnitude is observed. On moderately mis-scaled functions a speed up factor of three to ten can be expected.
A survey of socially interactive robots This paper reviews “socially interactive robots”: robots for which social human–robot interaction is important. We begin by discussing the context for socially interactive robots, emphasizing the relationship to other research fields and the different forms of “social robots”. We then present a taxonomy of design methods and system components used to build socially interactive robots. Finally, we describe the impact of these robots on humans and discuss open issues. An expanded version of this paper, which contains a survey and taxonomy of current applications, is available as a technical report [T. Fong, I. Nourbakhsh, K. Dautenhahn, A survey of socially interactive robots: concepts, design and applications, Technical Report No. CMU-RI-TR-02-29, Robotics Institute, Carnegie Mellon University, 2002].
Energy Efficiency Resource Allocation For D2D Communication Network Based On Relay Selection In order to solve the problem of spectrum resource shortage and energy consumption, we put forward a new model that combines D2D communication with energy harvesting technology: an energy harvesting-aided D2D communication network under cognitive radio (EHA-CRD), where the D2D users harvest energy from the base station and the D2D source communicates with the D2D destination via D2D relays. Our goal is to investigate the maximization of the energy efficiency (EE) of the network by joint time allocation and relay selection, while taking into account the constraints on the signal-to-noise ratio of the D2D links and the rates of the cellular users. During this process, the energy collection time and communication time are randomly allocated. The EE maximization problem can be divided into two sub-problems: (1) a relay selection problem; (2) a time optimization problem. For the first sub-problem, we propose a weighted sum maximum algorithm to select the best relay. The second sub-problem is non-convex in the time variables. Thus, by using fractional programming theory, we transform it into a standard convex optimization problem, and we propose an iterative optimization algorithm to solve it and obtain the optimal solution. The simulation results show that the proposed relay selection and time optimization algorithms achieve significant improvements compared with existing algorithms.
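The fractional-programming step mentioned above is commonly handled with a Dinkelbach-style iteration, which replaces the ratio objective with the parametrized problem max R(t) - λP(t); the toy rate and power models and the grid-search inner solver below are illustrative assumptions, not the algorithm of the abstract.

```python
import numpy as np

def dinkelbach(rate, power, t_grid, tol=1e-8, max_iter=50):
    """Maximize rate(t)/power(t) over t_grid by iterating
    lambda <- rate(t*)/power(t*), where t* = argmax rate(t) - lambda*power(t)."""
    lam = 0.0
    for _ in range(max_iter):
        values = rate(t_grid) - lam * power(t_grid)
        t_star = t_grid[np.argmax(values)]
        f_star = rate(t_star) - lam * power(t_star)
        lam = rate(t_star) / power(t_star)
        if abs(f_star) < tol:          # optimality condition F(lambda) ~ 0
            break
    return t_star, lam                 # time split and achieved energy efficiency

if __name__ == "__main__":
    # Toy model: throughput t*log2(1 + g/t) vs. total power p0 + p1*t, t in (0, 1].
    g, p0, p1 = 4.0, 0.5, 1.0
    rate = lambda t: t * np.log2(1.0 + g / t)
    power = lambda t: p0 + p1 * t
    t_grid = np.linspace(1e-3, 1.0, 10000)
    t_opt, ee = dinkelbach(rate, power, t_grid)
    print(round(float(t_opt), 3), round(float(ee), 3))
```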
A communication robot in a shopping mall This paper reports our development of a communication robot for use in a shopping mall to provide shopping information, offer route guidance, and build rapport. In the development, the major difficulties included sensing human behaviors, conversation in a noisy daily environment, and the needs of unexpected miscellaneous knowledge in the conversation. We chose a network robot system approach, where a single robot's poor sensing capability and knowledge are supplemented by ubiquitous sensors and a human operator. The developed robot system detects a person with floor sensors to initiate interaction, identifies individuals with radio-frequency identification (RFID) tags, gives shopping information while chatting, and provides route guidance with deictic gestures. The robot was partially teleoperated to avoid the difficulty of speech recognition as well as to furnish a new kind of knowledge that only humans can flexibly provide. The information supplied by a human operator was later used to increase the robot's autonomy. For 25 days in a shopping mall, we conducted a field trial and gathered 2642 interactions. A total of 235 participants signed up to use RFID tags and, later, provided questionnaire responses. The questionnaire results are promising in terms of the visitors' perceived acceptability as well as the encouragement of their shopping activities. The results of the teleoperation analysis revealed that the amount of teleoperation gradually decreased, which is also promising.
Minimum acceleration criterion with constraints implies bang-bang control as an underlying principle for optimal trajectories of arm reaching movements. Rapid arm-reaching movements serve as an excellent test bed for any theory about trajectory formation. How are these movements planned? A minimum acceleration criterion has been examined in the past, and the solution obtained, based on the Euler-Poisson equation, failed to predict that the hand would begin and end the movement at rest (i.e., with zero acceleration). Therefore, this criterion was rejected in favor of the minimum jerk, which was proved to be successful in describing many features of human movements. This letter follows an alternative approach and solves the minimum acceleration problem with constraints using Pontryagin's minimum principle. We use the minimum principle to obtain minimum acceleration trajectories and use the jerk as a control signal. In order to find a solution that does not include nonphysiological impulse functions, constraints on the maximum and minimum jerk values are assumed. The analytical solution provides a three-phase piecewise constant jerk signal (bang-bang control) where the magnitude of the jerk and the two switching times depend on the magnitude of the maximum and minimum available jerk values. This result fits the observed trajectories of reaching movements and takes into account both the extrinsic coordinates and the muscle limitations in a single framework. The minimum acceleration with constraints principle is discussed as a unifying approach for many observations about the neural control of movements.
An Automatic Screening Approach for Obstructive Sleep Apnea Diagnosis Based on Single-Lead Electrocardiogram Traditional approaches for obstructive sleep apnea (OSA) diagnosis tend to use multiple channels of physiological signals to detect apnea events by dividing the signals into equal-length segments, which may lead to incorrect apnea event detection and weaken the performance of OSA diagnosis. This paper proposes an automatic-segmentation-based screening approach with a single channel of electrocardiogram (ECG) signal for OSA subject diagnosis, and the main work of the proposed approach lies in three aspects: (i) an automatic signal segmentation algorithm is adopted for signal segmentation instead of the equal-length segmentation rule; (ii) a local median filter is improved for reduction of the unexpected RR intervals before signal segmentation; (iii) the designed OSA severity index and additional admission information of OSA suspects are plugged into a support vector machine (SVM) for OSA subject diagnosis. A real clinical example from the PhysioNet database is provided to validate the proposed approach, and an average accuracy of 97.41% for subject diagnosis is obtained, which demonstrates the effectiveness for OSA diagnosis.
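As a small illustration of the RR-interval cleaning step mentioned in point (ii) above, here is a sketch of a local median filter that replaces implausible intervals with the local median; the window length and the 20% deviation threshold are assumptions, not the tuned values of the paper.

```python
import numpy as np

def clean_rr_intervals(rr, window=5, rel_threshold=0.2):
    """Replace RR intervals deviating from the local median by more than
    rel_threshold (as a fraction of that median) with the local median."""
    rr = np.asarray(rr, dtype=float)
    cleaned = rr.copy()
    half = window // 2
    for i in range(len(rr)):
        lo, hi = max(0, i - half), min(len(rr), i + half + 1)
        local_med = np.median(rr[lo:hi])
        if abs(rr[i] - local_med) > rel_threshold * local_med:
            cleaned[i] = local_med
    return cleaned

if __name__ == "__main__":
    # Simulated RR series (seconds) with a missed beat and an ectopic beat.
    rr = [0.82, 0.80, 0.81, 1.95, 0.79, 0.80, 0.40, 0.83, 0.81]
    print(np.round(clean_rr_intervals(rr), 2))
```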
A robust medical image watermarking against salt and pepper noise for brain MRI images. The ever-growing number of medical digital images and the need to share them among specialists and hospitals for better and more accurate diagnosis require that patients' privacy be protected. During the transmission of medical images between hospitals or specialists through the network, the main priority is to protect a patient's documents against any act of tampering by unauthorised individuals. Because of this, there is a need for a medical image authentication scheme to enable proper diagnosis of patients. In addition, medical images are also susceptible to salt and pepper impulse noise through transmission in communication channels. This noise may also be intentionally used by invaders to corrupt the embedded watermarks inside the medical images. A common drawback of existing watermarking methods is their weakness against salt and pepper noise. The research carried out in this work addresses the issue of designing a new watermarking method that can withstand a high density of salt and pepper noise for brain MRI images. For this purpose, a combination of a spatial domain watermarking method, channel coding and noise filtering schemes is used. The region of non-interest (RONI) of MRI images from five different databases is used as the embedding area, and the electronic patient record (EPR) is considered as the embedded data. The quality of the watermarked image is evaluated using Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM), and the accuracy of the extracted watermark is assessed in terms of Bit Error Rate (BER).
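To illustrate why a channel-coding layer helps against impulse-noise-style corruption of the embedded bits, here is a toy sketch that protects the watermark with a repetition code, randomly flips a fraction of the stored bits, and compares bit error rates with and without majority decoding; the corruption model and the code rate are simplifying assumptions rather than the scheme of the abstract.

```python
import numpy as np

def ber(a, b):
    """Bit error rate between two equal-length bit arrays."""
    return float(np.mean(a != b))

def repetition_encode(bits, r):
    return np.repeat(bits, r)

def repetition_decode(coded, r):
    # Majority vote over each group of r copies.
    return (coded.reshape(-1, r).sum(axis=1) > r // 2).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    watermark = rng.integers(0, 2, size=1000)
    r, flip_prob = 5, 0.1                      # code rate and corruption level (assumed)

    coded = repetition_encode(watermark, r)
    corrupted = np.where(rng.random(coded.size) < flip_prob, 1 - coded, coded)

    raw_ber = ber(watermark, corrupted[::r])          # one copy per bit, no decoding
    dec_ber = ber(watermark, repetition_decode(corrupted, r))
    print(f"BER without coding: {raw_ber:.3f}, with majority decoding: {dec_ber:.4f}")
```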
Convert Harm Into Benefit: A Coordination-Learning Based Dynamic Spectrum Anti-Jamming Approach This paper mainly investigates the multi-user anti-jamming spectrum access problem. Using the idea of “converting harm into benefit,” the malicious jamming signals projected by the enemy are utilized by the users as the coordination signals to guide spectrum coordination. An “internal coordination-external confrontation” multi-user anti-jamming access game model is constructed, and the existence of Nash equilibrium (NE) as well as correlated equilibrium (CE) is demonstrated. A coordination-learning based anti-jamming spectrum access algorithm (CLASA) is designed to achieve the CE of the game. Simulation results show the convergence, and effectiveness of the proposed CLASA algorithm, and indicate that our approach can help users confront the malicious jammer, and coordinate internal spectrum access simultaneously without information exchange. Last but not least, the fairness of the proposed approach under different jamming attack patterns is analyzed, which illustrates that this approach provides fair anti-jamming spectrum access opportunities under complicated jamming pattern.
Scores (score_0–score_13): 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0
Mobility based network lifetime in wireless sensor networks: A review. Increasingly emerging technologies in micro-electromechanical systems and wireless communications allow mobile wireless sensor networks (MWSNs) to become an increasingly powerful means in many applications such as habitat and environmental monitoring, traffic observation, battlefield surveillance, smart homes and smart cities. Nevertheless, due to sensor battery constraints, energy-efficient operation of an MWSN is of paramount importance in those applications, and a plethora of approaches have been proposed to prolong network longevity as much as possible. Therefore, this paper provides a comprehensive review of the developed methods that exploit the mobility of sensor nodes and/or sink(s) to effectively maximize the lifetime of an MWSN. The survey systematically classifies the algorithms into categories where the MWSN is equipped with mobile sensor nodes, one mobile sink or multiple mobile sinks. How to drive the mobile sink(s) for energy efficiency in the network is also fully reviewed and reported.
Mobility in wireless sensor networks - Survey and proposal. Targeting an increasing number of potential application domains, wireless sensor networks (WSN) have been the subject of intense research, in an attempt to optimize their performance while guaranteeing reliability in highly demanding scenarios. However, hardware constraints have limited their application, and real deployments have demonstrated that WSNs have difficulties in coping with complex communication tasks – such as mobility – in addition to application-related tasks. Mobility support in WSNs is crucial for a very high percentage of application scenarios and, most notably, for the Internet of Things. It is, thus, important to know the existing solutions for mobility in WSNs, identifying their main characteristics and limitations. With this in mind, we firstly present a survey of models for mobility support in WSNs. We then present the Network of Proxies (NoP) assisted mobility proposal, which relieves resource-constrained WSN nodes from the heavy procedures inherent to mobility management. The presented proposal was implemented and evaluated in a real platform, demonstrating not only its advantages over conventional solutions, but also its very good performance in the simultaneous handling of several mobile nodes, leading to high handoff success rate and low handoff time.
Tag-based cooperative data gathering and energy recharging in wide area RFID sensor networks The Wireless Identification and Sensing Platform (WISP) conjugates the identification potential of the RFID technology and the sensing and computing capability of the wireless sensors. Practical issues, such as the need of periodically recharging WISPs, challenge the effective deployment of large-scale RFID sensor networks (RSNs) consisting of RFID readers and WISP nodes. In this view, the paper proposes cooperative solutions to energize the WISP devices in a wide-area sensing network while reducing the data collection delay. The main novelty is the fact that both data transmissions and energy transfer are based on the RFID technology only: RFID mobile readers gather data from the WISP devices, wirelessly recharge them, and mutually cooperate to reduce the data delivery delay to the sink. Communication between mobile readers relies on two proposed solutions: a tag-based relay scheme, where RFID tags are exploited to temporarily store sensed data at pre-determined contact points between the readers; and a tag-based data channel scheme, where the WISPs are used as a virtual communication channel for real time data transfer between the readers. Both solutions require: (i) clustering the WISP nodes; (ii) dimensioning the number of required RFID mobile readers; (iii) planning the tour of the readers under the energy and time constraints of the nodes. A simulative analysis demonstrates the effectiveness of the proposed solutions when compared to non-cooperative approaches. Differently from classic schemes in the literature, the solutions proposed in this paper better cope with scalability issues, which is of utmost importance for wide area networks.
Improving charging capacity for wireless sensor networks by deploying one mobile vehicle with multiple removable chargers. Wireless energy transfer is a promising technology to prolong the lifetime of wireless sensor networks (WSNs), by employing charging vehicles to replenish energy to lifetime-critical sensors. Existing studies on sensor charging assumed that one or multiple charging vehicles are deployed. Such an assumption may have its limitations for a real sensor network. On one hand, it usually is insufficient to employ just one vehicle to charge many sensors in a large-scale sensor network due to the limited charging capacity of the vehicle or energy expirations of some sensors prior to the arrival of the charging vehicle. On the other hand, although the employment of multiple vehicles can significantly improve the charging capability, it is too costly in terms of the initial investment and maintenance costs on these vehicles. In this paper, we propose a novel charging model in which a charging vehicle can carry multiple low-cost removable chargers and each charger is powered by a portable high-volume battery. When there are energy-critical sensors to be charged, the vehicle can carry the chargers to charge multiple sensors simultaneously, by placing one portable charger in the vicinity of one sensor. Under this novel charging model, we study the scheduling problem of the charging vehicle so that both the dead duration of sensors and the total travel distance of the mobile vehicle per tour are minimized. Since this problem is NP-hard, we instead propose a (3+ϵ)-approximation algorithm if the residual lifetime of each sensor can be ignored; otherwise, we devise a novel heuristic algorithm, where ϵ is a given constant with 0 < ϵ ≤ 1. Finally, we evaluate the performance of the proposed algorithms through experimental simulations. Experimental results show that the performance of the proposed algorithms is very promising.
Speed control of mobile chargers serving wireless rechargeable networks. Wireless rechargeable networks have attracted increasing research attention in recent years. For charging service, a mobile charger is often employed to move across the network and charge all network nodes. To reduce the charging completion time, most existing works have used the “move-then-charge” model where the charger first moves to specific spots and then starts charging nodes nearby. As a result, these works often aim to reduce the moving delay or charging delay at the spots. However, the charging opportunity on the move is largely overlooked because the charger can charge network nodes while moving, which as we analyze in this paper, has the potential to greatly reduce the charging completion time. The major challenge to exploit the charging opportunity is the setting of the moving speed of the charger. When the charger moves slow, the charging delay will be reduced (more energy will be charged during the movement) but the moving delay will increase. To deal with this challenge, we formulate the problem of delay minimization as a Traveling Salesman Problem with Speed Variations (TSP-SV) which jointly considers both charging and moving delay. We further solve the problem using linear programming to generate (1) the moving path of the charger, (2) the moving speed variations on the path and (3) the stay time at each charging spot. We also discuss possible ways to reduce the calculation complexity. Extensive simulation experiments are conducted to study the delay performance under various scenarios. The results demonstrate that our proposed method achieves much less completion time compared to the state-of-the-art work.
A Prediction-Based Charging Policy and Interference Mitigation Approach in the Wireless Powered Internet of Things The Internet of Things (IoT) technology has recently drawn more attention due to its ability to achieve the interconnection of massive physical devices. However, how to provide a reliable power supply to energy-constrained devices and improve the energy efficiency in the wireless powered IoT (WP-IoT) is a twofold challenge. In this paper, we develop a novel wireless power transmission (WPT) system, where an unmanned aerial vehicle (UAV) equipped with a radio frequency energy transmitter charges the IoT devices. A machine learning framework of echo state networks together with an improved k-means clustering algorithm is used to predict the energy consumption and cluster all the sensor nodes at the next period, thus automatically determining the charging strategy. The energy obtained from the UAV by WPT supports the IoT devices to communicate with each other. In order to improve the energy efficiency of the WP-IoT system, the interference mitigation problem is modeled as a mean field game, where an optimal power control policy is presented to adapt to and analyze the large number of sensor nodes randomly deployed in WP-IoT. The numerical results verify that our proposed dynamic charging policy effectively reduces the data packet loss rate, and that the optimal power control policy greatly mitigates the interference and improves the energy efficiency of the whole network.
Design of Self-sustainable Wireless Sensor Networks with Energy Harvesting and Wireless Charging Energy provisioning plays a key role in the sustainable operations of Wireless Sensor Networks (WSNs). Recent efforts deploy multi-source energy harvesting sensors to utilize ambient energy. Meanwhile, wireless charging is a reliable energy source not affected by spatial-temporal ambient dynamics. This article integrates multiple energy provisioning strategies and adaptive adjustment to accomplish self-sustainability under complex weather conditions. We design and optimize a three-tier framework with the first two tiers focusing on the planning problems of sensors with various types and distributed energy storage powered by environmental energy. Then we schedule the Mobile Chargers (MC) between different charging activities and propose an efficient, 4-factor approximation algorithm. Finally, we adaptively adjust the algorithms to capture real-time energy profiles and jointly optimize those correlated modules. Our extensive simulations demonstrate significant improvement of network lifetime, increase of harvested energy (15%), reduction of network cost (30%), and the charging capability of MC by 100%.
Minimizing the Maximum Charging Delay of Multiple Mobile Chargers Under the Multi-Node Energy Charging Scheme Wireless energy charging has emerged as a very promising technology for prolonging sensor lifetime in wireless rechargeable sensor networks (WRSNs). Existing studies focused mainly on the one-to-one charging scheme that a single sensor can be charged by a mobile charger at each time, this charging scheme however suffers from poor charging scalability and inefficiency. Recently, another charging scheme, the multi-node charging scheme that allows multiple sensors to be charged simultaneously by a mobile charger, becomes dominant, which can mitigate charging scalability and improve charging efficiency. However, most previous studies on this multi-node energy charging scheme focused on the use of a single mobile charger to charge multiple sensors simultaneously. For large scale WRSNs, it is insufficient to deploy only a single mobile charger to charge many lifetime-critical sensors, and consequently sensor expiration durations will increase dramatically. To charge many lifetime-critical sensors in large scale WRSNs as early as possible, it is inevitable to adopt multiple mobile chargers for sensor charging that can not only speed up sensor charging but also reduce expiration times of sensors. This however poses great challenges to fairly schedule the multiple mobile chargers such that the longest charging delay among sensors is minimized. One important constraint is that no sensor can be charged by more than one mobile charger at any time due to the fact that the sensor cannot receive any energy from either of the chargers or the overcharging will damage the recharging battery of the sensor. Thus, finding a closed charge tour for each of the multiple chargers such that the longest charging delay is minimized is crucial. In this paper we address the challenge by formulating a novel longest charging delay minimization problem. We first show that the problem is NP-hard. We then devise the very first approximation algorithm with a provable approximation ratio for the problem. We finally evaluate the performance of the proposed algorithms through experimental simulations. Experimental results demonstrate that the proposed algorithm is promising, and outperforms existing algorithms in various settings.
NETWRAP: An NDN Based Real-Time Wireless Recharging Framework for Wireless Sensor Networks Using vehicles equipped with wireless energy transmission technology to recharge sensor nodes over the air is a game-changer for traditional wireless sensor networks. The recharging policy regarding when to recharge which sensor nodes critically impacts the network performance. So far only a few works have studied such recharging policy for the case of using a single vehicle. In this paper, we propose NETWRAP, an NDN based Real-Time Wireless Recharging Protocol for dynamic wireless recharging in sensor networks. The real-time recharging framework supports single or multiple mobile vehicles. Employing multiple mobile vehicles provides more scalability and robustness. To efficiently deliver sensor energy status information to vehicles in real-time, we leverage concepts and mechanisms from named data networking (NDN) and design energy monitoring and reporting protocols. We derive theoretical results on the energy neutral condition and the minimum number of mobile vehicles required for perpetual network operations. Then we study how to minimize the total traveling cost of vehicles while guaranteeing all the sensor nodes can be recharged before their batteries deplete. We formulate the recharge optimization problem into a Multiple Traveling Salesman Problem with Deadlines (m-TSP with Deadlines), which is NP-hard. To accommodate the dynamic nature of node energy conditions with low overhead, we present an algorithm that selects the node with the minimum weighted sum of traveling time and residual lifetime. Our scheme not only improves network scalability but also ensures the perpetual operation of networks. Extensive simulation results demonstrate the effectiveness and efficiency of the proposed design. The results also validate the correctness of the theoretical analysis and show significant improvements that cut the number of nonfunctional nodes by half compared to the static scheme while maintaining the network overhead at the same level.
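The selection rule quoted above (pick the node with the minimum weighted sum of traveling time and residual lifetime) can be sketched as a simple greedy scheduler; the weighting factor, travel speed, and node data below are made-up illustrative values.

```python
import math

def travel_time(pos_a, pos_b, speed=5.0):
    """Straight-line travel time between two 2-D positions at a fixed speed."""
    return math.dist(pos_a, pos_b) / speed

def pick_next_node(charger_pos, nodes, alpha=0.5):
    """nodes: {name: {'pos': (x, y), 'lifetime': seconds until battery depletion}}.
    Return the node minimizing alpha*travel_time + (1 - alpha)*residual_lifetime,
    i.e., prefer nodes that are both close and close to dying."""
    def score(item):
        name, info = item
        return alpha * travel_time(charger_pos, info["pos"]) + (1 - alpha) * info["lifetime"]
    return min(nodes.items(), key=score)[0]

if __name__ == "__main__":
    nodes = {
        "s1": {"pos": (10.0, 0.0), "lifetime": 300.0},
        "s2": {"pos": (40.0, 30.0), "lifetime": 60.0},   # far away but about to die
        "s3": {"pos": (5.0, 5.0), "lifetime": 500.0},
    }
    print(pick_next_node((0.0, 0.0), nodes))   # expected: "s2"
```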
Hierarchical mesh segmentation based on fitting primitives In this paper, we describe a hierarchical face clustering algorithm for triangle meshes based on fitting primitives belonging to an arbitrary set. The method proposed is completely automatic, and generates a binary tree of clusters, each of which is fitted by one of the primitives employed. Initially, each triangle represents a single cluster; at every iteration, all the pairs of adjacent clusters are considered, and the one that can be better approximated by one of the primitives forms a new single cluster. The approximation error is evaluated using the same metric for all the primitives, so that it makes sense to choose which is the most suitable primitive to approximate the set of triangles in a cluster. Based on this approach, we have implemented a prototype that uses planes, spheres and cylinders, and have found experimentally that for meshes made of 100K faces, the whole binary tree of clusters can be built in about 8 s on a standard PC. The framework described here has natural application in reverse engineering processes, but it has also been tested for surface denoising, feature recovery and character skinning.
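The merge decision described above hinges on a single approximation-error metric per primitive; below is a sketch of the simplest case, least-squares plane fitting via SVD, with the RMS point-to-plane distance standing in for the cluster cost (an assumption made for illustration).

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point cloud (N x 3).
    Returns (centroid, unit normal, RMS point-to-plane distance)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The plane normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    distances = centered @ normal
    return centroid, normal, float(np.sqrt(np.mean(distances ** 2)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Noisy samples of the plane z = 0.1x + 0.2y + 1.
    xy = rng.uniform(-1, 1, size=(200, 2))
    z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + 1.0 + rng.normal(scale=0.01, size=200)
    cloud = np.column_stack([xy, z])
    _, n, err = fit_plane(cloud)
    print(np.round(n, 3), round(err, 4))   # small RMS error expected
```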
Movie2Comics: Towards a Lively Video Content Presentation As a type of artwork, comics is prevalent and popular around the world. However, despite the availability of assistive software and tools, the creation of comics is still a labor-intensive and time-consuming process. This paper proposes a scheme that is able to automatically turn a movie clip into comics. Two principles are followed in the scheme: 1) optimizing the information preservation of the movie; and 2) generating outputs following the rules and the styles of comics. The scheme mainly contains three components: script-face mapping, descriptive picture extraction, and cartoonization. The script-face mapping utilizes face tracking and recognition techniques to accomplish the mapping between characters' faces and their scripts. The descriptive picture extraction then generates a sequence of frames for presentation. Finally, the cartoonization is accomplished via three steps: panel scaling, stylization, and comics layout design. Experiments are conducted on a set of movie clips and the results have demonstrated the usefulness and the effectiveness of the scheme.
Parallel Multi-Block ADMM with o(1/k) Convergence This paper introduces a parallel and distributed algorithm for solving the following minimization problem with linear constraints: minimize $f_1(\mathbf{x}_1) + \cdots + f_N(\mathbf{x}_N)$ subject to $A_1 \mathbf{x}_1 + \cdots + A_N \mathbf{x}_N = c$, $\mathbf{x}_1 \in \mathcal{X}_1, \ldots, \mathbf{x}_N \in \mathcal{X}_N$, where $N \ge 2$, $f_i$ are convex functions, $A_i$ are matrices, and $\mathcal{X}_i$ are feasible sets for variable $\mathbf{x}_i$. Our algorithm extends the alternating direction method of multipliers (ADMM) and decomposes the original problem into N smaller subproblems and solves them in parallel at each iteration. This paper shows that the classic ADMM can be extended to the N-block Jacobi fashion and preserve convergence in the following two cases: (i) matrices $A_i$ are mutually near-orthogonal and have full column-rank, or (ii) proximal terms are added to the N subproblems (but without any assumption on matrices $A_i$). In the latter case, certain proximal terms can let the subproblem be solved in more flexible and efficient ways. We show that $\Vert \mathbf{x}^{k+1} - \mathbf{x}^k \Vert_M^2$ converges at a rate of o(1/k), where M is a symmetric positive semi-definite matrix. Since the parameters used in the convergence analysis are conservative, we introduce a strategy for automatically tuning the parameters to substantially accelerate our algorithm in practice. We implemented our algorithm (for case (ii) above) on Amazon EC2 and tested it on basis pursuit problems with 300 GB of distributed data. This is the first time that successfully solving a compressive sensing problem of such a large scale is reported.
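Since this abstract describes a concrete iterative scheme (Jacobi-style parallel block updates with proximal terms and a damped dual step), a small numerical sketch may help make the update explicit. The NumPy code below is an illustration under simplifying assumptions, not the paper's implementation: the blocks are simple quadratics f_i(x_i) = 0.5*||x_i - b_i||^2 so each subproblem reduces to a small linear solve, and the penalty rho, proximal weight tau and dual damping gamma are arbitrary illustrative values.

```python
import numpy as np

def proximal_jacobi_admm(A, b, c, rho=1.0, tau=5.0, gamma=0.5, iters=300):
    """Jacobi-type multi-block ADMM with proximal terms (sketch).

    Solves: minimize sum_i 0.5*||x_i - b_i||^2  s.t.  sum_i A_i x_i = c,
    updating all N blocks in parallel from the previous iterate.
    """
    N = len(A)
    m = c.shape[0]
    x = [np.zeros(Ai.shape[1]) for Ai in A]
    lam = np.zeros(m)
    for _ in range(iters):
        Ax = sum(A[i] @ x[i] for i in range(N))
        x_new = []
        for i in range(N):
            r_i = Ax - A[i] @ x[i]            # contribution of the other blocks, held fixed
            lhs = (1.0 + tau) * np.eye(A[i].shape[1]) + rho * A[i].T @ A[i]
            rhs = b[i] + tau * x[i] - A[i].T @ lam - rho * A[i].T @ (r_i - c)
            x_new.append(np.linalg.solve(lhs, rhs))
        x = x_new                              # all blocks updated "in parallel"
        lam = lam + gamma * rho * (sum(A[i] @ x[i] for i in range(N)) - c)
    return x, lam

# toy instance with N = 3 blocks
rng = np.random.default_rng(0)
A = [rng.standard_normal((4, 3)) for _ in range(3)]
b = [rng.standard_normal(3) for _ in range(3)]
c = rng.standard_normal(4)
x, lam = proximal_jacobi_admm(A, b, c)
print("constraint residual:", np.linalg.norm(sum(Ai @ xi for Ai, xi in zip(A, x)) - c))
```

In the paper's setting (e.g. basis pursuit) each block update would instead call the corresponding proximal operator; the structure of the iteration stays the same.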
Deep Continuous Fusion For Multi-Sensor 3D Object Detection In this paper, we propose a novel 3D object detector that can exploit both LIDAR as well as cameras to perform very accurate localization. Towards this goal, we design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution. Our proposed continuous fusion layer encodes both discrete-state image features as well as continuous geometric information. This enables us to design a novel, reliable and efficient end-to-end learnable 3D object detector based on multiple sensors. Our experimental evaluation on both KITTI as well as a large scale 3D object detection benchmark shows significant improvements over the state of the art.
Stochastic QoE-aware optimization of multisource multimedia content delivery for mobile cloud The increasing popularity of mobile video streaming in wireless networks has stimulated growing demands for efficient video streaming services. However, due to the time-varying throughput and user mobility, it is still difficult to provide high quality video services for mobile users. Our proposed optimization method considers key factors such as video quality, bitrate level, and quality variations to enhance quality of experience over wireless networks. The mobile network and device parameters are estimated in order to deliver the best quality video for the mobile user. We develop a rate adaptation algorithm using Lyapunov optimization for multi-source multimedia content delivery to minimize the video rate switches and provide higher video quality. The multi-source manager algorithm is developed to select the best stream based on the path quality for each path. The node joining and cluster head election mechanism update the node information. As the proposed approach selects the optimal path, it also achieves fairness and stability among clients. The quality of experience feature metrics like bitrate level, rebuffering events, and bitrate switch frequency are employed to assess video quality. We also employ objective video quality assessment methods like VQM, MS-SSIM, and SSIMplus for video quality measurement closer to human visual assessment. Numerical results show the effectiveness of the proposed method as compared to the existing state-of-the-art methods in providing quality of experience and bandwidth utilization.
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0.1
0.02
0
0
0
0
0
Unsupervised transfer learning for anomaly detection: Application to complementary operating condition transfer In industrial applications, anomaly detectors are trained to raise alarms when measured samples deviate from the training data distribution. The samples used to train the model should, therefore, be sufficient in quantity and representative of all healthy operating conditions. However, for systems subject to changing operating conditions, acquiring such comprehensive datasets requires a long collection period.
Squeezed Convolutional Variational AutoEncoder for unsupervised anomaly detection in edge device industrial Internet of Things In this paper, we propose Squeezed Convolutional Variational AutoEncoder (SCVAE) for anomaly detection in time series data for Edge Computing in Industrial Internet of Things (IIoT). The proposed model is applied to labeled time series data from UCI datasets for exact performance evaluation, and applied to real world data for indirect model performance comparison. In addition, by comparing the models before and after applying Fire Modules from SqueezeNet, we show that model size and inference times are reduced while similar levels of performance are maintained.
Time Series Anomaly Detection for Trustworthy Services in Cloud Computing Systems As a powerful architecture for large-scale computation, cloud computing has revolutionized the way that computing infrastructure is abstracted and utilized. Coupled with the challenges caused by Big Data, the rocketing development of cloud computing boosts the complexity of system management and maintenance, resulting in weakened trustworthiness of cloud services. To cope with this problem, a comp...
Deep Learning Based Anomaly Detection in Water Distribution Systems Water distribution system (WDS) is one of the most essential infrastructures all over the world. However, incidents such as natural disasters, accidents and intentional damages are endangering the safety of drinking water. With the advance of sensor technologies, different kinds of sensors are being deployed to monitor operative and quality indicators such as flow rate, pH, turbidity, the amount of chlorine dioxide etc. This brings the possibility to detect anomalies in real time based on the data collected from the sensors, and different kinds of methods have been applied to tackle this task, such as the traditional machine learning methods (e.g. logistic regression, support vector machine, random forest). Recently, researchers tried to apply the deep learning methods (e.g. RNN, CNN) for WDS anomaly detection, but the results are worse than those of the traditional machine learning methods. In this paper, by taking into account the characteristics of the WDS monitoring data, we integrate sequence-to-point learning and data balancing with the deep learning model Long Short-term Memory (LSTM) for the task of anomaly detection in WDSs. With a public data set, we show that by choosing an appropriate input length and balancing the training data, our approach achieves a better F1 score than the state-of-the-art method in the literature.
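The two ingredients named here, sequence-to-point windowing and data balancing, are easy to make concrete. The snippet below is a minimal NumPy sketch under assumed settings (window length 60, five synthetic sensor channels, simple oversampling of the minority class); the resulting windows would then be fed to an LSTM classifier, which is omitted here.

```python
import numpy as np

def make_sequence_to_point(X, y, window=60):
    """Turn a multivariate sensor stream into (window -> single label) samples.

    Each sample is the last `window` readings; the target is the label of the
    final time step, matching a sequence-to-point setup.
    """
    xs, ys = [], []
    for t in range(window, len(X)):
        xs.append(X[t - window:t])
        ys.append(y[t])
    return np.asarray(xs), np.asarray(ys)

def balance_by_oversampling(xs, ys, seed=0):
    """Oversample the minority (anomalous) class to match the majority size."""
    rng = np.random.default_rng(seed)
    pos, neg = np.flatnonzero(ys == 1), np.flatnonzero(ys == 0)
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
    idx = rng.permutation(np.concatenate([majority, minority, extra]))
    return xs[idx], ys[idx]

# toy stream: 5 synthetic sensor channels with sparse anomaly labels
rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 5))
y = (rng.random(2000) < 0.02).astype(int)
xs, ys = make_sequence_to_point(X, y, window=60)
xs_bal, ys_bal = balance_by_oversampling(xs, ys)
print(xs_bal.shape, ys_bal.mean())  # balanced windows ready for an LSTM classifier
```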
Explainable AI: A Review Of Machine Learning Interpretability Methods Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption, with machine learning systems demonstrating superhuman performance in a significant number of tasks. However, this surge in performance has often been achieved through increased model complexity, turning such systems into "black box" approaches and causing uncertainty regarding the way they operate and, ultimately, the way that they come to decisions. This ambiguity has made it problematic for machine learning systems to be adopted in sensitive yet critical domains, where their value could be immense, such as healthcare. As a result, scientific interest in the field of Explainable Artificial Intelligence (XAI), a field that is concerned with the development of new methods that explain and interpret machine learning models, has been tremendously reignited over recent years. This study focuses on machine learning interpretability methods; more specifically, a literature review and taxonomy of these methods are presented, as well as links to their programming implementations, in the hope that this survey would serve as a reference point for both theorists and practitioners.
The Dangers of Post-hoc Interpretability - Unjustified Counterfactual Explanations. Post-hoc interpretability approaches have been proven to be powerful tools to generate explanations for the predictions made by a trained black-box model. However, they create the risk of having explanations that are a result of some artifacts learned by the model instead of actual knowledge from the data. This paper focuses on the case of counterfactual explanations and asks whether the generated instances can be justified, i.e. continuously connected to some ground-truth data. We evaluate the risk of generating unjustified counterfactual examples by investigating the local neighborhoods of instances whose predictions are to be explained and show that this risk is quite high for several datasets. Furthermore, we show that most state of the art approaches do not differentiate justified from unjustified counterfactual examples, leading to less useful explanations.
Hamming Embedding and Weak Geometric Consistency for Large Scale Image Search This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows us to further improve the accuracy.
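To make the Hamming-embedding idea concrete, here is a toy NumPy sketch: descriptors assigned to a visual word are projected with a random matrix, binarized against per-dimension medians, and candidate matches are kept only if their Hamming distance to the query signature is small. This is a simplification of the actual method, which learns per-visual-word thresholds on an independent training set and integrates the signatures into an inverted file; all sizes and thresholds below are illustrative.

```python
import numpy as np

def train_hamming_embedding(descs, n_bits=32, seed=0):
    """Fit a toy Hamming embedding: random projection plus per-dimension medians."""
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((descs.shape[1], n_bits))
    thresholds = np.median(descs @ P, axis=0)   # in the full method: per visual word
    return P, thresholds

def signature(desc, P, thresholds):
    """Binary signature: which projected components exceed their median."""
    return (desc @ P > thresholds).astype(np.uint8)

def hamming_match(sig_q, sig_db, max_dist=8):
    """Keep only database descriptors whose Hamming distance is small enough."""
    dists = np.count_nonzero(sig_db != sig_q, axis=1)
    return np.flatnonzero(dists <= max_dist)

# toy data: 128-D descriptors assumed to fall in the same visual word
rng = np.random.default_rng(2)
db = rng.standard_normal((500, 128))
P, th = train_hamming_embedding(db, n_bits=32)
sig_db = np.array([signature(d, P, th) for d in db])
query = db[0] + 0.05 * rng.standard_normal(128)   # noisy copy of a database descriptor
matches = hamming_match(signature(query, P, th), sig_db, max_dist=8)
print(len(matches), 0 in matches)
```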
Microsoft COCO: Common Objects In Context We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4-year-old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
The Whale Optimization Algorithm. The Whale Optimization Algorithm inspired by humpback whales is proposed. The WOA algorithm is benchmarked on 29 well-known test functions. The results on the unimodal functions show the superior exploitation of WOA. The exploration ability of WOA is confirmed by the results on multimodal functions. The results on structural design problems confirm the performance of WOA in practice. This paper proposes a novel nature-inspired meta-heuristic optimization algorithm, called Whale Optimization Algorithm (WOA), which mimics the social behavior of humpback whales. The algorithm is inspired by the bubble-net hunting strategy. WOA is tested with 29 mathematical optimization problems and 6 structural design problems. Optimization results prove that the WOA algorithm is very competitive compared to the state-of-art meta-heuristic algorithms as well as conventional methods. The source codes of the WOA algorithm are publicly available at http://www.alimirjalili.com/WOA.html
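The WOA update rules are compact enough to sketch directly: whales either encircle the current best solution, search around a randomly chosen whale, or follow a logarithmic spiral toward the best, with the coefficient a decreasing linearly from 2 to 0. The following Python sketch minimizes a toy sphere function; the population size, iteration count and the component-wise treatment of the coefficient vector A are implementation choices that vary across WOA codes, not values taken from the paper.

```python
import numpy as np

def whale_optimization(obj, dim, bounds, n_whales=30, iters=200, seed=0):
    """Minimal Whale Optimization Algorithm sketch (encircle / search / spiral)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_whales, dim))
    fitness = np.apply_along_axis(obj, 1, X)
    best, best_f = X[np.argmin(fitness)].copy(), fitness.min()
    b = 1.0                                   # spiral shape constant
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters             # decreases linearly from 2 to 0
        for i in range(n_whales):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):     # exploit: encircle the best solution
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                         # explore: move relative to a random whale
                    rand = X[rng.integers(n_whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                             # bubble-net spiral around the best
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
            f = obj(X[i])
            if f < best_f:
                best, best_f = X[i].copy(), f
    return best, best_f

# toy run on the sphere function
best, best_f = whale_optimization(lambda x: float(np.sum(x ** 2)), dim=10, bounds=(-5, 5))
print(best_f)
```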
Pors: proofs of retrievability for large files In this paper, we define and explore proofs of retrievability (PORs). A POR scheme enables an archive or back-up service (prover) to produce a concise proof that a user (verifier) can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety. A POR may be viewed as a kind of cryptographic proof of knowledge (POK), but one specially designed to handle a large file (or bitstring) F. We explore POR protocols here in which the communication costs, number of memory accesses for the prover, and storage requirements of the user (verifier) are small parameters essentially independent of the length of F. In addition to proposing new, practical POR constructions, we explore implementation considerations and optimizations that bear on previously explored, related schemes. In a POR, unlike a POK, neither the prover nor the verifier need actually have knowledge of F. PORs give rise to a new and unusual security definition whose formulation is another contribution of our work. We view PORs as an important tool for semi-trusted online archives. Existing cryptographic techniques help users ensure the privacy and integrity of files they retrieve. It is also natural, however, for users to want to verify that archives do not delete or modify files prior to retrieval. The goal of a POR is to accomplish these checks without users having to download the files themselves. A POR can also provide quality-of-service guarantees, i.e., show that a file is retrievable within a certain time bound.
On controller initialization in multivariable switching systems We consider a class of switched systems which consists of a linear MIMO and possibly unstable process in feedback interconnection with a multicontroller whose dynamics switch. It is shown how one can achieve significantly better transient performance by selecting the initial condition for every controller when it is inserted into the feedback loop. This initialization is obtained by performing the minimization of a quadratic cost function of the tracking error, controlled output, and control signal. We guarantee input-to-state stability of the closed-loop system when the average number of switches per unit of time is smaller than a specific value. If this is not the case then stability can still be achieved by adding a mild constraint to the optimization. We illustrate the use of our results in the control of a flexible beam actuated in torque. This system is unstable with two poles at the origin and contains several lightly damped modes, which can be easily excited by controller switching.
Completely Pinpointing the Missing RFID Tags in a Time-Efficient Way Radio Frequency Identification (RFID) technology has been widely used in inventory management in many scenarios, e.g., warehouses, retail stores, hospitals, etc. This paper investigates a challenging problem of complete identification of missing tags in large-scale RFID systems. Although this problem has attracted extensive attention from academy and industry, the existing work can hardly satisfy the stringent real-time requirements. In this paper, a Slot Filter-based Missing Tag Identification (SFMTI) protocol is proposed to reconcile some expected collision slots into singleton slots and filter out the expected empty slots as well as the unreconcilable collision slots, thereby achieving the improved time-efficiency. The theoretical analysis is conducted to minimize the execution time of the proposed SFMTI. We then propose a cost-effective method to extend SFMTI to the multi-reader scenarios. The extensive simulation experiments and performance results demonstrate that the proposed SFMTI protocol outperforms the most promising Iterative ID-free Protocol (IIP) by reducing nearly 45% of the required execution time, and is just within a factor of 1.18 from the lower bound of the minimum execution time.
Modeling taxi driver anticipatory behavior. As part of a wider behavioral agent-based model that simulates taxi drivers' dynamic passenger-finding behavior under uncertainty, we present a model of strategic behavior of taxi drivers in anticipation of substantial time varying demand at locations such as airports and major train stations. The model assumes that, considering a particular decision horizon, a taxi driver decides to transfer to such a destination based on a reward function. The dynamic uncertainty of demand is captured by a time dependent pick-up probability, which is a cumulative distribution function of waiting time. The model allows for information learning by which taxi drivers update their beliefs from past experiences. A simulation on a real road network, applied to test the model, indicates that the formulated model dynamically improves passenger-finding strategies at the airport. Taxi drivers learn when to transfer to the airport in anticipation of the time-varying demand at the airport to minimize their waiting time.
Convert Harm Into Benefit: A Coordination-Learning Based Dynamic Spectrum Anti-Jamming Approach This paper mainly investigates the multi-user anti-jamming spectrum access problem. Using the idea of “converting harm into benefit,” the malicious jamming signals projected by the enemy are utilized by the users as the coordination signals to guide spectrum coordination. An “internal coordination-external confrontation” multi-user anti-jamming access game model is constructed, and the existence of Nash equilibrium (NE) as well as correlated equilibrium (CE) is demonstrated. A coordination-learning based anti-jamming spectrum access algorithm (CLASA) is designed to achieve the CE of the game. Simulation results show the convergence, and effectiveness of the proposed CLASA algorithm, and indicate that our approach can help users confront the malicious jammer, and coordinate internal spectrum access simultaneously without information exchange. Last but not least, the fairness of the proposed approach under different jamming attack patterns is analyzed, which illustrates that this approach provides fair anti-jamming spectrum access opportunities under complicated jamming pattern.
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
Dynamic Trajectory Planning for Vehicle Autonomous Driving Trajectory planning is one of the key and challenging tasks in autonomous driving. This paper proposes a novel method that dynamically plans trajectories, with the aim to achieve quick and safe reaction to the changing driving environment and optimal balance between vehicle performance and driving comfort. With the proposed method, such complex maneuvers can be decomposed into two sub-maneuvers, i.e., lane change and lane keeping, or their combinations, such that the trajectory planning is generalized and simplified, mainly based on lane change maneuvers. A two fold optimization-based method is proposed for stationary trajectory planning as well as dynamic trajectory planning in the presence of a dynamic traffic environment. Simulation is conducted to demonstrate the efficiency and effectiveness of the proposed method.
Using Ontology-Based Traffic Models for More Efficient Decision Making of Autonomous Vehicles The paper describes how a high-level abstract world model can be used to support the decision-making process of an autonomous driving system. The approach uses a hierarchical world model and distinguishes between a low-level model for the trajectory planning and a high-level model for solving the traffic coordination problem. The abstract world model used in the CyberCars-2 project is presented. It is based on a topological lane segmentation and introduces relations to represent the semantic context of the traffic scenario. This makes it much easier to realize a consistent and complete driving control system, and to analyze, evaluate and simulate such a system.
Ontology-based methods for enhancing autonomous vehicle path planning We report the results of a first implementation demonstrating the use of an ontology to support reasoning about obstacles to improve the capabilities and performance of on-board route planning for autonomous vehicles. This is part of an overall effort to evaluate the performance of ontologies in different components of an autonomous vehicle within the 4D/RCS system architecture developed at NIST. Our initial focus has been on simple roadway driving scenarios where the controlled vehicle encounters potential obstacles in its path. As reported elsewhere [C. Schlenoff, S. Balakirsky, M. Uschold, R. Provine, S. Smith, Using ontologies to aid navigation planning in autonomous vehicles, Knowledge Engineering Review 18 (3) (2004) 243–255], our approach is to develop an ontology of objects in the environment, in conjunction with rules for estimating the damage that would be incurred by collisions with different objects in different situations. Automated reasoning is used to estimate collision damage; this information is fed to the route planner to help it decide whether to plan to avoid the object. We describe the results of the first implementation that integrates the ontology, the reasoner and the planner. We describe our insights and lessons learned and discuss resulting changes to our approach.
Extracting Traffic Primitives Directly from Naturalistically Logged Data for Self-Driving Applications. Developing an automated vehicle, that can handle complicated driving scenarios and appropriately interact with other road users, requires the ability to semantically learn and understand driving environment, oftentimes, based on analyzing massive amounts of naturalistic driving data. An important paradigm that allows automated vehicles to both learn from human drivers and gain insights is understa...
DeepRoad: GAN-based metamorphic testing and input validation framework for autonomous driving systems. While Deep Neural Networks (DNNs) have established the fundamentals of image-based autonomous driving systems, they may exhibit erroneous behaviors and cause fatal accidents. To address the safety issues in autonomous driving systems, a recent set of testing techniques have been designed to automatically generate artificial driving scenes to enrich test suite, e.g., generating new input images transformed from the original ones. However, these techniques are insufficient due to two limitations: first, many such synthetic images often lack diversity of driving scenes, and hence compromise the resulting efficacy and reliability. Second, for machine-learning-based systems, a mismatch between training and application domain can dramatically degrade system accuracy, such that it is necessary to validate inputs for improving system robustness. In this paper, we propose DeepRoad, an unsupervised DNN-based framework for automatically testing the consistency of DNN-based autonomous driving systems and online validation. First, DeepRoad automatically synthesizes large amounts of diverse driving scenes without using image transformation rules (e.g. scale, shear and rotation). In particular, DeepRoad is able to produce driving scenes with various weather conditions (including those with rather extreme conditions) by applying Generative Adversarial Networks (GANs) along with the corresponding real-world weather scenes. Second, DeepRoad utilizes metamorphic testing techniques to check the consistency of such systems using synthetic images. Third, DeepRoad validates input images for DNN-based systems by measuring the distance of the input and training images using their VGGNet features. We implement DeepRoad to test three well-recognized DNN-based autonomous driving systems in Udacity self-driving car challenge. The experimental results demonstrate that DeepRoad can detect thousands of inconsistent behaviors for these systems, and effectively validate input images to potentially enhance the system robustness as well.
Automatically testing self-driving cars with search-based procedural content generation Self-driving cars rely on software which needs to be thoroughly tested. Testing self-driving car software in real traffic is not only expensive but also dangerous, and has already caused fatalities. Virtual tests, in which self-driving car software is tested in computer simulations, offer a more efficient and safer alternative compared to naturalistic field operational tests. However, creating suitable test scenarios is laborious and difficult. In this paper we combine procedural content generation, a technique commonly employed in modern video games, and search-based testing, a testing technique proven to be effective in many domains, in order to automatically create challenging virtual scenarios for testing self-driving car software. Our AsFault prototype implements this approach to generate virtual roads for testing lane keeping, one of the defining features of autonomous driving. Evaluation on two different self-driving car software systems demonstrates that AsFault can generate effective virtual road networks that succeed in revealing software failures, which manifest as cars departing their lane. Compared to random testing AsFault was not only more efficient, but also caused up to twice as many lane departures.
Acclimatizing the Operational Design Domain for Autonomous Driving Systems The operational design domain (ODD) of an automated driving system (ADS) can be used to confine the environmental scope of where the ADS is safe to execute. ODD acclimatization is one of the necessary steps for validating vehicle safety in complex traffic environments. This article proposes an approach and architectural design to extract and enhance the ODD of the ADS based on the task scenario an...
Accelerated Evaluation of Automated Vehicles Safety in Lane-Change Scenarios Based on Importance Sampling Techniques Automated vehicles (AVs) must be thoroughly evaluated before their release and deployment. A widely used evaluation approach is the Naturalistic-Field Operational Test (N-FOT), which tests prototype vehicles directly on the public roads. Due to the low exposure to safety-critical scenarios, N-FOTs are time consuming and expensive to conduct. In this paper, we propose an accelerated evaluation approach for AVs. The results can be used to generate motions of the other primary vehicles to accelerate the verification of AVs in simulations and controlled experiments. Frontal collision due to unsafe cut-ins is the target crash type of this paper. Human-controlled vehicles making unsafe lane changes are modeled as the primary disturbance to AVs based on data collected by the University of Michigan Safety Pilot Model Deployment Program. The cut-in scenarios are generated based on skewed statistics of collected human driver behaviors, which generate risky testing scenarios while preserving the statistical information so that the safety benefits of AVs in nonaccelerated cases can be accurately estimated. The cross-entropy method is used to recursively search for the optimal skewing parameters. The frequencies of the occurrences of conflicts, crashes, and injuries are estimated for a modeled AV, and the achieved accelerated rate is around 2000 to 20 000. In other words, in the accelerated simulations, driving for 1000 miles will expose the AV with challenging scenarios that will take about 2 to 20 million miles of real-world driving to encounter. This technique thus has the potential to greatly reduce the development and validation time for AVs.
A survey of socially interactive robots This paper reviews “socially interactive robots”: robots for which social human–robot interaction is important. We begin by discussing the context for socially interactive robots, emphasizing the relationship to other research fields and the different forms of “social robots”. We then present a taxonomy of design methods and system components used to build socially interactive robots. Finally, we describe the impact of these robots on humans and discuss open issues. An expanded version of this paper, which contains a survey and taxonomy of current applications, is available as a technical report [T. Fong, I. Nourbakhsh, K. Dautenhahn, A survey of socially interactive robots: concepts, design and applications, Technical Report No. CMU-RI-TR-02-29, Robotics Institute, Carnegie Mellon University, 2002].
A General Equilibrium Model for Industries with Price and Service Competition This paper develops a stochastic general equilibrium inventory model for an oligopoly, in which all inventory constraint parameters are endogenously determined. We propose several systems of demand processes whose distributions are functions of all retailers' prices and all retailers' service levels. We proceed with the investigation of the equilibrium behavior of infinite-horizon models for industries facing this type of generalized competition, under demand uncertainty.We systematically consider the following three competition scenarios. (1) Price competition only: Here, we assume that the firms' service levels are exogenously chosen, but characterize how the price and inventory strategy equilibrium vary with the chosen service levels. (2) Simultaneous price and service-level competition: Here, each of the firms simultaneously chooses a service level and a combined price and inventory strategy. (3) Two-stage competition: The firms make their competitive choices sequentially. In a first stage, all firms simultaneously choose a service level; in a second stage, the firms simultaneously choose a combined pricing and inventory strategy with full knowledge of the service levels selected by all competitors. We show that in all of the above settings a Nash equilibrium of infinite-horizon stationary strategies exists and that it is of a simple structure, provided a Nash equilibrium exists in a so-called reduced game.We pay particular attention to the question of whether a firm can choose its service level on the basis of its own (input) characteristics (i.e., its cost parameters and demand function) only. We also investigate under which of the demand models a firm, under simultaneous competition, responds to a change in the exogenously specified characteristics of the various competitors by either: (i) adjusting its service level and price in the same direction, thereby compensating for price increases (decreases) by offering improved (inferior) service, or (ii) adjusting them in opposite directions, thereby simultaneously offering better or worse prices and service.
Load Scheduling and Dispatch for Aggregators of Plug-In Electric Vehicles This paper proposes an operating framework for aggregators of plug-in electric vehicles (PEVs). First, a minimum-cost load scheduling algorithm is designed, which determines the purchase of energy in the day-ahead market based on the forecast electricity price and PEV power demands. The same algorithm is applicable for negotiating bilateral contracts. Second, a dynamic dispatch algorithm is developed, used for distributing the purchased energy to PEVs on the operating day. Simulation results are used to evaluate the proposed algorithms, and to demonstrate the potential impact of an aggregated PEV fleet on the power system.
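The day-ahead purchase step described here is essentially a small linear program: buy energy in the cheapest hours subject to meeting the fleet's total demand and an hourly charging limit. Below is a toy sketch with scipy.optimize.linprog using entirely hypothetical prices, demand and capacity; it omits the paper's price forecasting and real-time dispatch stages.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical day-ahead hourly prices ($/kWh) and aggregate PEV demand (kWh).
prices = np.array([0.12, 0.10, 0.08, 0.07, 0.07, 0.09, 0.14, 0.20,
                   0.22, 0.21, 0.19, 0.18, 0.17, 0.17, 0.18, 0.20,
                   0.24, 0.28, 0.26, 0.22, 0.18, 0.15, 0.13, 0.12])
total_demand = 900.0          # energy the fleet must receive over the day (assumed)
hourly_cap = 80.0             # aggregate charging-power limit per hour (assumed)

# minimize sum_t price_t * e_t  s.t.  sum_t e_t = total_demand, 0 <= e_t <= cap
res = linprog(
    c=prices,
    A_eq=np.ones((1, 24)), b_eq=[total_demand],
    bounds=[(0.0, hourly_cap)] * 24,
    method="highs",
)
print("day-ahead cost:", round(res.fun, 2))
print("purchase schedule (kWh per hour):", np.round(res.x, 1))
```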
An Efficient Non-Negative Matrix-Factorization-Based Approach to Collaborative Filtering for Recommender Systems Matrix-factorization (MF)-based approaches prove to be highly accurate and scalable in addressing collaborative filtering (CF) problems. During the MF process, the non-negativity, which ensures good representativeness of the learnt model, is critically important. However, current non-negative MF (NMF) models are mostly designed for problems in computer vision, while CF problems differ from them due to their extreme sparsity of the target rating-matrix. Currently available NMF-based CF models are based on matrix manipulation and lack practicability for industrial use. In this work, we focus on developing an NMF-based CF model with a single-element-based approach. The idea is to investigate the non-negative update process depending on each involved feature rather than on the whole feature matrices. With the non-negative single-element-based update rules, we subsequently integrate the Tikhonov regularizing terms, and propose the regularized single-element-based NMF (RSNMF) model. RSNMF is especially suitable for solving CF problems subject to the constraint of non-negativity. The experiments on large industrial datasets show high accuracy and low-computational complexity achieved by RSNMF.
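A single-element-based non-negative update can be sketched as follows: each latent feature of P and Q is rescaled by a ratio of non-negative sums over the observed ratings, with a Tikhonov term in the denominator, so non-negativity is preserved without any matrix-level projection. This is a reconstruction of the general idea rather than the paper's exact update rule; the latent dimension, the regularization weight and the scaling of the regularizer by the number of observed ratings are assumptions.

```python
import numpy as np

def rsnmf_sketch(ratings, n_users, n_items, k=8, lam=0.05, epochs=100, seed=0):
    """Single-element non-negative MF on a sparse rating set (sketch).

    `ratings` is a list of (user, item, value) triples with value >= 0.
    Multiplicative-style updates keep P and Q non-negative; lam adds a
    Tikhonov-style regularization term per latent feature.
    """
    rng = np.random.default_rng(seed)
    P = rng.uniform(0.01, 0.1, (n_users, k))
    Q = rng.uniform(0.01, 0.1, (n_items, k))
    by_user = [[] for _ in range(n_users)]
    by_item = [[] for _ in range(n_items)]
    for u, i, r in ratings:
        by_user[u].append((i, r))
        by_item[i].append((u, r))
    for _ in range(epochs):
        for u in range(n_users):
            for f in range(k):
                if not by_user[u]:
                    continue
                num = sum(Q[i, f] * r for i, r in by_user[u])
                den = sum(Q[i, f] * (P[u] @ Q[i]) for i, _ in by_user[u])
                P[u, f] *= num / (den + lam * len(by_user[u]) * P[u, f] + 1e-12)
        for i in range(n_items):
            for f in range(k):
                if not by_item[i]:
                    continue
                num = sum(P[u, f] * r for u, r in by_item[i])
                den = sum(P[u, f] * (P[u] @ Q[i]) for u, _ in by_item[i])
                Q[i, f] *= num / (den + lam * len(by_item[i]) * Q[i, f] + 1e-12)
    return P, Q

# tiny synthetic example with a sparse, non-negative rating matrix
rng = np.random.default_rng(3)
true_P, true_Q = rng.uniform(0, 1, (50, 4)), rng.uniform(0, 1, (40, 4))
ratings = [(u, i, float(true_P[u] @ true_Q[i]))
           for u in range(50) for i in range(40) if rng.random() < 0.2]
P, Q = rsnmf_sketch(ratings, 50, 40)
rmse = np.sqrt(np.mean([(P[u] @ Q[i] - r) ** 2 for u, i, r in ratings]))
print("train RMSE:", rmse)
```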
Driver Gaze Zone Estimation Using Convolutional Neural Networks: A General Framework and Ablative Analysis Driver gaze has been shown to be an excellent surrogate for driver attention in intelligent vehicles. With the recent surge of highly autonomous vehicles, driver gaze can be useful for determining the handoff time to a human driver. While there has been significant improvement in personalized driver gaze zone estimation systems, a generalized system which is invariant to different subjects, perspe...
Dual-objective mixed integer linear program and memetic algorithm for an industrial group scheduling problem Group scheduling problems have attracted much attention owing to their many practical applications. This work proposes a new bi-objective serial-batch group scheduling problem considering the constraints of sequence-dependent setup time, release time, and due time. It is originated from an important industrial process, i.e., wire rod and bar rolling process in steel production systems. Two objecti...
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0.066667
0
0
0
0
0
0
Derivative-Free Placement Optimization for Multi-UAV Wireless Networks with Channel Knowledge Map This paper studies a multi-UAV wireless network, in which multiple UAV users share the same spectrum to send individual messages to their respectively associated ground base stations (GBSs). The UAV users aim to optimize their locations to maximize the weighted sum rate. While most existing work considers simplified line-of-sight (LoS) or statistic air-to-ground (A2G) channel models, we exploit the location-specific channel knowledge map (CKM) to enhance the placement performance in practice. However, as the CKMs normally contain discrete site- and location-specific channel data without analytic model functions, the corresponding weighted sum rate function becomes non-differentiable in general. In this case, conventional optimization techniques relying on function derivatives are inapplicable to solve the resultant placement optimization problem. To address this issue, we propose a novel iterative algorithm based on the derivative-free optimization. In each iteration, we first construct a quadratic function to approximate the non-differentiable weighted sum rate under a set of interpolation conditions, and then update the UAVs' placement locations by maximizing the approximate quadratic function subject to a trust region constraint. Numerical results show the convergence of the proposed algorithm. It is also shown that the proposed algorithm achieves a weighted sum rate close to the optimal design based on exhaustive search with much lower implementation complexity, and it significantly outperforms the conventional optimization method based on simplified LoS channel models and the heuristic design with each UAV hovering above its associated GBS.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
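The core of the method, clipped (modified) n-gram precision combined with a brevity penalty, fits in a few lines. The sketch below is a bare sentence-level illustration in plain Python; real BLEU evaluations are corpus-level and typically smoothed, so the numbers it produces should not be compared with published scores.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU sketch: clipped n-gram precision + brevity penalty."""
    log_precisions = []
    for n in range(1, max_n + 1):
        cand = ngrams(candidate, n)
        if not cand:
            return 0.0
        # clip each candidate n-gram count by its maximum count in any single reference
        max_ref = Counter()
        for ref in references:
            for g, c in ngrams(ref, n).items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand.items())
        if clipped == 0:
            return 0.0
        log_precisions.append(math.log(clipped / sum(cand.values())))
    # brevity penalty against the reference closest in length
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(sum(log_precisions) / max_n)

cand = "the quick brown fox jumps over the lazy dog".split()
refs = ["the quick brown fox jumped over the lazy dog".split(),
        "a fast brown fox leaps over a lazy dog".split()]
print(round(bleu(cand, refs), 4))
```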
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended to Bob, and non-repudiation of receipt evidences destined to Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with trusted third party (TTP). For the latter ones we discuss the evolution of the TTP's involvement and, among others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probabilistically. Because there are no crossover rate and mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than the conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional one and other methods.
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
5G Virtualized Multi-access Edge Computing Platform for IoT Applications. The next generation of fifth generation (5G) network, which is implemented using Virtualized Multi-access Edge Computing (vMEC), Network Function Virtualization (NFV) and Software Defined Networking (SDN) technologies, is a flexible and resilient network that supports various Internet of Things (IoT) devices. While NFV provides flexibility by allowing network functions to be dynamically deployed and inter-connected, vMEC provides intelligence at the edge of the mobile network, reducing latency and increasing the available capacity. With the diverse development of networking applications, the proposed vMEC uses Container-based Virtualization Technology (CVT) as a gateway for IoT devices, with flow-control mechanisms for scheduling and analysis that effectively increase the application Quality of Service (QoS). In this work, the proposed IoT gateway is analyzed. The combined effect of simultaneously deploying Virtual Network Functions (VNFs) and vMEC applications on a single network infrastructure is examined; critically, the platform exhibits the low latency, high bandwidth and agility needed to connect devices at large scale. The proposed platform efficiently exploits resources from edge computing and cloud computing, and supports IoT applications that adapt to network conditions, reducing average end-to-end network latency by about 30%.
Analysis of Software Aging in a Web Server Several recent studies have reported and examined the phenomenon that long-running software systems show an increasing failure rate and/or a progressive degradation of their performance. Causes of this phenomenon, which has been referred to as "software aging", are the accumulation of internal error conditions, and the depletion of operating system resources. A proactive technique called "software r...
Container Network Functions: Bringing NFV to the Network Edge. In order to cope with the increasing network utilization driven by new mobile clients, and to satisfy demand for new network services and performance guarantees, telecommunication service providers are exploiting virtualization over their network by implementing network services in virtual machines, decoupled from legacy hardware accelerated appliances. This effort, known as NFV, reduces OPEX and ...
Lifetime Extension of Software Execution Subject to Aging Software aging is a phenomenon of progressive degradation of software execution environment caused by software faults. In this paper, we propose software life-extension as an operational countermeasure against software aging and present the mathematical foundations of software life-extension by means of stochastic modeling. A semi-Markov process is used to capture the behavior of a system with sof...
Model-Driven Availability Assessment of the NFV-MANO With Software Rejuvenation Network Function Virtualization enables network operators to modernize their networks with greater elasticity, network programmability, and scalability. Exploiting these advantages requires new and specialized designs for management, automation, and orchestration systems which are capable of reliably operating and handling new elements such as virtual functions, virtualized infrastructures, and a ...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
Massive MIMO for next generation wireless systems Multi-user MIMO offers big advantages over conventional point-to-point MIMO: it works with cheap single-antenna terminals, a rich scattering environment is not required, and resource allocation is simplified because every active terminal utilizes all of the time-frequency bins. However, multi-user MIMO, as originally envisioned, with roughly equal numbers of service antennas and terminals and frequency-division duplex operation, is not a scalable technology. Massive MIMO (also known as large-scale antenna systems, very large MIMO, hyper MIMO, full-dimension MIMO, and ARGOS) makes a clean break with current practice through the use of a large excess of service antennas over active terminals and time-division duplex operation. Extra antennas help by focusing energy into ever smaller regions of space to bring huge improvements in throughput and radiated energy efficiency. Other benefits of massive MIMO include extensive use of inexpensive low-power components, reduced latency, simplification of the MAC layer, and robustness against intentional jamming. The anticipated throughput depends on the propagation environment providing asymptotically orthogonal channels to the terminals, but so far experiments have not disclosed any limitations in this regard. While massive MIMO renders many traditional research problems irrelevant, it uncovers entirely new problems that urgently need attention: the challenge of making many low-cost low-precision components that work effectively together, acquisition and synchronization for newly joined terminals, the exploitation of extra degrees of freedom provided by the excess of service antennas, reducing internal power consumption to achieve total energy efficiency reductions, and finding new deployment scenarios. This article presents an overview of the massive MIMO concept and contemporary research on the topic.
Deep Residual Learning for Image Recognition Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers - 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
Communication theory of secrecy systems THE problems of cryptography and secrecy systems furnish an interesting application of communication theory.1 In this paper a theory of secrecy systems is developed. The approach is on a theoretical level and is intended to complement the treatment found in standard works on cryptography.2 There, a detailed study is made of the many standard types of codes and ciphers, and of the ways of breaking them. We will be more concerned with the general mathematical structure and properties of secrecy systems.
A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization In recent years, there has been a growing interest for the experimental analysis in the field of evolutionary algorithms. It is noticeable due to the existence of numerous papers which analyze and propose different types of problems, such as the basis for experimental comparisons of algorithms, proposals of different methodologies in comparison or proposals of use of different statistical techniques in algorithms’ comparison.In this paper, we focus our study on the use of statistical techniques in the analysis of evolutionary algorithms’ behaviour over optimization problems. A study about the required conditions for statistical analysis of the results is presented by using some models of evolutionary algorithms for real-coding optimization. This study is conducted in two ways: single-problem analysis and multiple-problem analysis. The results obtained state that a parametric statistical analysis could not be appropriate specially when we deal with multiple-problem results. In multiple-problem analysis, we propose the use of non-parametric statistical tests given that they are less restrictive than parametric ones and they can be used over small size samples of results. As a case study, we analyze the published results for the algorithms presented in the CEC’2005 Special Session on Real Parameter Optimization by using non-parametric test procedures.
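In practice, the recommended multiple-problem analysis boils down to pairing each algorithm's result per benchmark and applying a non-parametric test. A minimal illustration with SciPy follows; the error values are synthetic stand-ins, not the CEC'2005 results, and a full analysis would add post-hoc procedures after the Friedman test.

```python
import numpy as np
from scipy import stats

# Hypothetical mean errors of three algorithms over 15 benchmark functions
# (multiple-problem analysis: one value per algorithm per problem).
rng = np.random.default_rng(4)
base = rng.uniform(0.1, 1.0, 15)
alg_a = base * rng.uniform(0.9, 1.1, 15)
alg_b = base * rng.uniform(0.8, 1.0, 15)   # slightly better on average
alg_c = base * rng.uniform(1.0, 1.3, 15)   # slightly worse on average

# Pairwise comparison: Wilcoxon signed-rank test (non-parametric, paired).
w_stat, w_p = stats.wilcoxon(alg_a, alg_b)
print(f"Wilcoxon A vs B: statistic={w_stat:.2f}, p={w_p:.4f}")

# Multiple comparison: Friedman test over the three algorithms.
f_stat, f_p = stats.friedmanchisquare(alg_a, alg_b, alg_c)
print(f"Friedman: statistic={f_stat:.2f}, p={f_p:.4f}")
```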
Implementing Vehicle Routing Algorithms
Switching Stabilization for a Class of Slowly Switched Systems In this technical note, the problem of switching stabilization for slowly switched linear systems is investigated. In particular, the considered systems can be composed of all unstable subsystems. Based on the invariant subspace theory, the switching signal with mode-dependent average dwell time (MDADT) property is designed to exponentially stabilize the underlying system. Furthermore, sufficient condition of stabilization for switched systems with all stable subsystems under MDADT switching is also given. The correctness and effectiveness of the proposed approaches are illustrated by a numerical example.
Neural network adaptive tracking control for a class of uncertain switched nonlinear systems. •Study the method of tracking control of switched uncertain nonlinear systems under an arbitrary switching signal controller.•A multilayer neural network adaptive controller with multilayer weight norm adaptive estimation has been designed.•The adaptive law is extended from calculating the weights of the second layer of the neural network to the weights of both layers.•The proposed controller greatly improves the tracking error performance of the closed-loop system.
Energy harvesting algorithm considering max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with poor energy always have a bad effect on the data rate or max flow. These nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we assume an energy harvesting WSN environment to investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy harvesting WSN environments, sensor nodes can obtain energy replenishment by using MCs or by collecting energy from nature themselves. In our research, we use MCs to improve the energy of the sensor nodes by performing multiple rounds of unified scheduling, and finally achieve the purpose of increasing the max flow at the sinks. First, we model this problem as a Linear Program (LP) to search for the max flow in a round of charging scheduling and prove that the problem is NP-hard. In order to solve the problem, we propose a heuristic approach: deploying MCs in units of paths with the lowest-energy-node priority. To reduce the energy consumption of the MCs and increase the charging efficiency, we also take the optimization of the MCs' moving distance into consideration. Finally, we extend the method to multiple rounds of scheduling, called BottleNeck. Simulation results show that BottleNeck performs well at increasing the max flow.
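A minimal illustration of the max-flow framing described above is sketched below, assuming NetworkX. The toy graph, its capacities, and the way charging is modeled (raising a bottleneck node's outgoing capacity) are illustrative stand-ins, not the paper's WSN or LP model.

```python
# Toy max-flow example, assuming NetworkX. Capacities loosely stand in for how
# much traffic an energy-limited node can forward toward the sink.
import networkx as nx

G = nx.DiGraph()
edges = [("src", "s1", 100), ("src", "s2", 100),      # super-source over two sources
         ("s1", "a", 5), ("s2", "a", 4), ("s2", "b", 3),
         ("a", "sink", 2), ("a", "b", 3), ("b", "sink", 6)]
for u, v, cap in edges:
    G.add_edge(u, v, capacity=cap)

flow_value, _ = nx.maximum_flow(G, "src", "sink")
print("max flow before charging:", flow_value)

# Charging the bottleneck node "a" is modeled here as raising its outgoing capacity.
G["a"]["sink"]["capacity"] = 6
print("max flow after charging:", nx.maximum_flow(G, "src", "sink")[0])
```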
1.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
0
Joint service placement and request routing in mobile edge computing Mobile edge computing (MEC) is envisioned as a prospective technology that supports latency-critical and computation-intensive applications by using storage and computation resources in network edges. The advantages of this technology are constrained by limited edge cloud resources, and one of the prime challenges is how to allocate available edge cloud resources to satisfy user requests. However, previous works usually optimize service (data&code) placement and request routing simultaneously within the same timescale, ignoring the fact that frequent service replacement will incur expensive operating expenses. In this paper, we jointly optimize service placement and request routing in the MEC network for data analysis applications, under the constraints of computation and storage resources. In particular, the Cloud Radio Access Network (C-RAN) architecture is applied to pool available resources and realize load balancing among edge clouds. In addition, we adopt a two timescale framework to reduce high operating expenses caused by frequent cross-cloud service replication and replica deletion. Then, we develop a greedy-based approximation algorithm for the service placement subproblem and a linear programming (LP) relaxation-based heuristic algorithm for the request routing subproblem, respectively. Finally, the numerical results demonstrate that our proposed solution reaches 90% of the optimal performance in the service-homogeneous case and 76% in the service-heterogeneous case.
Delay-Aware Microservice Coordination in Mobile Edge Computing: A Reinforcement Learning Approach As an emerging service architecture, microservice enables decomposition of a monolithic web service into a set of independent lightweight services which can be executed independently. With mobile edge computing, microservices can be further deployed in edge clouds dynamically, launched quickly, and migrated across edge clouds easily, providing better services for users in proximity. However, user mobility can result in frequent switching of nearby edge clouds, which increases the service delay when users move away from their serving edge clouds. To address this issue, this article investigates microservice coordination among edge clouds to enable seamless and real-time responses to service requests from mobile users. The objective of this work is to devise the optimal microservice coordination scheme which can reduce the overall service delay with low costs. To this end, we first propose a dynamic programming-based offline microservice coordination algorithm that can achieve the globally optimal performance. However, the offline algorithm heavily relies on the availability of prior information, such as computation request arrivals, time-varying channel conditions and edge clouds' computation capabilities, which is hard to obtain. Therefore, we reformulate the microservice coordination problem using the Markov decision process framework and then propose a reinforcement learning-based online microservice coordination algorithm to learn the optimal strategy. Theoretical analysis proves that the offline algorithm can find the optimal solution while the online algorithm can achieve near-optimal performance. Furthermore, based on two real-world datasets, i.e., the Telecom's base station dataset and Taxi Track dataset from Shanghai, experiments are conducted. The experimental results demonstrate that the proposed online algorithm outperforms existing algorithms in terms of service delay and migration costs, and the achieved performance is close to the optimal performance obtained by the offline algorithm.
Energy-Aware Task Offloading and Resource Allocation for Time-Sensitive Services in Mobile Edge Computing Systems Mobile Edge Computing (MEC) is a promising architecture to reduce the energy consumption of mobile devices and provide satisfactory quality-of-service to time-sensitive services. How to jointly optimize task offloading and resource allocation to minimize the energy consumption subject to the latency requirement remains an open problem, which motivates this paper. When the latency constraint is tak...
Dynamic Deployment and Cost-Sensitive Provisioning for Elastic Mobile Cloud Services. As mobile customers gradually come to occupy the largest share of cloud service users, the effective and cost-sensitive provisioning of mobile cloud services quickly becomes a main theme in cloud computing. The key issues involved are much more than just enabling mobile users to access remote cloud resources through wireless networks. The resource limitations and intermittent disconnection problems of mobile environments intrinsically conflict with the continuous connection assumption of the cloud service usage patterns. We advocate that seamless service provisioning in the mobile cloud can only be achieved with full exploitation of all available resources around mobile users. An elastic framework is proposed to automatically and dynamically deploy cloud services on data centers, base stations, client units, and even peer devices. The best deployment location is dynamically determined based on a context-aware and cost-sensitive evaluation model. To facilitate easy adoption of the proposed framework, a service development model and associated semi-automatic tools are provided such that cloud service developers can easily convert a service for execution on different platforms without porting. Prototype implementation and evaluation on the Google Cloud and Android platforms demonstrate that our mechanism can successfully maintain seamless services with very low overhead.
Distributed and Dynamic Service Placement in Pervasive Edge Computing Networks The explosive growth of mobile devices promotes the prosperity of novel mobile applications, which can be realized by service offloading with the assistance of edge computing servers. However, due to limited computation and storage capabilities of a single server, long service latency hinders the continuous development of service offloading in mobile networks. By supporting multi-server cooperation, Pervasive Edge Computing (PEC) is promising to enable service migration in highly dynamic mobile networks. With the objective of maximizing the system utility, we formulate the optimization problem by jointly considering the constraints of server storage capability and service execution latency. To enable dynamic service placement, we first utilize Lyapunov optimization method to decompose the long-term optimization problem into a series of instant optimization problems. Then, a sample average approximation-based stochastic algorithm is proposed to approximate the future expected system utility. Afterwards, a distributed Markov approximation algorithm is utilized to determine the service placement configurations. Through theoretical analysis, the time complexity of our proposed algorithm is linear to the number of users, and the backlog queue of PEC servers is stable. Performance evaluations are conducted based on both synthetic and real trace-driven scenarios, with numerical results demonstrating the effectiveness of our proposed algorithm from various aspects.
A Cooperative Resource Allocation Model For Iot Applications In Mobile Edge Computing With the advancement of Internet of Things (IoT) technology, as well as the industrial IoT, various applications and services are benefiting from this emerging technology, such as smart healthcare systems, virtual reality applications, and connected and autonomous vehicles, to name a few. However, IoT devices are known for having limited computation capacity, which is crucial to the device's availability time. Traditional approaches offload the applications to the cloud to ease the burden on the end user's devices; however, greater latency and network traffic issues still persist. Mobile Edge Computing (MEC) technology has emerged to address these issues and enhance the survivability of cloud infrastructure. While many attempts have been made to manage an efficient application offloading process, most of them focus on the allocation of either computation or communication resources without considering a cooperative solution, and typically consider only a single-user scenario. Therefore, we study multi-user IoT application offloading for a MEC system, which cooperatively allocates both computation and communication resources. The proposed system focuses on minimizing the weighted overhead of local IoT devices and the offloading overhead measured by delay and energy consumption. The mathematical formulation is a typical mixed integer nonlinear program (MINP), which is an NP-hard problem. We obtain the solution to the objective function by splitting the objective problem into three sub-problems. An extensive set of evaluations has been performed to assess the proposed model. The collected results indicate that offloading decisions, energy consumption, latency, and the impact of the number of IoT devices show superior improvement over traditional models.
Probabilistic encryption A new probabilistic model of data encryption is introduced. For this model, under suitable complexity assumptions, it is proved that extracting any information about the cleartext from the ciphertext is hard on the average for an adversary with polynomially bounded computational resources. The proof holds for any message space with any probability distribution. The first implementation of this model is presented. The security of this implementation is proved under the intractability assumption of deciding Quadratic Residuosity modulo composite numbers whose factorization is unknown.
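A toy sketch of this style of probabilistic encryption, in the spirit of the quadratic-residuosity construction the abstract describes, is given below in plain Python. The primes are far too small to be secure and only make the bit-by-bit mechanics visible; this is not a faithful reproduction of the paper's implementation.

```python
# Toy Goldwasser-Micali-style encryption of individual bits. Security would come
# from deciding quadratic residuosity modulo N without knowing its factorization;
# the tiny primes here are for illustration only.
import math
import random

p, q = 499, 547                      # toy primes; real keys use large primes
N = p * q

def is_qr(a, prime):
    # Euler's criterion: a is a quadratic residue modulo an odd prime iff
    # a^((prime-1)/2) == 1 (mod prime).
    return pow(a, (prime - 1) // 2, prime) == 1

# Public key: (N, y) where y is a quadratic non-residue modulo both p and q.
y = next(a for a in range(2, N)
         if math.gcd(a, N) == 1 and not is_qr(a, p) and not is_qr(a, q))

def encrypt_bit(b):
    r = random.randrange(2, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(2, N)
    # Ciphertext y^b * r^2 mod N: a residue iff b == 0, and re-randomized by r,
    # so the same bit encrypts to a different ciphertext on each call.
    return (pow(y, b, N) * pow(r, 2, N)) % N

def decrypt_bit(c):
    # Knowing the factor p, residuosity modulo p reveals the bit.
    return 0 if is_qr(c, p) else 1

bits = [1, 0, 1, 1, 0]
print([decrypt_bit(encrypt_bit(b)) for b in bits])   # recovers [1, 0, 1, 1, 0]
```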
A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm Swarm intelligence is a research branch that models the population of interacting agents or swarms that are able to self-organize. An ant colony, a flock of birds or an immune system is a typical example of a swarm system. Bees' swarming around their hive is another example of swarm intelligence. Artificial Bee Colony (ABC) Algorithm is an optimization algorithm based on the intelligent behaviour of honey bee swarm. In this work, ABC algorithm is used for optimizing multivariable functions and the results produced by ABC, Genetic Algorithm (GA), Particle Swarm Algorithm (PSO) and Particle Swarm Inspired Evolutionary Algorithm (PS-EA) have been compared. The results showed that ABC outperforms the other algorithms.
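A minimal sketch of the ABC loop described above (employed, onlooker, and scout phases with greedy selection) is given below, assuming NumPy. The sphere test function and all parameter settings are illustrative choices and do not reproduce the paper's comparison setup.

```python
# Minimal Artificial Bee Colony (ABC) sketch for minimizing a multivariable
# function, assuming NumPy. Parameters and test function are illustrative.
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def abc_minimize(f, dim=5, n_sources=20, limit=30, max_cycles=200,
                 lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    foods = rng.uniform(lb, ub, size=(n_sources, dim))   # food source positions
    costs = np.array([f(x) for x in foods])
    trials = np.zeros(n_sources, dtype=int)              # abandonment counters

    def try_neighbor(i):
        # v_ij = x_ij + phi * (x_ij - x_kj) on one random dimension, with a
        # random partner k != i; keep the neighbor only if it is better.
        k = rng.choice([j for j in range(n_sources) if j != i])
        j = rng.integers(dim)
        v = foods[i].copy()
        v[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        v = np.clip(v, lb, ub)
        c = f(v)
        if c < costs[i]:                                  # greedy selection
            foods[i], costs[i], trials[i] = v, c, 0
        else:
            trials[i] += 1

    for _ in range(max_cycles):
        for i in range(n_sources):                        # employed bee phase
            try_neighbor(i)
        fitness = 1.0 / (1.0 + costs)                     # onlooker probabilities
        probs = fitness / fitness.sum()
        for i in rng.choice(n_sources, size=n_sources, p=probs):
            try_neighbor(i)                               # onlooker bee phase
        worst = int(np.argmax(trials))                    # scout bee phase
        if trials[worst] > limit:
            foods[worst] = rng.uniform(lb, ub, size=dim)
            costs[worst] = f(foods[worst])
            trials[worst] = 0
    best = int(np.argmin(costs))
    return foods[best], costs[best]

x_best, f_best = abc_minimize(sphere)
print(f"best cost found: {f_best:.6f}")
```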
Markov games as a framework for multi-agent reinforcement learning In the Markov decision process (MDP) formalization of reinforcement learning, a single adaptive agent interacts with an environment defined by a probabilistic transition function. In this solipsistic view, secondary agents can only be part of the environment and are therefore fixed in their behavior. The framework of Markov games allows us to widen this view to include multiple adaptive agents with interacting or competing goals. This paper considers a step in this direction in which exactly two agents with diametrically opposed goals share an environment. It describes a Q-learning-like algorithm for finding optimal policies and demonstrates its application to a simple two-player game in which the optimal policy is probabilistic.
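The sketch below illustrates the Q-learning-like idea for a two-player zero-sum Markov game: the backup uses the minimax value of the next state's stage game, obtained from a small linear program over mixed strategies. It assumes NumPy and SciPy, and the tiny random game is a stand-in rather than the paper's example.

```python
# Minimax-Q-style sketch for a two-player zero-sum Markov game, assuming NumPy
# and SciPy. The random reward/transition tables are illustrative stand-ins.
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(Q_s):
    """Value and mixed policy of the one-shot zero-sum game with payoffs
    Q_s[a, o] for the maximizing agent (rows) against the opponent (columns)."""
    n_a, n_o = Q_s.shape
    # Variables [pi_1, ..., pi_A, v]; maximize v  <=>  minimize -v.
    c = np.zeros(n_a + 1)
    c[-1] = -1.0
    # For every opponent action o:  v - sum_a pi_a * Q_s[a, o] <= 0.
    A_ub = np.hstack([-Q_s.T, np.ones((n_o, 1))])
    b_ub = np.zeros(n_o)
    A_eq = np.hstack([np.ones((1, n_a)), np.zeros((1, 1))])   # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * n_a + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:n_a]

# Q-learning-like updates on a random 3-state game with 2x2 actions (illustrative).
rng = np.random.default_rng(1)
n_s, n_a, n_o, gamma, alpha = 3, 2, 2, 0.9, 0.1
Q = np.zeros((n_s, n_a, n_o))
R = rng.normal(size=(n_s, n_a, n_o))            # made-up reward table
for _ in range(1000):
    s, a, o = rng.integers(n_s), rng.integers(n_a), rng.integers(n_o)
    s_next = rng.integers(n_s)                   # made-up transition
    v_next, _ = matrix_game_value(Q[s_next])
    # Backup uses the minimax value of the next state's stage game.
    Q[s, a, o] += alpha * (R[s, a, o] + gamma * v_next - Q[s, a, o])
print("learned state values:", [round(matrix_game_value(Q[s])[0], 3) for s in range(n_s)])
```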
Scalable and efficient provable data possession. Storage outsourcing is a rising trend which prompts a number of interesting security issues, many of which have been extensively investigated in the past. However, Provable Data Possession (PDP) is a topic that has only recently appeared in the research literature. The main issue is how to frequently, efficiently and securely verify that a storage server is faithfully storing its client's (potentially very large) outsourced data. The storage server is assumed to be untrusted in terms of both security and reliability. (In other words, it might maliciously or accidentally erase hosted data; it might also relegate it to slow or off-line storage.) The problem is exacerbated by the client being a small computing device with limited resources. Prior work has addressed this problem using either public key cryptography or requiring the client to outsource its data in encrypted form. In this paper, we construct a highly efficient and provably secure PDP technique based entirely on symmetric key cryptography, while not requiring any bulk encryption. Also, in contrast with its predecessors, our PDP technique allows outsourcing of dynamic data, i.e, it efficiently supports operations, such as block modification, deletion and append.
Cognitive Cars: A New Frontier for ADAS Research This paper provides a survey of recent works on cognitive cars with a focus on driver-oriented intelligent vehicle motion control. The main objective here is to clarify the goals and guidelines for future development in the area of advanced driver-assistance systems (ADASs). Two major research directions are investigated and discussed in detail: 1) stimuli–decisions–actions, which focuses on the driver side, and 2) perception enhancement–action-suggestion–function-delegation, which emphasizes the ADAS side. This paper addresses the important achievements and major difficulties of each direction and discusses how to combine the two directions into a single integrated system to obtain safety and comfort while driving. Other related topics, including driver training and infrastructure design, are also studied.
Online Prediction of Driver Distraction Based on Brain Activity Patterns This paper presents a new computational framework for early detection of driver distractions (map viewing) using brain activity measured by electroencephalographic (EEG) signals. Compared with most studies in the literature, which are mainly focused on the classification of distracted and nondistracted periods, this study proposes a new framework to prospectively predict the start and end of a distraction period, defined by map viewing. The proposed prediction algorithm was tested on a data set of continuous EEG signals recorded from 24 subjects. During the EEG recordings, the subjects were asked to drive from an initial position to a destination using a city map in a simulated driving environment. The overall accuracy values for the prediction of the start and the end of map viewing were 81% and 70%, respectively. The experimental results demonstrated that the proposed algorithm can predict the start and end of map viewing with relatively high accuracy and can be generalized to individual subjects. The outcome of this study has a high potential to improve the design of future intelligent navigation systems. Prediction of the start of map viewing can be used to provide route information based on a driver's needs and consequently avoid map-viewing activities. Prediction of the end of map viewing can be used to provide warnings for potential long map-viewing durations. Further development of the proposed framework and its applications in driver-distraction predictions are also discussed.
Adaptive fuzzy tracking control for switched uncertain strict-feedback nonlinear systems. •Adaptive tracking control for switched strict-feedback nonlinear systems is proposed.•The generalized fuzzy hyperbolic model is used to approximate nonlinear functions.•The designed controller has fewer design parameters compared with existing methods.
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education is highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets, to compensate for robot’s limitations, however, this study applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. Robot’s behavioral style (social or neutral) hardly differed overall, however, seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
1.2
0.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
Camera communication deblurring: A semiblind spatial fractionally-spaced adaptive equalizer with flexible filter support design In Optical Camera Communication systems an important issue is the spatial intersymbol interference (blurred images) that can arise when Multi-Input Multi-Output techniques are applied. However, the transmitted symbols are described with very high resolution, due to the high number of pixels composing the camera. To take advantage of this characteristic, in this paper we use a semiblind spatial fractionally-spaced adaptive equalizer to counteract the blur introduced by the optical channel. We formulate the adaptive algorithm in a way that permits designing the support of the Finite Impulse Response filter with flexibility. The choice of the support is related to the spatial shape of the blur encountered, following a heuristic approach. The equalizer's performance in terms of Bit Error Rate is presented in the numerical results, showing improvement. We also show the behaviour of the equalizer when different filter supports are used.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
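A simplified single-reference version of the metric is sketched below: clipped (modified) n-gram precisions combined with a brevity penalty. It illustrates the idea above and is not the full multi-reference, corpus-level BLEU definition.

```python
# Simplified single-reference BLEU sketch: clipped n-gram precisions plus a
# brevity penalty. Smoothing and multi-reference handling are omitted.
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts, ref_counts = Counter(ngrams(cand, n)), Counter(ngrams(ref, n))
        # Each candidate n-gram is credited at most as often as it appears in the reference.
        clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        log_precisions.append(math.log(max(clipped, 1e-9) / total))   # crude floor
    # The brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

print(round(bleu("the quick brown fox jumps over the lazy dog",
                 "the quick brown fox jumped over the lazy dog"), 3))
```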
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
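A minimal bidirectional recurrent model of the kind described above is sketched below, assuming PyTorch (an LSTM variant rather than the paper's plain RNN). Sizes are illustrative; the point is that each time step's output sees both past and future context because the two directions are trained jointly.

```python
# Minimal bidirectional recurrent tagger, assuming PyTorch; all sizes are illustrative.
import torch
import torch.nn as nn

class BiRNNTagger(nn.Module):
    def __init__(self, n_features=13, hidden=32, n_classes=5):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        # Forward and backward hidden states are concatenated, hence 2 * hidden inputs.
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                   # x: (batch, time, features)
        h, _ = self.rnn(x)                  # h: (batch, time, 2 * hidden)
        return self.out(h)                  # per-frame class scores

model = BiRNNTagger()
frames = torch.randn(8, 100, 13)            # e.g. 8 utterances of 100 feature frames
print(model(frames).shape)                   # torch.Size([8, 100, 5])
```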
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended for Bob, and non-repudiation of receipt evidences destined for Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with a trusted third party (TTP). For the latter ones we discuss the evolution of the TTP's involvement and, among others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probabilistically. Because no crossover rate or mutation rate has to be selected, the proposed improved GA can be applied to a problem more easily than conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional one and other methods.
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Scalable and Privacy-Preserving Data Sharing Based on Blockchain. With the development of network technology and cloud computing, data sharing is becoming increasingly popular, and many scholars have conducted in-depth research to promote its development. As the scale of data sharing expands, its privacy protection has become a hot issue in research. Moreover, in data sharing, the data is usually maintained by multiple parties, which brings new challenges to protecting the privacy of these multi-party data. In this paper, we propose a trusted data sharing scheme using blockchain. We use blockchain to prevent the shared data from being tampered with, and use the Paillier cryptosystem to realize the confidentiality of the shared data. In the proposed scheme, the shared data can be traded, and the transaction information is protected by using the (p, t)-threshold Paillier cryptosystem. We conduct experiments in cloud storage scenarios and the experimental results demonstrate the efficiency and effectiveness of the proposed scheme.
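The confidentiality and the additive homomorphism such schemes rely on come from the Paillier cryptosystem; a toy sketch is given below. The primes are far too small for real use, and the (p, t)-threshold variant used in the paper is not reproduced here.

```python
# Toy Paillier sketch: encryption, decryption, and the additive homomorphism.
# Toy primes only; not the paper's threshold construction.
import math
import random

p, q = 1009, 1013                                    # toy primes
n = p * q
n2 = n * n
g = n + 1                                            # a standard simple generator choice
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)    # lcm(p-1, q-1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)                  # modular inverse used in decryption

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 123, 456
ca, cb = encrypt(a), encrypt(b)
# Multiplying ciphertexts adds the underlying plaintexts, which is what allows
# shared data to be aggregated or traded without exposing individual values.
print(decrypt(ca), decrypt(cb), decrypt((ca * cb) % n2))   # 123 456 579
```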
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
CASNet: A Cross-Attention Siamese Network for Video Salient Object Detection. Recent works on video salient object detection have demonstrated that directly transferring the generalization ability of image-based models to video data without modeling spatial-temporal information remains nontrivial and challenging. Considering both intraframe accuracy and interframe consistency of saliency detection, this article presents a novel cross-attention based encoder–decoder model un...
Recall-Oriented Evaluation for Information Retrieval Systems. In a recall context, the user is interested in retrieving all relevant documents rather than retrieving a few that are at the top of the results list. In this article we propose ROM (Recall Oriented Measure) which takes into account the main elements that should be considered in evaluating information retrieval systems while ordering them in a way explicitly adapted to a recall context.
Tight Hardness Results for LCS and Other Sequence Similarity Measures Two important similarity measures between sequences are the longest common subsequence (LCS) and the dynamic time warping distance (DTWD). The computations of these measures for two given sequences are central tasks in a variety of applications. Simple dynamic programming algorithms solve these tasks in O(n^2) time, and despite an extensive amount of research, no algorithms with significantly better worst case upper bounds are known. In this paper, we show that for any constant ε > 0, an O(n^(2-ε)) time algorithm for computing the LCS or the DTWD of two sequences of length n over a constant size alphabet, refutes the popular Strong Exponential Time Hypothesis (SETH).
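For reference, the standard quadratic dynamic program the abstract mentions is sketched below for LCS length; the hardness result above says that an O(n^(2-ε)) algorithm for this task would refute SETH.

```python
# Classic O(n^2) dynamic program for the length of the longest common subsequence.
def lcs_length(a: str, b: str) -> int:
    # dp[i][j] = LCS length of the prefixes a[:i] and b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

print(lcs_length("ABCBDAB", "BDCABA"))   # 4, e.g. 'BCAB'
```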
A Hierarchical Latent Structure for Variational Conversation Modeling. Variational autoencoders (VAE) combined with hierarchical RNNs have emerged as a powerful framework for conversation modeling. However, they suffer from the notorious degeneration problem, where the decoders learn to ignore latent variables and reduce to vanilla RNNs. We empirically show that this degeneracy occurs mostly due to two reasons. First, the expressive power of hierarchical RNN decoders is often high enough to model the data using only its decoding distributions without relying on the latent variables. Second, the conditional VAE structure whose generation process is conditioned on a context, makes the range of training targets very sparse; that is, the RNN decoders can easily overfit to the training data ignoring the latent variables. To solve the degeneration problem, we propose a novel model named Variational Hierarchical Conversation RNNs (VHCR), involving two key ideas of (1) using a hierarchical structure of latent variables, and (2) exploiting an utterance drop regularization. With evaluations on two datasets of Cornell Movie Dialog and Ubuntu Dialog Corpus, we show that our VHCR successfully utilizes latent variables and outperforms state-of-the-art models for conversation generation. Moreover, it can perform several new utterance control tasks, thanks to its hierarchical latent structure.
Semantic Parsing With Syntax- And Table-Aware Sql Generation We present a generative model to map natural language questions into SQL queries. Existing neural network based approaches typically generate a SQL query word-by-word, however, a large portion of the generated results is incorrect or not executable due to the mismatch between question words and table contents. Our approach addresses this problem by considering the structure of table and the syntax of SQL language. The quality of the generated SQL query is significantly improved through (1) learning to replicate content from column names, cells or SQL keywords; and (2) improving the generation of WHERE clause by leveraging the column-cell relation. Experiments are conducted on WikiSQL, a recently released dataset with the largest question-SQL pairs. Our approach significantly improves the state-of-the-art execution accuracy from 69.0% to 74.4%.
Sequence-Based Structured Prediction For Semantic Parsing We propose an approach for semantic parsing that uses a recurrent neural network to map a natural language question into a logical form representation of a KB query. Building on recent work by Wang et al. (2015), the interpretable logical forms, which are structured objects obeying certain constraints, are enumerated by an underlying grammar and are paired with their canonical realizations. In order to use sequence prediction, we need to sequentialize these logical forms. We compare three sequentializations: a direct linearization of the logical form, a linearization of the associated canonical realization, and a sequence consisting of derivation steps relative to the underlying grammar. We also show how grammatical constraints on the derivation sequence can easily be integrated inside the RNN-based sequential predictor. Our experiments show important improvements over previous results for the same dataset, and also demonstrate the advantage of incorporating the grammatical constraints.
Footprints: history-rich tools for information foraging Inspired by Hill and Hollan's original work [7], we have been developing a theory of interaction history and building tools to apply this theory to navigation in a complex information space. We have built a series of tools - map, paths, annotations and signposts - based on a physical-world navigation metaphor. These tools have been in use for over a year. Our user study involved a controlled browse task and showed that users were able to get the same amount of work done with significantly less effort.
A survey of socially interactive robots This paper reviews “socially interactive robots”: robots for which social human–robot interaction is important. We begin by discussing the context for socially interactive robots, emphasizing the relationship to other research fields and the different forms of “social robots”. We then present a taxonomy of design methods and system components used to build socially interactive robots. Finally, we describe the impact of these robots on humans and discuss open issues. An expanded version of this paper, which contains a survey and taxonomy of current applications, is available as a technical report [T. Fong, I. Nourbakhsh, K. Dautenhahn, A survey of socially interactive robots: concepts, design and applications, Technical Report No. CMU-RI-TR-02-29, Robotics Institute, Carnegie Mellon University, 2002].
Energy Efficiency Resource Allocation For D2d Communication Network Based On Relay Selection In order to solve the problems of spectrum resource shortage and energy consumption, we put forward a new model that combines D2D communication and energy harvesting technology: an energy harvesting-aided D2D communication network under cognitive radio (EHA-CRD), where the D2D users harvest energy from the base station and the D2D source communicates with the D2D destination via D2D relays. Our goal is to maximize the energy efficiency (EE) of the network by joint time allocation and relay selection, while taking into account the constraints on the signal-to-noise ratio of the D2D links and the rates of the cellular users. During this process, the energy collection time and communication time are randomly allocated. The EE maximization problem can be divided into two sub-problems: (1) the relay selection problem; (2) the time optimization problem. For the first sub-problem, we propose a weighted sum maximization algorithm to select the best relay. For the second sub-problem, the EE maximization problem is non-convex in time. Thus, by using fractional programming theory, we transform it into a standard convex optimization problem and propose an iterative optimization algorithm to solve it and obtain the optimal solution. The simulation results show that the proposed relay selection and time optimization algorithms are significantly improved compared with existing algorithms.
The contourlet transform: an efficient directional multiresolution image representation. The limitations of commonly used separable extensions of one-dimensional transforms, such as the Fourier and wavelet transforms, in capturing the geometry of image edges are well known. In this paper, we pursue a "true" two-dimensional transform that can capture the intrinsic geometrical structure that is key in visual information. The main challenge in exploring geometry in images comes from the discrete nature of the data. Thus, unlike other approaches, such as curvelets, that first develop a transform in the continuous domain and then discretize for sampled data, our approach starts with a discrete-domain construction and then studies its convergence to an expansion in the continuous domain. Specifically, we construct a discrete-domain multiresolution and multidirection expansion using nonseparable filter banks, in much the same way that wavelets were derived from filter banks. This construction results in a flexible multiresolution, local, and directional image expansion using contour segments, and, thus, it is named the contourlet transform. The discrete contourlet transform has a fast iterated filter bank algorithm that requires an order N operations for N-pixel images. Furthermore, we establish a precise link between the developed filter bank and the associated continuous-domain contourlet expansion via a directional multiresolution analysis framework. We show that with parabolic scaling and sufficient directional vanishing moments, contourlets achieve the optimal approximation rate for piecewise smooth functions with discontinuities along twice continuously differentiable curves. Finally, we show some numerical experiments demonstrating the potential of contourlets in several image processing applications. Index Terms-Contourlets, contours, filter banks, geometric image processing, multidirection, multiresolution, sparse representation, wavelets.
Comment on "On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes" Comparison of generative and discriminative classifiers is an ever-lasting topic. As an important contribution to this topic, based on their theoretical and empirical comparisons between the naïve Bayes classifier and linear logistic regression, Ng and Jordan (NIPS 841---848, 2001) claimed that there exist two distinct regimes of performance between the generative and discriminative classifiers with regard to the training-set size. In this paper, our empirical and simulation studies, as a complement of their work, however, suggest that the existence of the two distinct regimes may not be so reliable. In addition, for real world datasets, so far there is no theoretically correct, general criterion for choosing between the discriminative and the generative approaches to classification of an observation x into a class y; the choice depends on the relative confidence we have in the correctness of the specification of either p(y|x) or p(x, y) for the data. This can be to some extent a demonstration of why Efron (J Am Stat Assoc 70(352):892---898, 1975) and O'Neill (J Am Stat Assoc 75(369):154---160, 1980) prefer normal-based linear discriminant analysis (LDA) when no model mis-specification occurs but other empirical studies may prefer linear logistic regression instead. Furthermore, we suggest that pairing of either LDA assuming a common diagonal covariance matrix (LDA-驴) or the naïve Bayes classifier and linear logistic regression may not be perfect, and hence it may not be reliable for any claim that was derived from the comparison between LDA-驴 or the naïve Bayes classifier and linear logistic regression to be generalised to all generative and discriminative classifiers.
An Automatic Screening Approach for Obstructive Sleep Apnea Diagnosis Based on Single-Lead Electrocardiogram Traditional approaches for obstructive sleep apnea (OSA) diagnosis are apt to using multiple channels of physiological signals to detect apnea events by dividing the signals into equal-length segments, which may lead to incorrect apnea event detection and weaken the performance of OSA diagnosis. This paper proposes an automatic-segmentation-based screening approach with the single channel of Electrocardiogram (ECG) signal for OSA subject diagnosis, and the main work of the proposed approach lies in three aspects: (i) an automatic signal segmentation algorithm is adopted for signal segmentation instead of the equal-length segmentation rule; (ii) a local median filter is improved for reduction of the unexpected RR intervals before signal segmentation; (iii) the designed OSA severity index and additional admission information of OSA suspects are plugged into support vector machine (SVM) for OSA subject diagnosis. A real clinical example from PhysioNet database is provided to validate the proposed approach and an average accuracy of 97.41% for subject diagnosis is obtained which demonstrates the effectiveness for OSA diagnosis.
Neural network adaptive tracking control for a class of uncertain switched nonlinear systems. •Studies the tracking control of switched uncertain nonlinear systems under arbitrary switching signals.•A multilayer neural network adaptive controller with multilayer weight norm adaptive estimation has been designed.•The adaptive law is extended from computing only the second-layer weights of the neural network to the weights of both layers.•The proposed controller greatly improves the tracking error performance of the closed-loop system.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.2
0.2
0.2
0.2
0.1
0.05
0
0
0
0
0
0
0
0
BOOST: Medical Image Steganography Using Nuclear Spin Generator In this study, we present a medical image stego hiding scheme using a nuclear spin generator system. Detailed theoretical and experimental analysis is provided on the proposed algorithm using histogram analysis, peak signal-to-noise ratio, key space calculation, and statistical package analysis. The provided results show good performance of the brand new medical image steganographic scheme.
Geometric attacks on image watermarking systems Synchronization errors can lead to significant performance loss in image watermarking methods, as the geometric attacks in the Stirmark benchmark software show. The authors describe the most common types of geometric attacks and survey proposed solutions.
Genetic Optimization Of Radial Basis Probabilistic Neural Networks This paper discusses using genetic algorithms (GA) to optimize the structure of radial basis probabilistic neural networks (RBPNN), including how to select the hidden centers of the first hidden layer and how to determine the controlling parameter of the Gaussian kernel functions. In the process of constructing the genetic algorithm, a novel encoding method is proposed for optimizing the RBPNN structure. This encoding method not only makes the selected hidden centers sufficiently reflect the key distribution characteristics of the training sample space while keeping the number of hidden centers as small as possible, but also simultaneously determines the optimum controlling parameters of the Gaussian kernel functions matching the selected hidden centers. Additionally, we propose a new fitness function so as to make the designed RBPNN as simple as possible in its network structure without losing network performance. Finally, we take two benchmark problems, discriminating the two-spiral problem and classifying the iris data, as examples to test and evaluate the designed GA. The experimental results illustrate that our designed GA can significantly reduce the required number of hidden centers, compared with the recursive orthogonal least squares algorithm (ROLSA) and the modified K-means algorithm (MKA). In particular, statistical experiments show that the RBPNN optimized by our designed GA still has better generalization performance than the ones obtained by the ROLSA and the MKA, in spite of the network scale having been greatly reduced. Additionally, our experimental results also demonstrate that our designed GA is also suitable for optimizing radial basis function neural networks (RBFNN).
Current status and key issues in image steganography: A survey. Steganography and steganalysis are the prominent research fields in information hiding paradigm. Steganography is the science of invisible communication while steganalysis is the detection of steganography. Steganography means “covered writing” that hides the existence of the message itself. Digital steganography provides potential for private and secure communication that has become the necessity of most of the applications in today’s world. Various multimedia carriers such as audio, text, video, image can act as cover media to carry secret information. In this paper, we have focused only on image steganography. This article provides a review of fundamental concepts, evaluation measures and security aspects of steganography system, various spatial and transform domain embedding schemes. In addition, image quality metrics that can be used for evaluation of stego images and cover selection measures that provide additional security to embedding scheme are also highlighted. Current research trends and directions to improve on existing methods are suggested.
Hybrid local and global descriptor enhanced with colour information. Feature extraction is one of the most important steps in computer vision tasks such as object recognition, image retrieval and image classification. It describes an image by a set of descriptors where the best one gives a high quality description and a low computation. In this study, the authors propose a novel descriptor called histogram of local and global features using speeded up robust featur...
Secure visual cryptography for medical image using modified cuckoo search. Optimal secure visual cryptography for brain MRI medical images is proposed in this paper. Initially, the brain MRI images are selected and then the discrete wavelet transform is applied to the brain MRI image for partitioning the image into blocks. Then a Gaussian-based cuckoo search algorithm is utilized to select the optimal position for every block. Next the proposed technique creates the dual shares from the secret image. Then the secret shares are embedded in the corresponding positions of the blocks. After embedding, the extraction operation is carried out. Here visual cryptographic design is used for the purpose of image authentication and verification. The extracted secret image has dual shares, based on which the receiver views the input image. The authentication and verification of the medical image are assisted with the help of a target database. All the secret images are registered previously in the target database. The performance of the proposed method is estimated by Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE) and normalized correlation. The implementation is done on the MATLAB platform.
Digital watermarking techniques for image security: a review Multimedia technology usage is increasing day by day, and providing authorized access to data while protecting secret information from unauthorized use is highly difficult and involves a complex process. By using the watermarking technique, only authorized users can use the data. Digital watermarking is a widely used technology for the protection of digital data. Digital watermarking deals with the embedding of secret data into actual information. Digital watermarking techniques are classified into three major categories, based on domain, type of document (text, image, music or video) and human perception. Performance of the watermarked images is analysed using peak signal to noise ratio, mean square error and bit error rate. Watermarking of images has been researched extensively for its feasibility in all media applications, such as copyright protection, medical reports (MRI scan and X-ray), annotation and privacy control. This paper reviews watermarking techniques and their merits and demerits.
A New Efficient Medical Image Cipher Based On Hybrid Chaotic Map And Dna Code In this paper, we propose a novel medical image encryption algorithm based on a hybrid model of deoxyribonucleic acid (DNA) masking, a Secure Hash Algorithm SHA-2 and a new hybrid chaotic map. Our study uses DNA sequences and operations and the chaotic hybrid map to strengthen the cryptosystem. The significant advantages of this approach consist in improving the information entropy which is the most important feature of randomness, resisting against various typical attacks and getting good experimental results. The theoretical analysis and experimental results show that the algorithm improves the encoding efficiency, enhances the security of the ciphertext, has a large key space and a high key sensitivity, and is able to resist against the statistical and exhaustive attacks.
On Multi-Access Edge Computing: A Survey of the Emerging 5G Network Edge Cloud Architecture and Orchestration. Multi-access edge computing (MEC) is an emerging ecosystem, which aims at converging telecommunication and IT services, providing a cloud computing platform at the edge of the radio access network. MEC offers storage and computational resources at the edge, reducing latency for mobile end users and utilizing more efficiently the mobile backhaul and core networks. This paper introduces a survey on ...
An effective implementation of the Lin–Kernighan traveling salesman heuristic This paper describes an implementation of the Lin–Kernighan heuristic, one of the most successful methods for generating optimal or near-optimal solutions for the symmetric traveling salesman problem (TSP). Computational tests show that the implementation is highly effective. It has found optimal solutions for all solved problem instances we have been able to obtain, including a 13,509-city problem (the largest non-trivial problem instance solved to optimality today).
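Lin–Kernighan is a sophisticated variable-depth edge-exchange heuristic; as a rough illustration of the same tour-improvement family (and only that), the sketch below runs a plain 2-opt pass on random Euclidean points. It is not the paper's implementation.

```python
# Plain 2-opt local search for the symmetric TSP on random Euclidean points.
# A much simpler relative of the Lin-Kernighan moves, shown only to illustrate
# tour improvement by edge exchange.
import math
import random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(40)]

def dist(i, j):
    return math.dist(cities[i], cities[j])

def tour_length(tour):
    return sum(dist(tour[k], tour[(k + 1) % len(tour)]) for k in range(len(tour)))

def two_opt(tour):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[(j + 1) % len(tour)]
                # Replace edges (a,b) and (c,d) by (a,c) and (b,d) if that is shorter.
                if dist(a, c) + dist(b, d) < dist(a, b) + dist(c, d) - 1e-12:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour

tour = list(range(len(cities)))
print("random tour length:", round(tour_length(tour), 3))
print("after 2-opt:       ", round(tour_length(two_opt(tour)), 3))
```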
Exoskeletons for human power augmentation The first load-bearing and energetically autonomous exoskeleton, called the Berkeley Lower Extremity Exoskeleton (BLEEX) walks at the average speed of two miles per hour while carrying 75 pounds of load. The project, funded in 2000 by the Defense Advanced Research Project Agency (DARPA) tackled four fundamental technologies: the exoskeleton architectural design, a control algorithm, a body LAN to host the control algorithm, and an on-board power unit to power the actuators, sensors and the computers. This article gives an overview of the BLEEX project.
Assist-As-Needed Training Paradigms For Robotic Rehabilitation Of Spinal Cord Injuries This paper introduces a new "assist-as-needed" (AAN) training paradigm for rehabilitation of spinal cord injuries via robotic training devices. In the pilot study reported in this paper, nine female adult Swiss-Webster mice were divided into three groups, each experiencing a different robotic training control strategy: a fixed training trajectory (Fixed Group, A), an AAN training method without interlimb coordination (Band Group, B), and an AAN training method with bilateral hindlimb coordination (Window Group, C). Fourteen days after complete transection at the mid-thoracic level, the mice were robotically trained to step in the presence of an acutely administered serotonin agonist, quipazine, for a period of six weeks. The mice that received AAN training (Groups B and C) show higher levels of recovery than Group A mice, as measured by the number, consistency, and periodicity of steps realized during testing sessions. Group C displays a higher incidence of alternating stepping than Group B. These results indicate that this training approach may be more effective than fixed trajectory paradigms in promoting robust post-injury stepping behavior. Furthermore, the constraint of interlimb coordination appears to be an important contribution to successful training.
An ID-Based Linearly Homomorphic Signature Scheme and Its Application in Blockchain. Identity-based cryptosystems mean that public keys can be directly derived from user identifiers, such as telephone numbers, email addresses, and social insurance number, and so on. So they can simplify key management procedures of certificate-based public key infrastructures and can be used to realize authentication in blockchain. Linearly homomorphic signature schemes allow to perform linear computations on authenticated data. And the correctness of the computation can be publicly verified. Although a series of homomorphic signature schemes have been designed recently, there are few homomorphic signature schemes designed in identity-based cryptography. In this paper, we construct a new ID-based linear homomorphic signature scheme, which avoids the shortcomings of the use of public-key certificates. The scheme is proved secure against existential forgery on adaptively chosen message and ID attack under the random oracle model. The ID-based linearly homomorphic signature schemes can be applied in e-business and cloud computing. Finally, we show how to apply it to realize authentication in blockchain.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0.05
0
0
0
0
0
0
Modified cuckoo search algorithm to solve economic power dispatch optimization problems A modified cuckoo search (CS) algorithm is proposed to solve economic dispatch (ED) problems that have nonconvex, non-continuous or non-linear solution spaces considering valve-point effects, prohibited operating zones, transmission losses and ramp rate limits. Compared with the traditional cuckoo search algorithm, we propose a self-adaptive step size and some neighbor-study strategies to enhance...
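For orientation, a minimal baseline cuckoo search with Lévy flights is sketched below on a smooth test function, assuming NumPy. The proposed self-adaptive step size, the neighbor-study strategies, and the ED-specific constraints (valve points, prohibited zones, losses, ramp rates) are not reproduced.

```python
# Minimal baseline cuckoo search with Levy flights, assuming NumPy; the test
# function and parameters are illustrative, not the ED formulation above.
import math
import numpy as np

def rastrigin(x):
    return 10 * x.size + float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def levy_step(size, rng, beta=1.5):
    # Mantegna's algorithm for Levy-distributed step lengths.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim=5, n_nests=15, pa=0.25, alpha=0.01,
                  max_iter=300, lb=-5.12, ub=5.12, seed=0):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(lb, ub, size=(n_nests, dim))
    costs = np.array([f(x) for x in nests])
    for _ in range(max_iter):
        best = nests[np.argmin(costs)]
        for i in range(n_nests):
            # New solution via a Levy flight biased toward the best nest,
            # kept greedily only if it improves the current nest.
            new = nests[i] + alpha * levy_step(dim, rng) * (nests[i] - best)
            new = np.clip(new, lb, ub)
            if f(new) < costs[i]:
                nests[i], costs[i] = new, f(new)
        # A fraction pa of the worst nests is abandoned and rebuilt at random.
        n_drop = int(pa * n_nests)
        worst = np.argsort(costs)[-n_drop:]
        nests[worst] = rng.uniform(lb, ub, size=(n_drop, dim))
        costs[worst] = [f(x) for x in nests[worst]]
    return nests[np.argmin(costs)], float(np.min(costs))

x_best, f_best = cuckoo_search(rastrigin)
print("best Rastrigin value found:", round(f_best, 3))
```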
Surrogate-assisted particle swarm optimization algorithm with Pareto active learning for expensive multi-objective optimization For multi-objective optimization problems, the particle swarm optimization (PSO) algorithm generally needs a large number of fitness evaluations to obtain the Pareto optimal solutions, which becomes substantially time-consuming when the fitness functions are computationally expensive. In order to save computational cost, a surrogate-assisted PSO with Pareto active learning is proposed. In the real physical space (where the objective functions are computationally expensive), PSO is used as an optimizer, and its optimization results are used to construct the surrogate models. In the virtual space, the objective functions are replaced by the cheaper surrogate models, and PSO is used as a sampler to produce candidate solutions. To enhance the quality of candidate solutions, a hybrid mutation sampling method based on simulated evolution is proposed, which combines the fast convergence of PSO with mutation to increase diversity. Furthermore, the ε-Pareto active learning (ε-PAL) method is employed to pre-select candidate solutions to guide PSO in the real physical space. However, little work has considered how to determine the parameter ε. Therefore, a greedy search method is presented to determine the value of ε, where the number of active samplings is employed as the evaluation criterion of classification cost. Experimental studies involving a number of benchmark test problems and parameter determination for multi-input multi-output least squares support vector machines (MLSSVM) are given, and the results demonstrate promising performance of the proposed algorithm compared with other representative multi-objective particle swarm optimization (MOPSO) algorithms.
Multiobjective Optimization Models for Locating Vehicle Inspection Stations Subject to Stochastic Demand, Varying Velocity and Regional Constraints Deciding the optimal location of a transportation facility and automotive service enterprise is an interesting and important issue in the area of facility location allocation (FLA). In practice, some factors, i.e., customer demands, allocations, and the locations of customers and facilities, are changing, and thus the problem features uncertainty. To account for this uncertainty, some researchers have addressed the stochastic time and cost issues of FLA. A new FLA research issue arises when decision makers want to minimize customers' transportation time and transportation cost while ensuring that customers arrive at their desired destinations within some specific time and cost. By taking the vehicle inspection station as a typical automotive service enterprise example, this paper presents a novel stochastic multiobjective optimization approach to address it. This work builds two practical stochastic multiobjective programs subject to stochastic demand, varying velocity, and regional constraints. A hybrid intelligent algorithm integrating stochastic simulation and a multiobjective teaching-learning-based optimization algorithm is proposed to solve the proposed programs. The approach is applied to a real-world location problem of a vehicle inspection station in Fushun, China. The results show that the approach is able to produce satisfactory Pareto solutions for an actual vehicle inspection station location problem.
Semi-supervised Stacked Label Consistent Autoencoder for Reconstruction and Analysis of Biomedical Signals. Objective: An autoencoder-based framework that simultaneously reconstructs and classifies biomedical signals is proposed. Previous work has treated reconstruction and classification as separate problems. This is the first study that proposes a combined framework to address the issue in a holistic fashion. Methods: For telemonitoring purposes, reconstruction techniques of biomedical signals are largel...
Parallel planning: a new motion planning framework for autonomous driving Motion planning is one of the most significant technologies for autonomous driving. To make motion planning models able to learn from the environment and to deal with emergency situations, a new motion planning framework called “parallel planning” is proposed in this paper. In order to generate sufficient and diverse training samples, artificial traffic scenes are first constructed based on t...
Multi-objective Infill Criterion Driven Gaussian Process Assisted Particle Swarm Optimization of High-dimensional Expensive Problems Model management plays an essential role in surrogate-assisted evolutionary optimization of expensive problems, since the strategy for selecting individuals for fitness evaluation using the real objective function has substantial influences on the final performance. Among many others, infill criterion driven Gaussian process (GP)-assisted evolutionary algorithms have been demonstrated competitive for optimization of problems with up to 50 decision variables. In this paper, a multiobjective infill criterion (MIC) that considers the approximated fitness and the approximation uncertainty as two objectives is proposed for a GP-assisted social learning particle swarm optimization algorithm. The MIC uses nondominated sorting for model management, thereby avoiding combining the approximated fitness and the approximation uncertainty into a scalar function, which is shown to be particularly important for high-dimensional problems, where the estimated uncertainty becomes less reliable. Empirical studies on 50-D and 100-D benchmark problems and a synthetic problem constructed from four real-world optimization problems demonstrate that the proposed MIC is more effective than existing scalar infill criteria for GP-assisted optimization given a limited computational budget.
Neural Architecture Transfer Neural architecture search (NAS) has emerged as a promising avenue for automatically designing task-specific neural networks. Existing NAS approaches require one complete search for each deployment specification of hardware or objective. This is a computationally impractical endeavor given the potentially large number of application scenarios. In this paper, we propose Neural Architecture ...
A Supervised Learning and Control Method to Improve Particle Swarm Optimization Algorithms. This paper presents an adaptive particle swarm optimization with supervised learning and control (APSO-SLC) for the parameter setting and diversity maintenance of particle swarm optimization (PSO), which adaptively chooses parameters while improving the exploration competence of PSO. Although PSO is a powerful optimization method, it faces such issues as difficult parameter setting and premature convergence. Inspired by supervised learning and predictive control strategies from the machine learning and control fields, we propose APSO-SLC, which employs several strategies to address these issues. First, we treat PSO together with its optimization problem as a system to be controlled and model it as a dynamic quadratic programming model with box constraints. Its parameters are estimated by recursive least squares with a dynamic forgetting factor that reinforces better parameter settings and weakens worse ones. The optimal parameters are calculated by this model and fed back to PSO. Second, a progress vector is proposed to monitor the progress rate and judge whether premature convergence is happening. By studying the reasons for premature convergence, this work proposes back diffusion and new attractor learning strategies to extend swarm diversity and speed up convergence. Experiments are performed on many benchmark functions to compare APSO-SLC with state-of-the-art PSOs. The results show that it is simple to program and understand, and can provide excellent and consistent performance.
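Since the abstract leans on recursive least squares with a forgetting factor for its parameter estimation step, the sketch below is a generic RLS-with-forgetting implementation in NumPy; it is only a plain textbook version with a fixed forgetting factor `lam`, not the dynamic forgetting factor or the APSO-SLC feedback loop described in the paper, and the toy data are invented.

```python
import numpy as np

def rls_forgetting(X, y, lam=0.98, delta=1000.0):
    """Generic recursive least squares with a (fixed) forgetting factor lam.
    Streams through (X, y) and returns the final parameter estimate."""
    n_features = X.shape[1]
    theta = np.zeros(n_features)
    P = delta * np.eye(n_features)              # inverse-covariance estimate
    for x, target in zip(X, y):
        x = x.reshape(-1, 1)
        Px = P @ x
        k = Px / (lam + x.T @ Px)               # gain vector
        err = target - (x.T @ theta.reshape(-1, 1)).item()
        theta = theta + k.ravel() * err
        P = (P - k @ Px.T) / lam                # discount old information
    return theta

# Example: recover a linear relationship from noisy streaming data
rng = np.random.default_rng(0)
n, d = 500, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.05 * rng.normal(size=n)
print(np.round(rls_forgetting(X, y), 3))        # close to [1.0, -2.0, 0.5]
```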
Firefly algorithm, stochastic test functions and design optimisation Modern optimisation algorithms are often metaheuristic, and they are very promising in solving NP-hard optimisation problems. In this paper, we show how to use the recently developed firefly algorithm to solve non-linear design problems. For the standard pressure vessel design optimisation, the optimal solution found by FA is far better than the best solution obtained previously in the literature. In addition, we also propose a few new test functions with either singularity or stochastic components but with known global optimality and thus they can be used to validate new optimisation algorithms. Possible topics for further research are also discussed.
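The abstract above does not restate the firefly update rule, so here is a minimal sketch of the standard firefly algorithm as it is usually presented (attractiveness decaying with squared distance plus a small random walk); the parameter values `beta0`, `gamma`, and `alpha` and the sphere test function are illustrative assumptions, not settings from the paper.

```python
import numpy as np

def firefly_minimize(f, dim, n_fireflies=25, iters=200,
                     beta0=1.0, gamma=1.0, alpha=0.2, bounds=(-5.0, 5.0), seed=0):
    """Minimal firefly algorithm sketch for minimizing f on a box."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_fireflies, dim))    # positions
    fit = np.array([f(xi) for xi in x])                 # objective values (lower = brighter)
    for _ in range(iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if fit[j] < fit[i]:                      # j is brighter than i, so i moves toward j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    x[i] = np.clip(x[i], lo, hi)
                    fit[i] = f(x[i])
    best = np.argmin(fit)
    return x[best], fit[best]

# Example: minimize the sphere function
xbest, fbest = firefly_minimize(lambda v: float(np.sum(v ** 2)), dim=5)
print(xbest, fbest)
```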
Anomaly detection: A survey Anomaly detection is an important problem that has been researched within diverse research areas and application domains. Many anomaly detection techniques have been specifically developed for certain application domains, while others are more generic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. We have grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category we have identified key assumptions, which are used by the techniques to differentiate between normal and anomalous behavior. When applying a given technique to a particular domain, these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain. For each category, we provide a basic anomaly detection technique, and then show how the different existing techniques in that category are variants of the basic technique. This template provides an easier and more succinct understanding of the techniques belonging to each category. Further, for each category, we identify the advantages and disadvantages of the techniques in that category. We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains. We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic, and how techniques developed in one area can be applied in domains for which they were not intended to begin with.
Picbreeder: evolving pictures collaboratively online Picbreeder is an online service that allows users to collaboratively evolve images. Like in other Interactive Evolutionary Computation (IEC) programs, users evolve images on Picbreeder by selecting ones that appeal to them to produce a new generation. However, Picbreeder also offers an online community in which to share these images, and most importantly, the ability to continue evolving others' images. Through this process of branching from other images, and through continually increasing image complexity made possible by the NeuroEvolution of Augmenting Topologies (NEAT) algorithm, evolved images proliferate unlike in any other current IEC systems. Participation requires no explicit talent from the users, thereby opening Picbreeder to the entire Internet community. This paper details how Picbreeder encourages innovation, featuring images that were collaboratively evolved.
A lightweight soft exosuit for gait assistance In this paper we present a soft lower-extremity robotic exosuit intended to augment normal muscle function in healthy individuals. Compared to previous exoskeletons, the device is ultra-lightweight, resulting in low mechanical impedance and inertia. The exosuit has custom McKibben style pneumatic actuators that can assist the hip, knee and ankle. The actuators attach to the exosuit through a network of soft, inextensible webbing triangulated to attachment points utilizing a novel approach we call the virtual anchor technique. This approach is designed to transfer forces to locations on the body that can best accept load. Pneumatic actuation was chosen for this initial prototype because the McKibben actuators are soft and can be easily driven by an off-board compressor. The exosuit itself (human interface and actuators) had a mass of 3500 g and with peripherals (excluding air supply) is 7144 g. In order to examine the exosuit's performance, a pilot study with one subject was performed which investigated the effect of the ankle plantar-flexion timing on the wearer's hip, knee and ankle joint kinematics and metabolic power when walking. Wearing the suit in a passive unpowered mode had little effect on hip, knee and ankle joint kinematics as compared to baseline walking when not wearing the suit. Engaging the actuators at the ankles at 30% of the gait cycle for 250 ms altered joint kinematics the least and also minimized metabolic power. The subject's average metabolic power was 386.7 W, almost identical to the average power when wearing no suit (381.8 W), and substantially less than walking with the unpowered suit (430.6 W). This preliminary work demonstrates that the exosuit can comfortably transmit joint torques to the user while not restricting mobility and that with further optimization, has the potential to reduce the wearer's metabolic cost during walking.
An improved E-DRM scheme for mobile environments. With the rapid development of information science and network technology, the Internet has become an important platform for the dissemination of digital content, which can be easily copied and distributed. Although convenience is increased, this causes significant damage to authors of digital content. A digital rights management (DRM) system is an access control system designed to protect digital content and prevent illegal users from maliciously spreading it. An Enterprise Digital Rights Management (E-DRM) system is a DRM system that prevents unauthorized users from stealing an enterprise's confidential data. User authentication is the most important method to ensure digital rights management. To verify the validity of a user, biometrics-based authentication protocols are widely used because the biological characteristics of each user are unique; biometric identification can therefore ensure the correctness of a user's identity. In addition, due to the popularity of mobile devices and the Internet, users can access digital content and network information anytime and anywhere. Recently, Mishra et al. proposed an anonymous and secure biometric-based enterprise digital rights management system for mobile environments. Although biometrics-based authentication is used to prevent users from being impersonated, the anonymity of users and the protection of digital content are not ensured in their system. Therefore, in this paper, we propose a more efficient and secure biometric-based enterprise digital rights management system with user anonymity for mobile environments.
Learning Feature Recovery Transformer for Occluded Person Re-Identification One major issue that challenges person re-identification (Re-ID) is the ubiquitous occlusion over the captured persons. There are two main challenges for the occluded person Re-ID problem, i.e., the interference of noise during feature matching and the loss of pedestrian information brought by the occlusions. In this paper, we propose a new approach called Feature Recovery Transformer (FRT) to address the two challenges simultaneously, which mainly consists of visibility graph matching and feature recovery transformer. To reduce the interference of the noise during feature matching, we mainly focus on visible regions that appear in both images and develop a visibility graph to calculate the similarity. In terms of the second challenge, based on the developed graph similarity, for each query image, we propose a recovery transformer that exploits the feature sets of its k-nearest neighbors in the gallery to recover the complete features. Extensive experiments across different person Re-ID datasets, including occluded, partial and holistic datasets, demonstrate the effectiveness of FRT. Specifically, FRT significantly outperforms state-of-the-art results by at least 6.2% Rank- 1 accuracy and 7.2% mAP scores on the challenging Occluded-Duke dataset.
Scores (score_0–score_13): 1.102167, 0.102167, 0.1, 0.1, 0.1, 0.1, 0.051667, 0.026236, 0.000117, 0, 0, 0, 0, 0
Enabling Live Video Analytics with a Scalable and Privacy-Aware Framework. We show how to build the components of a privacy-aware, live video analytics ecosystem from the bottom up, starting with OpenFace, our new open-source face recognition system that approaches state-of-the-art accuracy. Integrating OpenFace with interframe tracking, we build RTFace, a mechanism for denaturing video streams that selectively blurs faces according to specified policies at full frame rates. This enables privacy management for live video analytics while providing a secure approach for handling retrospective policy exceptions. Finally, we present a scalable, privacy-aware architecture for large camera networks using RTFace and show how it can be an enabler for a vibrant ecosystem and marketplace of privacy-aware video streams and analytics services.
Scalable and Privacy-Preserving Data Sharing Based on Blockchain. With the development of network technology and cloud computing, data sharing is becoming increasingly popular, and many scholars have conducted in-depth research to promote its development. As the scale of data sharing expands, its privacy protection has become a hot research issue. Moreover, in data sharing the data is usually maintained by multiple parties, which brings new challenges for protecting the privacy of such multi-party data. In this paper, we propose a trusted data sharing scheme using blockchain. We use blockchain to prevent the shared data from being tampered with, and use the Paillier cryptosystem to realize the confidentiality of the shared data. In the proposed scheme, the shared data can be traded, and the transaction information is protected by using the (p, t)-threshold Paillier cryptosystem. We conduct experiments in cloud storage scenarios, and the experimental results demonstrate the efficiency and effectiveness of the proposed scheme.
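As a hedged illustration of why the Paillier cryptosystem fits this kind of scheme, the sketch below demonstrates its additive homomorphism with the open-source `phe` (python-paillier) package; the (p, t)-threshold variant used in the paper is not provided by `phe`, so only plain single-key Paillier is shown.

```python
# pip install phe   (python-paillier)
from phe import paillier

# Key generation (single-key Paillier; the paper's (p, t)-threshold variant is not shown here)
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Two parties encrypt their shared-data values under the same public key
c1 = public_key.encrypt(42)
c2 = public_key.encrypt(58)

# Additive homomorphism: the sum is computed on ciphertexts,
# so an aggregator never sees the individual plaintexts.
c_sum = c1 + c2
assert private_key.decrypt(c_sum) == 100

# Multiplying a ciphertext by a plaintext constant is also supported
c_scaled = c1 * 3
assert private_key.decrypt(c_scaled) == 126
print("Paillier additive homomorphism verified")
```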
DAAC: Digital Asset Access Control in a Unified Blockchain Based E-Health System The use of the Internet of Things and modern technologies has boosted the expansion of e-health solutions significantly and allowed access to better health services and remote monitoring of patients. Every service provider usually implements its information system to manage and access patient data for its unique purpose. Hence, the interoperability among independent e-health service providers is still a major challenge. From the structure of stored data to its large volume, the design of each such big data system varies, hence the cooperation among different e-health systems is almost impossible. In addition to this, the security and privacy of patient information is a challenging task. Building a unified solution for all creates significant business and economic issues. In this article, we present a solution to migrate existing e-health systems to a unified Blockchain-based model, where access to large scale medical data of patients can be achieved seamlessly by any service provider. A core blockchain network connects individual & independent e-health systems without requiring them to modify their internal processes. Access to patient data in the form of digital assets stored in off-chain storage is controlled through patient-centric channels and policy transactions. Through emulation, we show that the proposed solution can interconnect different e-health systems efficiently.
A Survey Of Security Threats And Defense On Blockchain Blockchain provides a trusted environment for storing information and propagating transactions. Owing to the distributed property and integrity, blockchain has been employed in various domains. However, lots of studies prove that the security mechanism of blockchain exposes its vulnerability especially when the blockchain suffers attacks. This work provides a systematic summary of the security threats and countermeasures on blockchain. We first review the working procedure and its implementation techniques. We then summarize basic security properties of blockchain. From the view of the blockchain's architecture, we describe security threats of blockchain, including weak anonymity, vulnerability of P2P network, consensus mechanism, incentive mechanism and smart contract. We then describe the related attacks and summarize the current representative countermeasures which improve anonymity and robustness against security threats respectively. Finally, we also put forward future research directions on consensus, incentive mechanisms, privacy preservation and encryption algorithm to further enhance security and privacy of the blockchain-based multimedia.
Security and blockchain convergence with Internet of Multimedia Things: Current trends, research challenges and future directions The Internet of Multimedia Things (IoMT) orchestration enables the integration of systems, software, cloud, and smart sensors into a single platform. The IoMT deals with scalar as well as multimedia data. In these networks, sensor-embedded devices and their data face numerous challenges when it comes to security. In this paper, a comprehensive review of the existing literature for IoMT is presented in the context of security and blockchain. The latest literature on all three aspects of security, i.e., authentication, privacy, and trust is provided to explore the challenges experienced by multimedia data. The convergence of blockchain and IoMT along with multimedia-enabled blockchain platforms are discussed for emerging applications. To highlight the significance of this survey, large-scale commercial projects focused on security and blockchain for multimedia applications are reviewed. The shortcomings of these projects are explored and suggestions for further improvement are provided. Based on the aforementioned discussion, we present our own case study for healthcare industry: a theoretical framework having security and blockchain as key enablers. The case study reflects the importance of security and blockchain in multimedia applications of healthcare sector. Finally, we discuss the convergence of emerging technologies with security, blockchain and IoMT to visualize the future of tomorrow's applications.
Image quality assessment: from error visibility to structural similarity. Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
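To make the structural-similarity idea concrete, here is a minimal single-window SSIM sketch in NumPy using the commonly cited constants (K1 = 0.01, K2 = 0.03, dynamic range L = 255); the index proposed in the paper is computed over local sliding windows and then averaged, which this simplified global version omits.

```python
import numpy as np

def ssim_global(x, y, L=255.0, K1=0.01, K2=0.03):
    """Simplified, single-window SSIM between two grayscale images of equal shape."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2

    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()

    num = (2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    return num / den

# Example: an identical copy scores exactly 1, a noisy copy scores below 1
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.normal(0, 20, size=img.shape), 0, 255)
print(ssim_global(img, img))    # 1.0
print(ssim_global(img, noisy))  # < 1.0
```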
Vision meets robotics: The KITTI dataset We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS/IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.
A tutorial on support vector regression In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective.
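As a quick companion to the tutorial's treatment of ε-insensitive regression, the sketch below fits a standard ε-SVR with an RBF kernel via scikit-learn; the toy data and hyperparameter values are placeholders, not recommendations from the tutorial.

```python
import numpy as np
from sklearn.svm import SVR

# Toy 1-D regression problem: noisy sine wave
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 2 * np.pi, 200)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.1, X.shape[0])

# epsilon sets the width of the insensitive tube; C trades off flatness vs. violations
model = SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma="scale")
model.fit(X, y)

print("support vectors used:", len(model.support_))
print("prediction at pi/2:", model.predict([[np.pi / 2]])[0])
```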
GameFlow: a model for evaluating player enjoyment in games Although player enjoyment is central to computer games, there is currently no accepted model of player enjoyment in games. There are many heuristics in the literature, based on elements such as the game interface, mechanics, gameplay, and narrative. However, there is a need to integrate these heuristics into a validated model that can be used to design, evaluate, and understand enjoyment in games. We have drawn together the various heuristics into a concise model of enjoyment in games that is structured by flow. Flow, a widely accepted model of enjoyment, includes eight elements that, we found, encompass the various heuristics from the literature. Our new model, GameFlow, consists of eight elements -- concentration, challenge, skills, control, clear goals, feedback, immersion, and social interaction. Each element includes a set of criteria for achieving enjoyment in games. An initial investigation and validation of the GameFlow model was carried out by conducting expert reviews of two real-time strategy games, one high-rating and one low-rating, using the GameFlow criteria. The result was a deeper understanding of enjoyment in real-time strategy games and the identification of the strengths and weaknesses of the GameFlow model as an evaluation tool. The GameFlow criteria were able to successfully distinguish between the high-rated and low-rated games and identify why one succeeded and the other failed. We concluded that the GameFlow model can be used in its current form to review games; further work will provide tools for designing and evaluating enjoyment in games.
Adapting visual category models to new domains Domain adaptation is an important emerging topic in computer vision. In this paper, we present one of the first studies of domain shift in the context of object recognition. We introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. While we focus our evaluation on object recognition tasks, the transform-based adaptation technique we develop is general and could be applied to nonimage data. Another contribution is a new multi-domain object database, freely available for download. We experimentally demonstrate the ability of our method to improve recognition on categories with few or no target domain labels and moderate to large changes in the imaging conditions.
A Web-Based Tool For Control Engineering Teaching In this article a new tool for control engineering teaching is presented. The tool was implemented using Java applets and is freely accessible through the Web. It allows the analysis and simulation of linear control systems and was created to complement the theoretical lectures in basic control engineering courses. The article is centered not only on the description of the tool but also on the methodology for using it and on its evaluation in an electrical engineering degree. Two practical problems are included in the manuscript to illustrate the use of the main functions implemented. The developed web-based tool can be accessed through the link http://www.controlweb.cyc.ull.es.
Beamforming for MISO Interference Channels with QoS and RF Energy Transfer We consider a multiuser multiple-input single-output interference channel where the receivers are characterized by both quality-of-service (QoS) and radio-frequency (RF) energy harvesting (EH) constraints. We consider the power splitting RF-EH technique where each receiver divides the received signal into two parts a) for information decoding and b) for battery charging. The minimum required power that supports both the QoS and the RF-EH constraints is formulated as an optimization problem that incorporates the transmitted power and the beamforming design at each transmitter as well as the power splitting ratio at each receiver. We consider both the cases of fixed beamforming and when the beamforming design is incorporated into the optimization problem. For fixed beamforming we study three standard beamforming schemes, the zero-forcing (ZF), the regularized zero-forcing (RZF) and the maximum ratio transmission (MRT); a hybrid scheme, MRT-ZF, comprised of a linear combination of MRT and ZF beamforming is also examined. The optimal solution for ZF beamforming is derived in closed-form, while optimization algorithms based on second-order cone programming are developed for MRT, RZF and MRT-ZF beamforming to solve the problem. In addition, the joint-optimization of beamforming and power allocation is studied using semidefinite programming (SDP) with the aid of rank relaxation.
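For readers unfamiliar with the fixed beamformers mentioned above, this is a small NumPy sketch of MRT and ZF beamforming vectors for a K-user MISO interference channel; the random channel realizations are placeholders, and neither the power-splitting ratios nor the SDP step from the paper is reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
K, M = 3, 4                     # K transmitter-receiver pairs, M antennas per transmitter
# H[k, j, :] = channel from transmitter k to receiver j (complex Gaussian placeholder)
H = (rng.normal(size=(K, K, M)) + 1j * rng.normal(size=(K, K, M))) / np.sqrt(2)

def mrt_beamformer(k):
    """Maximum ratio transmission: steer all energy toward the intended receiver."""
    h = H[k, k]
    return h / np.linalg.norm(h)

def zf_beamformer(k):
    """Zero-forcing: pick w in the null space of the (Hermitian) channels toward the
    unintended receivers, then keep the component along the desired channel."""
    A = np.vstack([H[k, j].conj() for j in range(K) if j != k])   # (K-1) x M
    P = np.eye(M) - np.linalg.pinv(A) @ A                         # projector onto null(A)
    w = P @ H[k, k]
    return w / np.linalg.norm(w)

for k in range(K):
    w_mrt, w_zf = mrt_beamformer(k), zf_beamformer(k)
    mrt_leak = max(abs(np.vdot(H[k, j], w_mrt)) for j in range(K) if j != k)
    zf_leak = max(abs(np.vdot(H[k, j], w_zf)) for j in range(K) if j != k)
    print(f"Tx {k}: interference leakage MRT={mrt_leak:.3f}, ZF={zf_leak:.2e}")
```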
Multi-stream CNN: Learning representations based on human-related regions for action recognition. • Presenting a multi-stream CNN architecture to incorporate multiple complementary features trained in appearance and motion networks. • Demonstrating that using full-frame, human body, and motion-salient body part regions together is effective to improve recognition performance. • Proposing methods to detect the actor and motion-salient body part precisely. • Verifying that high-quality flow is critically important to learn accurate video representations for action recognition.
Energy harvesting algorithm considering the max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with poor energy always have a bad effect on the data rate, or max flow. These nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we assume an energy harvesting WSN environment and investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy harvesting WSN environments, sensor nodes can obtain energy replenishment from MCs or by collecting energy from nature themselves. In our research, we use MCs to improve the energy of the sensor nodes by performing multiple rounds of unified scheduling, with the ultimate purpose of increasing the max flow at the sinks. Firstly, we model this problem as a Linear Program (LP) to search for the max flow in a round of charging scheduling and prove that the problem is NP-hard. To solve it, we propose a heuristic approach: deploying MCs in units of paths, with the lowest-energy nodes given priority. To reduce the energy consumption of the MCs and increase the charging efficiency, we also take the optimization of the MCs’ moving distance into consideration. Finally, we extend the method to multiple rounds of scheduling, called BottleNeck. Simulation results show that BottleNeck performs well at increasing the max flow.
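Since the scheduling objective in the abstract revolves around a max-flow computation, here is a hedged NetworkX sketch that computes the max flow of a small sensor-to-sink graph and flags saturated "bottleneck" edges; the topology, capacities, and node names are invented for illustration and are not the paper's LP model.

```python
import networkx as nx

# Toy WSN topology: edge capacities stand in for the data rate each link can sustain,
# which in the paper would be limited by the nodes' residual energy.
G = nx.DiGraph()
edges = [
    ("s1", "a", 5), ("s2", "a", 4), ("s2", "b", 3),
    ("a", "c", 6), ("b", "c", 2), ("b", "sink", 4),
    ("c", "sink", 7),
]
for u, v, cap in edges:
    G.add_edge(u, v, capacity=cap)

# Super-source feeding the two sensor sources
G.add_edge("src", "s1", capacity=5)
G.add_edge("src", "s2", capacity=7)

flow_value, flow_dict = nx.maximum_flow(G, "src", "sink")
print("max flow:", flow_value)

# Saturated edges are charging candidates: raising their capacity
# (i.e., recharging the corresponding nodes) may raise the max flow.
for u, v, data in G.edges(data=True):
    if flow_dict[u][v] == data["capacity"]:
        print("bottleneck edge:", u, "->", v, "capacity", data["capacity"])
```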
Scores (score_0–score_13): 1.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0, 0
Periodic-CRN: A Convolutional Recurrent Model for Crowd Density Prediction with Recurring Periodic Patterns.
Forecasting holiday daily tourist flow based on seasonal support vector regression with adaptive genetic algorithm. • A support vector regression model with an adaptive genetic algorithm and a seasonal mechanism is proposed. • Parameter selection and seasonal adjustment should be carried out carefully. • We focus on recent and representative holiday daily data in China. • Two experiments are used to demonstrate the effectiveness of the model. • The AGASSVR is superior to AGA-SVR and BPNN.
Regression conformal prediction with random forests Regression conformal prediction produces prediction intervals that are valid, i.e., the probability of excluding the correct target value is bounded by a predefined confidence level. The most important criterion when comparing conformal regressors is efficiency; the prediction intervals should be as tight (informative) as possible. In this study, the use of random forests as the underlying model for regression conformal prediction is investigated and compared to existing state-of-the-art techniques, which are based on neural networks and k-nearest neighbors. In addition to their robust predictive performance, random forests allow for determining the size of the prediction intervals by using out-of-bag estimates instead of requiring a separate calibration set. An extensive empirical investigation, using 33 publicly available data sets, was undertaken to compare the use of random forests to existing state-of-the-art conformal predictors. The results show that the suggested approach, on almost all confidence levels and using both standard and normalized nonconformity functions, produced significantly more efficient conformal predictors than the existing alternatives.
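To illustrate the out-of-bag idea described above, here is a hedged scikit-learn sketch that builds prediction intervals from a random forest using absolute out-of-bag residuals as nonconformity scores; it corresponds to the standard (non-normalized) nonconformity function only, and the synthetic data set is a stand-in for the 33 public data sets used in the study.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# oob_score=True makes out-of-bag predictions available, so no separate
# calibration set is needed (the key point of the paper).
rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X_train, y_train)

# Nonconformity scores: absolute out-of-bag residuals on the training set
alphas = np.abs(y_train - rf.oob_prediction_)

confidence = 0.9
k = int(np.ceil((len(alphas) + 1) * confidence))      # finite-sample conformal quantile
q = np.sort(alphas)[min(k, len(alphas)) - 1]

preds = rf.predict(X_test)
lower, upper = preds - q, preds + q
coverage = np.mean((y_test >= lower) & (y_test <= upper))
print(f"empirical coverage at {confidence:.0%}: {coverage:.3f}, interval half-width: {q:.2f}")
```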
Learning to Predict Bus Arrival Time From Heterogeneous Measurements via Recurrent Neural Network Bus arrival time prediction intends to improve the level of the services provided by transportation agencies. Intuitively, many stochastic factors affect the predictability of the arrival time, e.g., weather and local events. Moreover, the arrival time prediction for a current station is closely correlated with that of multiple passed stations. Motivated by the observations above, this paper propo...
Hybrid Spatio-Temporal Graph Convolutional Network: Improving Traffic Prediction with Navigation Data Traffic forecasting has recently attracted increasing interest due to the popularity of online navigation services, ridesharing and smart city projects. Owing to the non-stationary nature of road traffic, forecasting accuracy is fundamentally limited by the lack of contextual information. To address this issue, we propose the Hybrid Spatio-Temporal Graph Convolutional Network (H-STGCN), which is able to "deduce" future travel time by exploiting the data of upcoming traffic volume. Specifically, we propose an algorithm to acquire the upcoming traffic volume from an online navigation engine. Taking advantage of the piecewise-linear flow-density relationship, a novel transformer structure converts the upcoming volume into its equivalent in travel time. We combine this signal with the commonly-utilized travel-time signal, and then apply graph convolution to capture the spatial dependency. Particularly, we construct a compound adjacency matrix which reflects the innate traffic proximity. We conduct extensive experiments on real-world datasets. The results show that H-STGCN remarkably outperforms state-of-the-art methods in various metrics, especially for the prediction of non-recurring congestion.
Long-Term Traffic Speed Prediction Based on Multiscale Spatio-Temporal Feature Learning Network Speed plays a significant role in evaluating the evolution of traffic status, and predicting speed is one of the fundamental tasks for the intelligent transportation system. There exists a large number of works on speed forecast; however, the problem of long-term prediction for the next day is still not well addressed. In this paper, we propose a multiscale spatio-temporal feature learning network (MSTFLN) as the model to handle the challenging task of long-term traffic speed prediction for elevated highways. Raw traffic speed data collected from loop detectors every 5 min are transformed into spatial-temporal matrices; each matrix represents the one-day speed information, rows of the matrix indicate the numbers of loop detectors, and time intervals are denoted by columns. To predict the traffic speed of a certain day, nine speed matrices of three historical days with three different time scales are served as the input of MSTFLN. The proposed MSTFLN model consists of convolutional long short-term memories and convolutional neural networks. Experiments are evaluated using the data of three main elevated highways in Shanghai, China. The presented results demonstrate that our approach outperforms the state-of-the-art work and it can effectively predict the long-term speed information.
Estimation of missing values in heterogeneous traffic data: Application of multimodal deep learning model With the development of sensing technology, a large amount of heterogeneous traffic data can be collected. However, the raw data often contain corrupted or missing values, which need to be imputed to aid traffic condition monitoring and the assessment of system performance. Several existing studies have reported imputation models used to impute the missing values, and most of these models aimed to capture the spatial or temporal dependencies. However, the dependencies of the heterogeneous data were ignored. To this end, we propose a multimodal deep learning model to enable heterogeneous traffic data imputation. The model involves the use of two parallel stacked autoencoders that can simultaneously consider the spatial and temporal dependencies. In addition, a latent feature fusion layer is developed to capture the dependencies of the heterogeneous traffic data. To train the proposed imputation model, a hierarchical training method is introduced. Using a real-world dataset, the performance of the proposed model is evaluated and compared with that of several widely used temporal imputation models, spatial imputation models, and spatial–temporal imputation models. The experimental results show that the evaluation criteria of the proposed model take smaller values, indicating better performance. The results also show that the proposed model can accurately impute continuously missing data. Furthermore, the sensitivity of the parameters used in the proposed multimodal deep learning model is investigated. This study clearly demonstrates the effectiveness of deep learning for heterogeneous traffic data synthesis and missing data imputation. The dependencies of the heterogeneous traffic data should be considered in future studies to improve the performance of the imputation model.
Origin-Destination Matrix Prediction via Graph Convolution: a New Perspective of Passenger Demand Modeling Ride-hailing applications are becoming more and more popular for providing drivers and passengers with convenient ride services, especially in metropolises like Beijing or New York. To obtain the passengers' mobility patterns, the online platforms of ride services need to predict the number of passenger demands from one region to another in advance. We formulate this problem as an Origin-Destination Matrix Prediction (ODMP) problem. Though this problem is essential to large-scale providers of ride services for helping them make decisions and some providers have already put it forward in public, existing studies have not solved this problem well. One of the main reasons is that the ODMP problem is more challenging than the common demand prediction. Besides the number of demands in a region, it also requires the model to predict the destinations of them. In addition, data sparsity is a severe issue. To solve the problem effectively, we propose a unified model, Grid-Embedding based Multi-task Learning (GEML) which consists of two components focusing on spatial and temporal information respectively. The Grid-Embedding part is designed to model the spatial mobility patterns of passengers and neighboring relationships of different areas, the pre-weighted aggregator of which aims to sense the sparsity and range of data. The Multi-task Learning framework focuses on modeling temporal attributes and capturing several objectives of the ODMP problem. The evaluation of our model is conducted on real operational datasets from UCAR and Didi. The experimental results demonstrate the superiority of our GEML against the state-of-the-art approaches.
Deep learning Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech. Machine-learning technology powers many aspects of modern society: from web searches to content filtering on social networks to recommendations on e-commerce websites, and it is increasingly present in consumer products such as cameras and smartphones. Machine-learning systems are used to identify objects in images, transcribe speech into text, match news items, posts or products with users' interests, and select relevant results of search. Increasingly, these applications make use of a class of techniques called deep learning. Conventional machine-learning techniques were limited in their ability to process natural data in their raw form. For decades, constructing a pattern-recognition or machine-learning system required careful engineering and considerable domain expertise to design a feature extractor that transformed the raw data (such as the pixel values of an image) into a suitable internal representation or feature vector from which the learning subsystem, often a classifier, could detect or classify patterns in the input. Representation learning is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification. Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level. With the composition of enough such transformations, very complex functions can be learned. For classification tasks, higher layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations. An image, for example, comes in the form of an array of pixel values, and the learned features in the first layer of representation typically represent the presence or absence of edges at particular orientations and locations in the image. The second layer typically detects motifs by spotting particular arrangements of edges, regardless of small variations in the edge positions. The third layer may assemble motifs into larger combinations that correspond to parts of familiar objects, and subsequent layers would detect objects as combinations of these parts. The key aspect of deep learning is that these layers of features are not designed by human engineers: they are learned from data using a general-purpose learning procedure. Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years. 
It has turned out to be very good at discovering intricate structures in high-dimensional data and is therefore applicable to many domains of science, business and government. In addition to beating records in image recognition1, 2, 3, 4 and speech recognition5, 6, 7, it has beaten other machine-learning techniques at predicting the activity of potential drug molecules8, analysing particle accelerator data9, 10, reconstructing brain circuits11, and predicting the effects of mutations in non-coding DNA on gene expression and disease12, 13. Perhaps more surprisingly, deep learning has produced extremely promising results for various tasks in natural language understanding14, particularly topic classification, sentiment analysis, question answering15 and language translation16, 17. We think that deep learning will have many more successes in the near future because it requires very little engineering by hand, so it can easily take advantage of increases in the amount of available computation and data. New learning algorithms and architectures that are currently being developed for deep neural networks will only accelerate this progress. The most common form of machine learning, deep or not, is supervised learning. Imagine that we want to build a system that can classify images as containing, say, a house, a car, a person or a pet. We first collect a large data set of images of houses, cars, people and pets, each labelled with its category. During training, the machine is shown an image and produces an output in the form of a vector of scores, one for each category. We want the desired category to have the highest score of all categories, but this is unlikely to happen before training. We compute an objective function that measures the error (or distance) between the output scores and the desired pattern of scores. The machine then modifies its internal adjustable parameters to reduce this error. These adjustable parameters, often called weights, are real numbers that can be seen as 'knobs' that define the input–output function of the machine. In a typical deep-learning system, there may be hundreds of millions of these adjustable weights, and hundreds of millions of labelled examples with which to train the machine. To properly adjust the weight vector, the learning algorithm computes a gradient vector that, for each weight, indicates by what amount the error would increase or decrease if the weight were increased by a tiny amount. The weight vector is then adjusted in the opposite direction to the gradient vector. The objective function, averaged over all the training examples, can be seen as a kind of hilly landscape in the high-dimensional space of weight values. The negative gradient vector indicates the direction of steepest descent in this landscape, taking it closer to a minimum, where the output error is low on average. In practice, most practitioners use a procedure called stochastic gradient descent (SGD). This consists of showing the input vector for a few examples, computing the outputs and the errors, computing the average gradient for those examples, and adjusting the weights accordingly. The process is repeated for many small sets of examples from the training set until the average of the objective function stops decreasing. It is called stochastic because each small set of examples gives a noisy estimate of the average gradient over all examples. 
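The stochastic gradient descent procedure described in this passage can be written in a few lines; below is a hedged NumPy sketch of mini-batch SGD for a linear model with a squared-error objective, a toy stand-in for the multi-layer, multi-class systems the text discusses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = x . w_true + noise
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)            # adjustable weights ("knobs")
lr, batch_size = 0.1, 32

for epoch in range(20):
    perm = rng.permutation(n)
    for start in range(0, n, batch_size):
        idx = perm[start:start + batch_size]
        xb, yb = X[idx], y[idx]
        err = xb @ w - yb                      # output minus desired output
        grad = xb.T @ err / len(idx)           # average gradient of 0.5*err^2 over the mini-batch
        w -= lr * grad                         # step opposite to the gradient

print("weight error after training:", np.linalg.norm(w - w_true))
```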
This simple procedure usually finds a good set of weights surprisingly quickly when compared with far more elaborate optimization techniques18. After training, the performance of the system is measured on a different set of examples called a test set. This serves to test the generalization ability of the machine — its ability to produce sensible answers on new inputs that it has never seen during training. Many of the current practical applications of machine learning use linear classifiers on top of hand-engineered features. A two-class linear classifier computes a weighted sum of the feature vector components. If the weighted sum is above a threshold, the input is classified as belonging to a particular category. Since the 1960s we have known that linear classifiers can only carve their input space into very simple regions, namely half-spaces separated by a hyperplane19. But problems such as image and speech recognition require the input–output function to be insensitive to irrelevant variations of the input, such as variations in position, orientation or illumination of an object, or variations in the pitch or accent of speech, while being very sensitive to particular minute variations (for example, the difference between a white wolf and a breed of wolf-like white dog called a Samoyed). At the pixel level, images of two Samoyeds in different poses and in different environments may be very different from each other, whereas two images of a Samoyed and a wolf in the same position and on similar backgrounds may be very similar to each other. A linear classifier, or any other 'shallow' classifier operating on raw pixels could not possibly distinguish the latter two, while putting the former two in the same category. This is why shallow classifiers require a good feature extractor that solves the selectivity–invariance dilemma — one that produces representations that are selective to the aspects of the image that are important for discrimination, but that are invariant to irrelevant aspects such as the pose of the animal. To make classifiers more powerful, one can use generic non-linear features, as with kernel methods20, but generic features such as those arising with the Gaussian kernel do not allow the learner to generalize well far from the training examples21. The conventional option is to hand design good feature extractors, which requires a considerable amount of engineering skill and domain expertise. But this can all be avoided if good features can be learned automatically using a general-purpose learning procedure. This is the key advantage of deep learning. A deep-learning architecture is a multilayer stack of simple modules, all (or most) of which are subject to learning, and many of which compute non-linear input–output mappings. Each module in the stack transforms its input to increase both the selectivity and the invariance of the representation. With multiple non-linear layers, say a depth of 5 to 20, a system can implement extremely intricate functions of its inputs that are simultaneously sensitive to minute details — distinguishing Samoyeds from white wolves — and insensitive to large irrelevant variations such as the background, pose, lighting and surrounding objects. From the earliest days of pattern recognition22, 23, the aim of researchers has been to replace hand-engineered features with trainable multilayer networks, but despite its simplicity, the solution was not widely understood until the mid 1980s. 
As it turns out, multilayer architectures can be trained by simple stochastic gradient descent. As long as the modules are relatively smooth functions of their inputs and of their internal weights, one can compute gradients using the backpropagation procedure. The idea that this could be done, and that it worked, was discovered independently by several different groups during the 1970s and 1980s24, 25, 26, 27. The backpropagation procedure to compute the gradient of an objective function with respect to the weights of a multilayer stack of modules is nothing more than a practical application of the chain rule for derivatives. The key insight is that the derivative (or gradient) of the objective with respect to the input of a module can be computed by working backwards from the gradient with respect to the output of that module (or the input of the subsequent module) (Fig. 1). The backpropagation equation can be applied repeatedly to propagate gradients through all modules, starting from the output at the top (where the network produces its prediction) all the way to the bottom (where the external input is fed). Once these gradients have been computed, it is straightforward to compute the gradients with respect to the weights of each module. Many applications of deep learning use feedforward neural network architectures (Fig. 1), which learn to map a fixed-size input (for example, an image) to a fixed-size output (for example, a probability for each of several categories). To go from one layer to the next, a set of units compute a weighted sum of their inputs from the previous layer and pass the result through a non-linear function. At present, the most popular non-linear function is the rectified linear unit (ReLU), which is simply the half-wave rectifier f(z) = max(z, 0). In past decades, neural nets used smoother non-linearities, such as tanh(z) or 1/(1 + exp(−z)), but the ReLU typically learns much faster in networks with many layers, allowing training of a deep supervised network without unsupervised pre-training28. Units that are not in the input or output layer are conventionally called hidden units. The hidden layers can be seen as distorting the input in a non-linear way so that categories become linearly separable by the last layer (Fig. 1). In the late 1990s, neural nets and backpropagation were largely forsaken by the machine-learning community and ignored by the computer-vision and speech-recognition communities. It was widely thought that learning useful, multistage, feature extractors with little prior knowledge was infeasible. In particular, it was commonly thought that simple gradient descent would get trapped in poor local minima — weight configurations for which no small change would reduce the average error. In practice, poor local minima are rarely a problem with large networks. Regardless of the initial conditions, the system nearly always reaches solutions of very similar quality. Recent theoretical and empirical results strongly suggest that local minima are not a serious issue in general. Instead, the landscape is packed with a combinatorially large number of saddle points where the gradient is zero, and the surface curves up in most dimensions and curves down in the remainder29, 30. The analysis seems to show that saddle points with only a few downward curving directions are present in very large numbers, but almost all of them have very similar values of the objective function. 
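As a concrete companion to this description of backpropagation and ReLUs, here is a hedged NumPy sketch of a network with one ReLU hidden layer trained by applying the chain rule layer by layer; the tiny XOR-style task, layer sizes, and learning rate are illustrative choices, not anything taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, which a linear classifier cannot solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)     # hidden layer (ReLU units)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)     # output layer
lr = 0.1

for step in range(5000):
    # Forward pass
    z1 = X @ W1 + b1
    h = np.maximum(z1, 0.0)           # ReLU: f(z) = max(z, 0)
    out = h @ W2 + b2
    err = out - y                     # gradient of 0.5 * squared error w.r.t. out

    # Backward pass: chain rule, working from the output back toward the input
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
    dh = err @ W2.T                   # gradient w.r.t. hidden activations
    dz1 = dh * (z1 > 0)               # ReLU derivative gates the gradient
    dW1 = X.T @ dz1 / len(X); db1 = dz1.mean(axis=0)

    # Gradient step
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(np.round(out.ravel(), 2))       # should approach [0, 1, 1, 0]
```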
Hence, it does not much matter which of these saddle points the algorithm gets stuck at. Interest in deep feedforward networks was revived around 2006 (refs 31,32,33,34) by a group of researchers brought together by the Canadian Institute for Advanced Research (CIFAR). The researchers introduced unsupervised learning procedures that could create layers of feature detectors without requiring labelled data. The objective in learning each layer of feature detectors was to be able to reconstruct or model the activities of feature detectors (or raw inputs) in the layer below. By 'pre-training' several layers of progressively more complex feature detectors using this reconstruction objective, the weights of a deep network could be initialized to sensible values. A final layer of output units could then be added to the top of the network and the whole deep system could be fine-tuned using standard backpropagation33, 34, 35. This worked remarkably well for recognizing handwritten digits or for detecting pedestrians, especially when the amount of labelled data was very limited36. The first major application of this pre-training approach was in speech recognition, and it was made possible by the advent of fast graphics processing units (GPUs) that were convenient to program37 and allowed researchers to train networks 10 or 20 times faster. In 2009, the approach was used to map short temporal windows of coefficients extracted from a sound wave to a set of probabilities for the various fragments of speech that might be represented by the frame in the centre of the window. It achieved record-breaking results on a standard speech recognition benchmark that used a small vocabulary38 and was quickly developed to give record-breaking results on a large vocabulary task39. By 2012, versions of the deep net from 2009 were being developed by many of the major speech groups6 and were already being deployed in Android phones. For smaller data sets, unsupervised pre-training helps to prevent overfitting40, leading to significantly better generalization when the number of labelled examples is small, or in a transfer setting where we have lots of examples for some 'source' tasks but very few for some 'target' tasks. Once deep learning had been rehabilitated, it turned out that the pre-training stage was only needed for small data sets. There was, however, one particular type of deep, feedforward network that was much easier to train and generalized much better than networks with full connectivity between adjacent layers. This was the convolutional neural network (ConvNet)41, 42. It achieved many practical successes during the period when neural networks were out of favour and it has recently been widely adopted by the computer-vision community. ConvNets are designed to process data that come in the form of multiple arrays, for example a colour image composed of three 2D arrays containing pixel intensities in the three colour channels. Many data modalities are in the form of multiple arrays: 1D for signals and sequences, including language; 2D for images or audio spectrograms; and 3D for video or volumetric images. There are four key ideas behind ConvNets that take advantage of the properties of natural signals: local connections, shared weights, pooling and the use of many layers. The architecture of a typical ConvNet (Fig. 2) is structured as a series of stages. The first few stages are composed of two types of layers: convolutional layers and pooling layers. 
Units in a convolutional layer are organized in feature maps, within which each unit is connected to local patches in the feature maps of the previous layer through a set of weights called a filter bank. The result of this local weighted sum is then passed through a non-linearity such as a ReLU. All units in a feature map share the same filter bank. Different feature maps in a layer use different filter banks. The reason for this architecture is twofold. First, in array data such as images, local groups of values are often highly correlated, forming distinctive local motifs that are easily detected. Second, the local statistics of images and other signals are invariant to location. In other words, if a motif can appear in one part of the image, it could appear anywhere, hence the idea of units at different locations sharing the same weights and detecting the same pattern in different parts of the array. Mathematically, the filtering operation performed by a feature map is a discrete convolution, hence the name. Although the role of the convolutional layer is to detect local conjunctions of features from the previous layer, the role of the pooling layer is to merge semantically similar features into one. Because the relative positions of the features forming a motif can vary somewhat, reliably detecting the motif can be done by coarse-graining the position of each feature. A typical pooling unit computes the maximum of a local patch of units in one feature map (or in a few feature maps). Neighbouring pooling units take input from patches that are shifted by more than one row or column, thereby reducing the dimension of the representation and creating an invariance to small shifts and distortions. Two or three stages of convolution, non-linearity and pooling are stacked, followed by more convolutional and fully-connected layers. Backpropagating gradients through a ConvNet is as simple as through a regular deep network, allowing all the weights in all the filter banks to be trained. Deep neural networks exploit the property that many natural signals are compositional hierarchies, in which higher-level features are obtained by composing lower-level ones. In images, local combinations of edges form motifs, motifs assemble into parts, and parts form objects. Similar hierarchies exist in speech and text from sounds to phones, phonemes, syllables, words and sentences. The pooling allows representations to vary very little when elements in the previous layer vary in position and appearance. The convolutional and pooling layers in ConvNets are directly inspired by the classic notions of simple cells and complex cells in visual neuroscience43, and the overall architecture is reminiscent of the LGN–V1–V2–V4–IT hierarchy in the visual cortex ventral pathway44. When ConvNet models and monkeys are shown the same picture, the activations of high-level units in the ConvNet explain half of the variance of random sets of 160 neurons in the monkey's inferotemporal cortex45. ConvNets have their roots in the neocognitron46, the architecture of which was somewhat similar, but did not have an end-to-end supervised-learning algorithm such as backpropagation. A primitive 1D ConvNet called a time-delay neural net was used for the recognition of phonemes and simple words47, 48. There have been numerous applications of convolutional networks going back to the early 1990s, starting with time-delay neural networks for speech recognition47 and document reading42.
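The two layer types just described can be written down in a few lines of NumPy. The sketch below computes one feature map by sliding a single shared filter bank over a three-channel image (a discrete convolution followed by a ReLU) and then max-pools non-overlapping 2 × 2 patches; the image size, filter size and pooling width are illustrative assumptions, and a practical ConvNet would use many feature maps per layer.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(3, 16, 16))       # 3 colour channels, 16x16 pixels
filter_bank = rng.normal(size=(3, 3, 3))   # one shared 3x3 filter per channel

def conv_feature_map(x, f):
    """One feature map: the same filter bank applied at every location."""
    c, h, w = x.shape
    fc, fh, fw = f.shape
    out = np.zeros((h - fh + 1, w - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[:, i:i + fh, j:j + fw]   # local patch of the input
            out[i, j] = np.sum(patch * f)      # local weighted sum (shared weights)
    return np.maximum(out, 0.0)                # ReLU non-linearity

def max_pool(fmap, size=2):
    """Max pooling: coarse-grain the position of each detected feature."""
    h, w = fmap.shape
    h2, w2 = h // size, w // size
    pooled = fmap[:h2 * size, :w2 * size].reshape(h2, size, w2, size)
    return pooled.max(axis=(1, 3))

fmap = conv_feature_map(image, filter_bank)    # shape (14, 14)
pooled = max_pool(fmap)                        # shape (7, 7)
print(fmap.shape, pooled.shape)
```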
The document reading system used a ConvNet trained jointly with a probabilistic model that implemented language constraints. By the late 1990s this system was reading over 10% of all the cheques in the United States. A number of ConvNet-based optical character recognition and handwriting recognition systems were later deployed by Microsoft49. ConvNets were also experimented with in the early 1990s for object detection in natural images, including faces and hands50, 51, and for face recognition52. Since the early 2000s, ConvNets have been applied with great success to the detection, segmentation and recognition of objects and regions in images. These were all tasks in which labelled data was relatively abundant, such as traffic sign recognition53, the segmentation of biological images54 particularly for connectomics55, and the detection of faces, text, pedestrians and human bodies in natural images36, 50, 51, 56, 57, 58. A major recent practical success of ConvNets is face recognition59. Importantly, images can be labelled at the pixel level, which will have applications in technology, including autonomous mobile robots and self-driving cars60, 61. Companies such as Mobileye and NVIDIA are using such ConvNet-based methods in their upcoming vision systems for cars. Other applications gaining importance involve natural language understanding14 and speech recognition7. Despite these successes, ConvNets were largely forsaken by the mainstream computer-vision and machine-learning communities until the ImageNet competition in 2012. When deep convolutional networks were applied to a data set of about a million images from the web that contained 1,000 different classes, they achieved spectacular results, almost halving the error rates of the best competing approaches1. This success came from the efficient use of GPUs, ReLUs, a new regularization technique called dropout62, and techniques to generate more training examples by deforming the existing ones. This success has brought about a revolution in computer vision; ConvNets are now the dominant approach for almost all recognition and detection tasks4, 58, 59, 63, 64, 65 and approach human performance on some tasks. A recent stunning demonstration combines ConvNets and recurrent net modules for the generation of image captions (Fig. 3). Recent ConvNet architectures have 10 to 20 layers of ReLUs, hundreds of millions of weights, and billions of connections between units. Whereas training such large networks could have taken weeks only two years ago, progress in hardware, software and algorithm parallelization has reduced training times to a few hours. The performance of ConvNet-based vision systems has caused most major technology companies, including Google, Facebook, Microsoft, IBM, Yahoo!, Twitter and Adobe, as well as a quickly growing number of start-ups to initiate research and development projects and to deploy ConvNet-based image understanding products and services. ConvNets are easily amenable to efficient hardware implementations in chips or field-programmable gate arrays66, 67. A number of companies such as NVIDIA, Mobileye, Intel, Qualcomm and Samsung are developing ConvNet chips to enable real-time vision applications in smartphones, cameras, robots and self-driving cars. Deep-learning theory shows that deep nets have two different exponential advantages over classic learning algorithms that do not use distributed representations21.
Both of these advantages arise from the power of composition and depend on the underlying data-generating distribution having an appropriate componential structure40. First, learning distributed representations enables generalization to new combinations of the values of learned features beyond those seen during training (for example, 2^n combinations are possible with n binary features)68, 69. Second, composing layers of representation in a deep net brings the potential for another exponential advantage70 (exponential in the depth). The hidden layers of a multilayer neural network learn to represent the network's inputs in a way that makes it easy to predict the target outputs. This is nicely demonstrated by training a multilayer neural network to predict the next word in a sequence from a local context of earlier words71. Each word in the context is presented to the network as a one-of-N vector, that is, one component has a value of 1 and the rest are 0. In the first layer, each word creates a different pattern of activations, or word vectors (Fig. 4). In a language model, the other layers of the network learn to convert the input word vectors into an output word vector for the predicted next word, which can be used to predict the probability for any word in the vocabulary to appear as the next word. The network learns word vectors that contain many active components each of which can be interpreted as a separate feature of the word, as was first demonstrated27 in the context of learning distributed representations for symbols. These semantic features were not explicitly present in the input. They were discovered by the learning procedure as a good way of factorizing the structured relationships between the input and output symbols into multiple 'micro-rules'. Learning word vectors turned out to also work very well when the word sequences come from a large corpus of real text and the individual micro-rules are unreliable71. When trained to predict the next word in a news story, for example, the learned word vectors for Tuesday and Wednesday are very similar, as are the word vectors for Sweden and Norway. Such representations are called distributed representations because their elements (the features) are not mutually exclusive and their many configurations correspond to the variations seen in the observed data. These word vectors are composed of learned features that were not determined ahead of time by experts, but automatically discovered by the neural network. Vector representations of words learned from text are now very widely used in natural language applications14, 17, 72, 73, 74, 75, 76. The issue of representation lies at the heart of the debate between the logic-inspired and the neural-network-inspired paradigms for cognition. In the logic-inspired paradigm, an instance of a symbol is something for which the only property is that it is either identical or non-identical to other symbol instances. It has no internal structure that is relevant to its use; and to reason with symbols, they must be bound to the variables in judiciously chosen rules of inference. By contrast, neural networks just use big activity vectors, big weight matrices and scalar non-linearities to perform the type of fast 'intuitive' inference that underpins effortless commonsense reasoning.
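The paragraph above can be illustrated with a toy, untrained language model: a one-of-N vector multiplied by an embedding matrix simply selects that word's learned vector, the context vectors are combined, and a softmax over the vocabulary gives the probability of each candidate next word. Everything in the sketch (the tiny vocabulary, the vector sizes, the random weights) is an invented assumption for illustration; in a real model the matrices would be learned by backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat", "Tuesday", "Wednesday"]
V, d = len(vocab), 4                      # vocabulary size, word-vector size

E = rng.normal(scale=0.1, size=(V, d))          # embedding matrix: row i is word i's vector
W_out = rng.normal(scale=0.1, size=(2 * d, V))  # maps a 2-word context to scores over the vocabulary

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def next_word_probs(context):
    """Predict a distribution over the next word from a 2-word context."""
    # A one-of-N vector times E simply selects that word's learned vector.
    vectors = [one_hot(vocab.index(w), V) @ E for w in context]
    hidden = np.concatenate(vectors)          # concatenated context word vectors
    scores = hidden @ W_out                   # one score per vocabulary word
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()                    # softmax over the vocabulary

probs = next_word_probs(["the", "cat"])
print({w: round(float(p), 3) for w, p in zip(vocab, probs)})
```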
Before the introduction of neural language models71, the standard approach to statistical modelling of language did not exploit distributed representations: it was based on counting frequencies of occurrences of short symbol sequences of length up to N (called N-grams). The number of possible N-grams is on the order of V^N, where V is the vocabulary size, so taking into account a context of more than a handful of words would require very large training corpora. N-grams treat each word as an atomic unit, so they cannot generalize across semantically related sequences of words, whereas neural language models can because they associate each word with a vector of real valued features, and semantically related words end up close to each other in that vector space (Fig. 4). When backpropagation was first introduced, its most exciting use was for training recurrent neural networks (RNNs). For tasks that involve sequential inputs, such as speech and language, it is often better to use RNNs (Fig. 5). RNNs process an input sequence one element at a time, maintaining in their hidden units a 'state vector' that implicitly contains information about the history of all the past elements of the sequence. When we consider the outputs of the hidden units at different discrete time steps as if they were the outputs of different neurons in a deep multilayer network (Fig. 5, right), it becomes clear how we can apply backpropagation to train RNNs. RNNs are very powerful dynamic systems, but training them has proved to be problematic because the backpropagated gradients either grow or shrink at each time step, so over many time steps they typically explode or vanish77, 78. Thanks to advances in their architecture79, 80 and ways of training them81, 82, RNNs have been found to be very good at predicting the next character in the text83 or the next word in a sequence75, but they can also be used for more complex tasks. For example, after reading an English sentence one word at a time, an English 'encoder' network can be trained so that the final state vector of its hidden units is a good representation of the thought expressed by the sentence. This thought vector can then be used as the initial hidden state of (or as extra input to) a jointly trained French 'decoder' network, which outputs a probability distribution for the first word of the French translation. If a particular first word is chosen from this distribution and provided as input to the decoder network it will then output a probability distribution for the second word of the translation and so on until a full stop is chosen17, 72, 76. Overall, this process generates sequences of French words according to a probability distribution that depends on the English sentence. This rather naive way of performing machine translation has quickly become competitive with the state-of-the-art, and this raises serious doubts about whether understanding a sentence requires anything like the internal symbolic expressions that are manipulated by using inference rules. It is more compatible with the view that everyday reasoning involves many simultaneous analogies that each contribute plausibility to a conclusion84, 85. Instead of translating the meaning of a French sentence into an English sentence, one can learn to 'translate' the meaning of an image into an English sentence (Fig. 3). The encoder here is a deep ConvNet that converts the pixels into an activity vector in its last hidden layer.
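A minimal sketch of the recurrent state update described above follows: the same two weight matrices are reused at every time step, and the state vector carries a summary of everything seen so far. The sizes and random weights are assumptions and no training is performed; the final state plays the role of the 'thought vector' that an encoder-decoder translator would hand to its decoder.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 3, 5                                 # input and state-vector sizes (assumed)

W_xh = rng.normal(scale=0.3, size=(d_in, d_h))   # input-to-hidden weights
W_hh = rng.normal(scale=0.3, size=(d_h, d_h))    # hidden-to-hidden weights (shared across time)

def run_rnn(sequence):
    """Process a sequence one element at a time, carrying a state vector."""
    h = np.zeros(d_h)                            # state: summary of the history so far
    states = []
    for x_t in sequence:
        h = np.tanh(x_t @ W_xh + h @ W_hh)       # the same weights are reused at every step
        states.append(h)
    return states

sequence = rng.normal(size=(6, d_in))            # e.g. six word vectors of a sentence
states = run_rnn(sequence)

# In an encoder-decoder translator, the final state of the encoder RNN (the
# 'thought vector') would initialize the hidden state of a decoder RNN that
# emits the translation one word at a time.
thought_vector = states[-1]
print(thought_vector)
```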
The decoder is an RNN similar to the ones used for machine translation and neural language modelling. There has been a surge of interest in such systems recently (see examples mentioned in ref. 86). RNNs, once unfolded in time (Fig. 5), can be seen as very deep feedforward networks in which all the layers share the same weights. Although their main purpose is to learn long-term dependencies, theoretical and empirical evidence shows that it is difficult to learn to store information for very long78. To correct for that, one idea is to augment the network with an explicit memory. The first proposal of this kind is the long short-term memory (LSTM) networks that use special hidden units, the natural behaviour of which is to remember inputs for a long time79. A special unit called the memory cell acts like an accumulator or a gated leaky neuron: it has a connection to itself at the next time step that has a weight of one, so it copies its own real-valued state and accumulates the external signal, but this self-connection is multiplicatively gated by another unit that learns to decide when to clear the content of the memory. LSTM networks have subsequently proved to be more effective than conventional RNNs, especially when they have several layers for each time step87, enabling an entire speech recognition system that goes all the way from acoustics to the sequence of characters in the transcription. LSTM networks or related forms of gated units are also currently used for the encoder and decoder networks that perform so well at machine translation17, 72, 76. Over the past year, several authors have made different proposals to augment RNNs with a memory module. Proposals include the Neural Turing Machine in which the network is augmented by a 'tape-like' memory that the RNN can choose to read from or write to88, and memory networks, in which a regular network is augmented by a kind of associative memory89. Memory networks have yielded excellent performance on standard question-answering benchmarks. The memory is used to remember the story about which the network is later asked to answer questions. Beyond simple memorization, neural Turing machines and memory networks are being used for tasks that would normally require reasoning and symbol manipulation. Neural Turing machines can be taught 'algorithms'. Among other things, they can learn to output a sorted list of symbols when their input consists of an unsorted sequence in which each symbol is accompanied by a real value that indicates its priority in the list88. Memory networks can be trained to keep track of the state of the world in a setting similar to a text adventure game and after reading a story, they can answer questions that require complex inference90. In one test example, the network is shown a 15-sentence version of The Lord of the Rings and correctly answers questions such as “where is Frodo now?”89. Unsupervised learning91, 92, 93, 94, 95, 96, 97, 98 had a catalytic effect in reviving interest in deep learning, but has since been overshadowed by the successes of purely supervised learning. Although we have not focused on it in this Review, we expect unsupervised learning to become far more important in the longer term. Human and animal learning is largely unsupervised: we discover the structure of the world by observing it, not by being told the name of every object.
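The memory-cell mechanics described above can be written out explicitly. The sketch below implements a single LSTM cell step: the cell state is copied through its self-connection of weight one, the gated external signal is accumulated, a forget gate learns when to clear the memory, and an output gate controls what the cell reveals. All sizes and weights are illustrative assumptions and the cell is untrained.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 4, 3                           # sizes chosen for illustration

def gate(x, h, W, b):
    """Sigmoid gate computed from the current input and previous hidden state."""
    return 1.0 / (1.0 + np.exp(-(x @ W[0] + h @ W[1] + b)))

# One (input-to-hidden, hidden-to-hidden) weight pair and bias per gate.
params = {name: ([rng.normal(scale=0.3, size=(d_in, d_h)),
                  rng.normal(scale=0.3, size=(d_h, d_h))],
                 np.zeros(d_h))
          for name in ("forget", "input", "output")}
W_cand = (rng.normal(scale=0.3, size=(d_in, d_h)),
          rng.normal(scale=0.3, size=(d_h, d_h)))

def lstm_step(x, h, c):
    """One time step of a single LSTM memory cell."""
    f = gate(x, h, *params["forget"])      # learns when to clear the memory
    i = gate(x, h, *params["input"])       # gates the external signal
    o = gate(x, h, *params["output"])      # gates what the cell reveals
    cand = np.tanh(x @ W_cand[0] + h @ W_cand[1])
    c = f * c + i * cand                   # self-connection of weight one: copy and accumulate
    h = o * np.tanh(c)                     # new hidden output
    return h, c

h, c = np.zeros(d_h), np.zeros(d_h)
for x_t in rng.normal(size=(5, d_in)):     # run the cell over a short sequence
    h, c = lstm_step(x_t, h, c)
print(h, c)
```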
Human vision is an active process that sequentially samples the optic array in an intelligent, task-specific way using a small, high-resolution fovea with a large, low-resolution surround. We expect much of the future progress in vision to come from systems that are trained end-to-end and combine ConvNets with RNNs that use reinforcement learning to decide where to look. Systems combining deep learning and reinforcement learning are in their infancy, but they already outperform passive vision systems99 at classification tasks and produce impressive results in learning to play many different video games100. Natural language understanding is another area in which deep learning is poised to make a large impact over the next few years. We expect systems that use RNNs to understand sentences or whole documents will become much better when they learn strategies for selectively attending to one part at a time76, 86. Ultimately, major progress in artificial intelligence will come about through systems that combine representation learning with complex reasoning. Although deep learning and simple reasoning have been used for speech and handwriting recognition for a long time, new paradigms are needed to replace rule-based manipulation of symbolic expressions by operations on large vectors101. The authors would like to thank the Natural Sciences and Engineering Research Council of Canada, the Canadian Institute For Advanced Research (CIFAR), the National Science Foundation and Office of Naval Research for support. Y.L. and Y.B. are CIFAR fellows.
Untangling Blockchain: A Data Processing View of Blockchain Systems. Blockchain technologies have gained massive momentum over the last few years. Blockchains are distributed ledgers that enable parties who do not fully trust each other to maintain a set of global states. The parties agree on the existence, values, and histories of the states. As the technology landscape is expanding rapidly, it is both important and challenging to have a firm grasp of what the core ...
Distributed wireless communication system: a new architecture for future public wireless access. The distributed wireless communication system (DWCS) is a new architecture for a wireless access system with distributed antennas, distributed processors, and distributed control. With distributed antennas, the system capacity can be expanded through dense frequency reuse, and the transmission power can be greatly decreased. With distributed processors and control, the system works like a software or network radio, so different standards can coexist, and the system capacity can be increased by coprocessing of signals to and from multiple antennas.
Deep Learning Face Attributes in the Wild. Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art with a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, but without face bounding boxes or landmarks, which are required by all attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained with a sparse linear combination of these concepts.
SQLNet: Generating Structured Queries From Natural Language Without Reinforcement Learning. Synthesizing SQL queries from natural language is a long-standing open problem and has been attracting considerable interest recently. Toward solving the problem, the de facto approach is to employ a sequence-to-sequence-style model. Such an approach will necessarily require the SQL queries to be serialized. Since the same SQL query may have multiple equivalent serializations, training a sequence-to-sequence-style model is sensitive to the choice from one of them. This phenomenon is documented as the order-matters problem. Existing state-of-the-art approaches rely on reinforcement learning to reward the decoder when it generates any of the equivalent serializations. However, we observe that the improvement from reinforcement learning is limited. In this paper, we propose a novel approach, i.e., SQLNet, to fundamentally solve this problem by avoiding the sequence-to-sequence structure when the order does not matter. In particular, we employ a sketch-based approach where the sketch contains a dependency graph, so that one prediction can be done by taking into consideration only the previous predictions that it depends on. In addition, we propose a sequence-to-set model as well as the column attention mechanism to synthesize the query based on the sketch. By combining all these novel techniques, we show that SQLNet can outperform the prior art by 9% to 13% on the WikiSQL task.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.071467
0.073333
0.073333
0.073333
0.073333
0.073333
0.066667
0.04
0.001726
0
0
0
0
0
Distributed data mining based on deep neural network for wireless sensor network. As the sample data of wireless sensor network (WSN) has increased rapidly with more and more sensors, a centralized data mining solution in a fusion center has encountered the challenges of reducing the fusion center's calculating load and saving the WSN's transmitting power consumption. Rising to these challenges, this paper proposes a distributed data mining method based on deep neural network (DNN), by dividing the deep neural network into different layers and putting them into sensors. By the proposed solution, the distributed data mining calculating units in WSN share much of the fusion center's calculating burden. And the power consumption of transmitting the data processed by DNN is much less than transmitting the raw data. Also, a fault detection scenario is built to verify the validity of this method. Results show that the detection rate is 99%, and WSN shares 64.06% of the data mining calculating task with 58.31% reduction of power consumption.
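As a schematic illustration of the layer-splitting idea in this abstract, the sketch below places the first layers of a small network on the sensor node, so that only the (much smaller) intermediate feature vector is transmitted to the fusion centre, which applies the remaining layer. The layer sizes, weights and dimensions are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented layer sizes: raw sample -> sensor-side layers -> fusion-centre layer.
raw_dim, mid_dim, out_dim = 64, 16, 4
W_sensor = [rng.normal(scale=0.1, size=(raw_dim, 32)),
            rng.normal(scale=0.1, size=(32, mid_dim))]      # layers placed in the sensor
W_fusion = rng.normal(scale=0.1, size=(mid_dim, out_dim))   # layer kept at the fusion centre

def sensor_forward(raw_sample):
    """Part of the DNN runs on the sensor node itself."""
    h = raw_sample
    for W in W_sensor:
        h = np.maximum(h @ W, 0.0)
    return h                                # only this 16-value vector is transmitted

def fusion_forward(received):
    """The fusion centre finishes the computation on the compressed features."""
    return received @ W_fusion

raw = rng.normal(size=raw_dim)              # raw sensor reading
transmitted = sensor_forward(raw)
decision = fusion_forward(transmitted)
print(f"transmitted {transmitted.size} values instead of {raw.size} raw samples")
```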
High delivery rate position-based routing algorithms for 3D ad hoc networks Position-based routing algorithms use the geographic position of the nodes in a network to make the forwarding decisions. Recent research in this field primarily addresses such routing algorithms in two dimensional (2D) space. However, in real applications, nodes may be distributed in three dimensional (3D) environments. In this paper, we propose several randomized position-based routing algorithms and their combination with restricted directional flooding-based algorithms for routing in 3D environments. The first group of algorithms AB3D are extensions of previous randomized routing algorithms from 2D space to 3D space. The second group ABLAR chooses m neighbors according to a space-partition heuristic and forwards the message to all these nodes. The third group T-ABLAR-T uses progress-based routing until a local minimum is reached. The algorithm then switches to ABLAR for one step after which the algorithm switches back to the progress-based algorithm again. The fourth group AB3D-ABLAR uses an algorithm from the AB3D group until a threshold is passed in terms of number of hops. The algorithm then switches to an ABLAR algorithm. The algorithms are evaluated and compared with current routing algorithms. The simulation results on unit disk graphs (UDG) show a significant improvement in delivery rate (up to 99%) and a large reduction of the traffic.
Three-Dimensional Position-Based Adaptive Real-Time Routing Protocol for wireless sensor networks Devices for wireless sensor networks (WSN) are limited by power, and thus, routing protocols should be designed with this constraint in mind. WSNs are used in three-dimensional (3D) scenarios such as the surface of sea or lands with different levels of height. This paper presents and evaluates the Three-Dimensional Position-Based Adaptive Real-Time Routing Protocol (3DPBARP) as a novel, real-time, position-based and energy-efficient routing protocol for WSNs. 3DPBARP is a lightweight protocol that reduces the number of nodes which receive the radio frequency (RF) signal using a novel parent forwarding region (PFR) algorithm. 3DPBARP as a Geographical Routing Protocol (GRP) reduces the number of forwarding nodes and thus the traffic and packet collision in the network. A series of performance evaluations through MATLAB and Omnet++ simulations show significant improvements in network performance parameters and total energy consumption over the 3D Position-Based Routing Protocol (3DPBRP) and Directed Flooding Routing Protocol (DFRP).
Novel unequal clustering routing protocol considering energy balancing based on network partition & distance for mobile education. In Wireless Sensor Networks (WSN) for mobile education (such as mobile learning), in order to lower energy consumption, reduce the energy hole and prolong the network life cycle, we propose a novel unequal clustering routing protocol that considers energy balancing based on network partition & distance (UCNPD, Unequal Clustering based on Network Partition & Distance). In the design model of this protocol, all network node data reaches the base station (BS) through the nodes near the BS, so the nodes in this area consume more energy. We therefore define a ring-shaped area centred on the BS and partition the rest of the network based on the distance from each node to the BS. The nodes in this ring connect directly to the BS, while the other nodes follow an optimized clustering routing protocol that uses a timing mechanism to elect cluster heads, which reduces the energy consumed by cluster reconstruction. Furthermore, we build unequal clusters by setting different competition radii, which helps to balance the network's energy consumption. For route selection, we consider the energy of the cluster heads, their distances to the BS and the node degrees in order to reduce and balance energy consumption. Simulation results demonstrate that the protocol can efficiently slow down node death, prolong the network lifetime, and balance the energy dissipation of all nodes.
Energy Aware Cluster-Based Routing in Flying Ad-Hoc Networks. Flying ad-hoc networks (FANETs) are a very vibrant research area nowadays. They have many military and civil applications. Limited battery energy and the high mobility of micro unmanned aerial vehicles (UAVs) represent their two main problems, i.e., short flight time and inefficient routing. In this paper, we try to address both of these problems by means of efficient clustering. First, we adjust the transmission power of the UAVs by anticipating their operational requirements. An optimal transmission range will have a minimum packet loss ratio (PLR) and better link quality, which ultimately saves the energy consumed during communication. Second, we use a variant of the K-Means Density clustering algorithm for selection of cluster heads. Optimal cluster heads enhance the cluster lifetime and reduce the routing overhead. The proposed model outperforms state-of-the-art artificial intelligence techniques such as the Ant Colony Optimization-based clustering algorithm and the Grey Wolf Optimization-based clustering algorithm. The performance of the proposed algorithm is evaluated in terms of the number of clusters, cluster building time, cluster lifetime and energy consumption.
SecDL: QoS-Aware Secure Deep Learning Approach for Dynamic Cluster-Based Routing in WSN-Assisted IoT. In WSN-assisted IoT, energy efficiency and security, which play a pivotal role in Quality of Service (QoS), are still challenging due to its open and resource-constrained nature. Although many research works have been conducted on WSN-IoT, none of them is able to provide high-level security with energy efficiency. This paper resolves this problem by designing a novel Secure Deep Learning (SecDL) approach for dynamic cluster-based WSN-IoT networks. To improve energy efficiency, the network is designed as Bi-Concentric Hexagons along with Mobile Sink technology. Dynamic clusters are formed within the Bi-Hex network and optimal cluster heads are selected by a Quality Prediction Phenomenon (QP(2)) that ensures QoS and also energy efficiency. Data aggregation is enabled in each cluster and handled with a Two-way Data Elimination then Reduction scheme. A new One Time-PRESENT (OT-PRESENT) cryptography algorithm is designed to achieve high-level security for aggregated data. Then, the ciphertext is transmitted to the mobile sink through an optimal route to ensure high-level QoS. For optimal route selection, a novel Crossover based Fitted Deep Neural Network (Co-FitDNN) is presented. This work also concentrates on IoT-user security, since the sensory data can be accessed by IoT users. This work utilizes the concept of data mining to authenticate the IoT users. All IoT users are authenticated by an Apriori based Robust Multi-factor Validation algorithm which maps the ideal authentication feature set for each user. In this way, the proposed SecDL approach achieves security, QoS and energy efficiency. Finally, the network is modeled in ns-3.26 and the results show improvements in network lifetime, throughput, packet delivery ratio, delay and encryption time.
Geographic multipath routing based on geospatial division in duty-cycled underwater wireless sensor networks In Underwater Wireless Sensor Networks (UWSNs), the geographic routing is a preferred choice for data transmission due to the unique characteristics of underwater environment such as the three dimensional topology, the limited bandwidth and power resources. This paper focuses on underwater routing protocols in the network layer, where underwater sensor nodes can collaborate with each other to transfer data information. The three dimensional underwater network is first divided into small cube spaces, thus data packets are supposed to be collaboratively transmitted by unit of small cubes logically. By taking complex properties of underwater medium into consideration such as three dimensional topology, high propagation delay and path loss of acoustic channel, we propose two novel multi-path strategies called Greedy Geographic Forwarding based on Geospatial Division (GGFGD) and Geographic Forwarding based on Geospatial Division (GFGD). The proposed two algorithms mainly consist of two phases, choosing the next target small cube, and choosing the next hop node in the target small cube. Furthermore, all the sensor nodes in the network are duty-cycled in the MAC layer. Finally, performance analysis is derived, and simulation results illustrate the performance improvement in finding route paths, optimal length of found paths. In addition, energy consumption of route finding is reduced and propagation delay of data transmission is decreased.
Near-Optimal Velocity Control for Mobile Charging in Wireless Rechargeable Sensor Networks. Limited energy in each node is the major design constraint in wireless sensor networks (WSNs). To overcome this limit, wireless rechargeable sensor networks (WRSNs) have been proposed and studied extensively over the last few years. In a typical WRSN, batteries in sensor nodes can be replenished by a mobile charger that periodically travels along a certain trajectory in the sensing area. To maximize the charged energy in sensor nodes, one fundamental question is how to control the traveling velocity of the charger. In this paper, we first identify the optimal velocity control as a key design objective of mobile wireless charging in WRSNs. We then formulate the optimal charger velocity control problem on arbitrarily-shaped irregular trajectories in a 2D space. The problem is proved to be NP-hard, and hence a heuristic solution with a provable upper bound is developed using novel spatial and temporal discretization. We also derive the optimal velocity control for moving the charger along a linear (1D) trajectory commonly seen in many WSN applications. Extensive simulations show that the network lifetime can be extended by 2.5 times with the proposed velocity control mechanisms.
Mobile Data Gathering with Load Balanced Clustering and Dual Data Uploading in Wireless Sensor Networks In this paper, a three-layer framework is proposed for mobile data collection in wireless sensor networks, which includes the sensor layer, cluster head layer, and mobile collector (called SenCar) layer. The framework employs distributed load balanced clustering and dual data uploading, which is referred to as LBC-DDU. The objective is to achieve good scalability, long network lifetime and low data collection latency. At the sensor layer, a distributed load balanced clustering (LBC) algorithm is proposed for sensors to self-organize themselves into clusters. In contrast to existing clustering methods, our scheme generates multiple cluster heads in each cluster to balance the work load and facilitate dual data uploading. At the cluster head layer, the inter-cluster transmission range is carefully chosen to guarantee the connectivity among the clusters. Multiple cluster heads within a cluster cooperate with each other to perform energy-saving inter-cluster communications. Through inter-cluster transmissions, cluster head information is forwarded to SenCar for its moving trajectory planning. At the mobile collector layer, SenCar is equipped with two antennas, which enables two cluster heads to simultaneously upload data to SenCar in each time by utilizing multi-user multiple-input and multiple-output (MU-MIMO) technique. The trajectory planning for SenCar is optimized to fully utilize dual data uploading capability by properly selecting polling points in each cluster. By visiting each selected polling point, SenCar can efficiently gather data from cluster heads and transport the data to the static data sink. Extensive simulations are conducted to evaluate the effectiveness of the proposed LBC-DDU scheme. The results show that when each cluster has at most two cluster heads, LBC-DDU achieves over 50 percent energy saving per node and 60 percent energy saving on cluster heads comparing with data collection through multi-hop relay to the static data sink, and 20 percent shorter data collection time compared to traditional mobile data gathering.
QoE-Driven Edge Caching in Vehicle Networks Based on Deep Reinforcement Learning The Internet of vehicles (IoV) is a large information interaction network that collects information on vehicles, roads and pedestrians. One of the important uses of vehicle networks is to meet the entertainment needs of driving users through communication between vehicles and roadside units (RSUs). Due to the limited storage space of RSUs, determining the content cached in each RSU is a key challenge. With the development of 5G and video editing technology, short video systems have become increasingly popular. Current widely used cache update methods, such as partial file precaching and content popularity- and user interest-based determination, are inefficient for such systems. To solve this problem, this paper proposes a QoE-driven edge caching method for the IoV based on deep reinforcement learning. First, a class-based user interest model is established. Compared with the traditional file popularity- and user interest distribution-based cache update methods, the proposed method is more suitable for systems with a large number of small files. Second, a quality of experience (QoE)-driven RSU cache model is established based on the proposed class-based user interest model. Third, a deep reinforcement learning method is designed to address the QoE-driven RSU cache update issue effectively. The experimental results verify the effectiveness of the proposed algorithm.
Stochastic Geometry for Modeling, Analysis, and Design of Multi-Tier and Cognitive Cellular Wireless Networks: A Survey. For more than three decades, stochastic geometry has been used to model large-scale ad hoc wireless networks, and it has succeeded to develop tractable models to characterize and better understand the performance of these networks. Recently, stochastic geometry models have been shown to provide tractable yet accurate performance bounds for multi-tier and cognitive cellular wireless networks. Given...
Multi-Subaperture PGA for SAR Autofocusing For spotlight mode synthetic aperture radar (SAR) autofocusing, the traditional full-aperture phase gradient autofocus (PGA) algorithm might suffer from performance degradation in the presence of significant high-order phase error and residual range cell migration (RCM), which tend to occur when the coherent processing interval (CPI) is long. Meanwhile, PGA does not perform satisfactorily when applied directly on the stripmap data. To address these shortcomings, we present a multi-subaperture PGA algorithm, which takes advantage of the map drift (MD) technique. It smoothly incorporates the estimation of residual RCM and combines the subaperture phase error (SPE) estimated by PGA in a very precise manner. The methodology and accuracy of PGA-MD are investigated in detail. Experimental results indicate the effectiveness of PGA-MD in both the spotlight and the stripmap modes.
An Overview of Recent Advances in Event-Triggered Consensus of Multiagent Systems. Event-triggered consensus of multiagent systems (MASs) has attracted tremendous attention from both theoretical and practical perspectives due to the fact that it enables all agents eventually to reach an agreement upon a common quantity of interest while significantly alleviating utilization of communication and computation resources. This paper aims to provide an overview of recent advances in e...
Robust PCA for Subspace Estimation in User-Centric Cell-Free Wireless Networks We consider a scalable user-centric cell-free massive MIMO network with distributed remote radio units (RUs), enabling macrodiversity and joint processing. Due to the limited uplink (UL) pilot dimension, multiuser interference in the UL pilot transmission phase makes channel estimation a non-trivial problem. We make use of two types of UL pilot signals, sounding reference signal (SRS) and demodulation reference signal (DMRS) pilots, for the estimation of the channel subspace and its instantaneous realization, respectively. The SRS pilots are transmitted over multiple time slots and resource blocks according to a Latin squares based hopping scheme, which aims at averaging out the interference of different SRS co-pilot users. We propose a robust principal component analysis approach for channel subspace estimation from the SRS signal samples, employed at the RUs for each associated user. The estimated subspace is further used at the RUs for DMRS pilot decontamination and instantaneous channel estimation. We provide numerical simulations to compare the system performance using our subspace and channel estimation scheme with the cases of ideal partial subspace/channel knowledge and pilot matching channel estimation. The results show that a system with a properly designed SRS pilot hopping scheme can closely approximate the performance of a genie-aided system.
1.072
0.066667
0.066667
0.066667
0.066667
0.066667
0.033333
0.005145
0.001481
0
0
0
0
0
Demand Response for Residential Appliances via Customer Reward Scheme. This paper proposes a reward based demand response algorithm for residential customers to shave network peaks. Customer survey information is used to calculate various criteria indices reflecting their priority and flexibility. Criteria indices and sensitivity based house ranking is used for appropriate load selection in the feeder for demand response. Customer Rewards (CR) are paid based on load shift and voltage improvement due to load adjustment. The proposed algorithm can be deployed in residential distribution networks using a two-level hierarchical control scheme. Realistic residential load model consisting of non-controllable and controllable appliances is considered in this study. The effectiveness of the proposed demand response scheme on the annual load growth of the feeder is also investigated. Simulation results show that reduced peak demand, improved network voltage performance, and customer satisfaction can be achieved.
Local Load Redistribution Attacks in Power Systems With Incomplete Network Information Power grid is one of the most critical infrastructures in a nation and could suffer a variety of cyber attacks. Recent studies have shown that an attacker can inject pre-determined false data into smart meters such that it can pass the residue test of conventional state estimator. However, the calculation of the false data vector relies on the network (topology and parameter) information of the entire grid. In practice, it is impossible for an attacker to obtain all network information of a power grid. Unfortunately, this does not make power systems immune to false data injection attacks. In this paper, we propose a local load redistribution attacking model based on incomplete network information and show that an attacker only needs to obtain the network information of the local attacking region to inject false data into smart meters in the local region without being detected by the state estimator. Simulations on the modified IEEE 14-bus system demonstrate the correctness and effectiveness of the proposed model. The results of this paper reveal the mechanism of local false data injection attacks and highlight the importance and complexity of defending power systems against false data injection attacks.
Using Battery Storage for Peak Shaving and Frequency Regulation: Joint Optimization for Superlinear Gains We consider using a battery storage system simultaneously for peak shaving and frequency regulation through a joint optimization framework, which captures battery degradation, operational constraints, and uncertainties in customer load and regulation signals. Under this framework, using real data we show the electricity bill of users can be reduced by up to 12%. Furthermore, we demonstrate that th...
Blockchain and Computational Intelligence Inspired Incentive-Compatible Demand Response in Internet of Electric Vehicles. By leveraging the charging and discharging capabilities of Internet of electric vehicles (IoEV), demand response (DR) can be implemented in smart cities to enable intelligent energy scheduling and trading. However, IoEV-based DR confronts many challenges, such as a lack of incentive mechanism, privacy leakage, and security threats. This motivates us to develop a distributed, privacy-preserved, and...
Deep Reinforcement Learning-based Capacity Scheduling for PV-Battery Storage System Investor-owned photovoltaic-battery storage systems (PV-BSS) can gain revenue by providing stacked services, including PV charging and frequency regulation, and by performing energy arbitrage. Capacity scheduling (CS) is a crucial component of PV-BSS energy management, aiming to ensure the secure and economic operation of the PV-BSS. This article proposes a Proximal Policy Optimization (PPO)-based...
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
JPEG Error Analysis and Its Applications to Digital Image Forensics JPEG is one of the most extensively used image formats. Understanding the inherent characteristics of JPEG may play a useful role in digital image forensics. In this paper, we introduce JPEG error analysis to the study of image forensics. The main errors of JPEG include quantization, rounding, and truncation errors. Through theoretically analyzing the effects of these errors on single and double JPEG compression, we have developed three novel schemes for image forensics including identifying whether a bitmap image has previously been JPEG compressed, estimating the quantization steps of a JPEG image, and detecting the quantization table of a JPEG image. Extensive experimental results show that our new methods significantly outperform existing techniques especially for the images of small sizes. We also show that the new method can reliably detect JPEG image blocks which are as small as 8 × 8 pixels and compressed with quality factors as high as 98. This performance is important for analyzing and locating small tampered regions within a composite image.
Highly dynamic Destination-Sequenced Distance-Vector routing (DSDV) for mobile computers An ad-hoc network is the cooperative engagement of a collection of Mobile Hosts without the required intervention of any centralized Access Point. In this paper we present an innovative design for the operation of such ad-hoc networks. The basic idea of the design is to operate each Mobile Host as a specialized router, which periodically advertises its view of the interconnection topology with other Mobile Hosts within the network. This amounts to a new sort of routing protocol. We have investigated modifications to the basic Bellman-Ford routing mechanisms, as specified by RIP [5], to make it suitable for a dynamic and self-starting network mechanism as is required by users wishing to utilize ad hoc networks. Our modifications address some of the previous objections to the use of Bellman-Ford, related to the poor looping properties of such algorithms in the face of broken links and the resulting time dependent nature of the interconnection topology describing the links between the Mobile Hosts. Finally, we describe the ways in which the basic network-layer routing can be modified to provide MAC-layer support for ad-hoc networks.
The FERET Evaluation Methodology for Face-Recognition Algorithms Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems. The Face Recognition Technology (FERET) program has addressed both issues through the FERET database of facial images and the establishment of the FERET tests. To date, 14,126 images from 1,199 individuals are included in the FERET database, which is divided into development and sequestered portions of the database. In September 1996, the FERET program administered the third in a series of FERET face-recognition tests. The primary objectives of the third test were to 1) assess the state of the art, 2) identify future areas of research, and 3) measure algorithm performance.
Neural fitted q iteration – first experiences with a data efficient neural reinforcement learning method This paper introduces NFQ, an algorithm for efficient and effective training of a Q-value function represented by a multi-layer perceptron. Based on the principle of storing and reusing transition experiences, a model-free, neural network based Reinforcement Learning algorithm is proposed. The method is evaluated on three benchmark problems. It is shown empirically, that reasonably few interactions with the plant are needed to generate control policies of high quality.
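In the spirit of the approach summarized above (though not the paper's exact training procedure), the sketch below stores a batch of transition experiences once and then repeatedly re-fits a small neural Q-function on Bellman targets computed from that stored set, here using scikit-learn's MLPRegressor; the toy task, network size and hyper-parameters are all invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy 1-D task (invented): the state lives in [0, 1], two actions move it
# left or right, and reward 1 is given when the state ends up near the centre.
def step(s, a):
    s_next = np.clip(s + (0.1 if a == 1 else -0.1), 0.0, 1.0)
    reward = 1.0 if abs(s_next - 0.5) < 0.1 else 0.0
    return s_next, reward

# 1) Collect and store transition experiences (s, a, r, s') once.
transitions = []
s = rng.random()
for _ in range(500):
    a = int(rng.integers(2))
    s_next, r = step(s, a)
    transitions.append((s, a, r, s_next))
    s = s_next if rng.random() > 0.05 else rng.random()

S = np.array([[t[0], t[1]] for t in transitions])       # inputs: (state, action)
R = np.array([t[2] for t in transitions])
S_next = np.array([t[3] for t in transitions])

# 2) Fitted Q iteration: repeatedly re-fit the Q network on the stored set.
q_net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
gamma = 0.95
targets = R.copy()                                      # first sweep: reward only
for sweep in range(10):
    q_net.fit(S, targets)
    q_next = np.column_stack(
        [q_net.predict(np.column_stack([S_next, np.full(len(S_next), a)]))
         for a in (0, 1)])
    targets = R + gamma * q_next.max(axis=1)            # Bellman targets from stored data

print("Q(s=0.4, a=right) ~", float(q_net.predict([[0.4, 1]])[0]))
```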
Labels and event processes in the Asbestos operating system Asbestos, a new operating system, provides novel labeling and isolation mechanisms that help contain the effects of exploitable software flaws. Applications can express a wide range of policies with Asbestos's kernel-enforced labels, including controls on interprocess communication and system-wide information flow. A new event process abstraction defines lightweight, isolated contexts within a single process, allowing one process to act on behalf of multiple users while preventing it from leaking any single user's data to others. A Web server demonstration application uses these primitives to isolate private user data. Since the untrusted workers that respond to client requests are constrained by labels, exploited workers cannot directly expose user data except as allowed by application policy. The server application requires 1.4 memory pages per user for up to 145,000 users and achieves connection rates similar to Apache, demonstrating that additional security can come at an acceptable cost.
Switching Stabilization for a Class of Slowly Switched Systems In this technical note, the problem of switching stabilization for slowly switched linear systems is investigated. In particular, the considered systems can be composed of all unstable subsystems. Based on the invariant subspace theory, the switching signal with mode-dependent average dwell time (MDADT) property is designed to exponentially stabilize the underlying system. Furthermore, sufficient condition of stabilization for switched systems with all stable subsystems under MDADT switching is also given. The correctness and effectiveness of the proposed approaches are illustrated by a numerical example.
An evolutionary programming approach for securing medical images using watermarking scheme in invariant discrete wavelet transformation. • The proposed watermarking scheme utilized improved discrete wavelet transformation (IDWT) to retrieve the invariant wavelet domain. • The entropy mechanism is used to identify the suitable region for insertion of watermark. This will improve the imperceptibility and robustness of the watermarking procedure. • The scaling factors such as PSNR and NC are considered for evaluation of the proposed method and the Particle Swarm Optimization is employed to optimize the scaling factors.
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education is highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets, to compensate for robot’s limitations, however, this study applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. Robot’s behavioral style (social or neutral) hardly differed overall, however, seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
1.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
0