Dataset schema (column name, dtype, observed range):
Query Text: string, length 10 to 40.4k
Ranking 1: string, length 12 to 40.4k
Ranking 2: string, length 12 to 36.2k
Ranking 3: string, length 10 to 36.2k
Ranking 4: string, length 13 to 40.4k
Ranking 5: string, length 12 to 36.2k
Ranking 6: string, length 13 to 36.2k
Ranking 7: string, length 10 to 40.4k
Ranking 8: string, length 12 to 36.2k
Ranking 9: string, length 12 to 36.2k
Ranking 10: string, length 12 to 36.2k
Ranking 11: string, length 20 to 6.21k
Ranking 12: string, length 14 to 8.24k
Ranking 13: string, length 28 to 4.03k
score_0: float64, 1 to 1.25
score_1: float64, 0 to 0.25
score_2: float64, 0 to 0.25
score_3: float64, 0 to 0.25
score_4: float64, 0 to 0.25
score_5: float64, 0 to 0.25
score_6: float64, 0 to 0.25
score_7: float64, 0 to 0.24
score_8: float64, 0 to 0.2
score_9: float64, 0 to 0.03
score_10: float64, 0 to 0
score_11: float64, 0 to 0
score_12: float64, 0 to 0
score_13: float64, 0 to 0
Person re-identification by descriptive and discriminative classification Person re-identification, i.e., recognizing a single person across spatially disjoint cameras, is an important task in visual surveillance. Existing approaches either try to find a suitable description of the appearance or learn a discriminative model. Since these different representational strategies capture complementary information to a large extent, we propose to combine both approaches. First, given a specific query, we rank all samples according to a feature-based similarity, where appearance is modeled by a set of region covariance descriptors. Next, a discriminative model is learned using boosting for feature selection, which provides a more specific classifier. The proposed approach is demonstrated on two datasets, where we show that the combination of a generic descriptive statistical model and a discriminatively learned feature-based model attains considerably better results than the individual models alone. In addition, we give a comparison to the state-of-the-art on a publicly available benchmark dataset.
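The query abstract above models appearance with region covariance descriptors. As a rough illustration of that representation (not the paper's implementation), the sketch below computes a covariance descriptor over per-pixel features with NumPy; the function name and the choice of five features per pixel are illustrative assumptions.

```python
import numpy as np

def region_covariance(region):
    """Covariance descriptor of an image region.

    region: H x W x D array of per-pixel features
    (e.g. x, y, intensity, gradient magnitude, orientation).
    Returns the D x D covariance matrix of those features.
    """
    h, w, d = region.shape
    feats = region.reshape(-1, d)              # one D-dim feature per pixel
    mean = feats.mean(axis=0)
    centered = feats - mean
    return centered.T @ centered / (feats.shape[0] - 1)

# Toy usage: a random 16x16 region with 5 features per pixel.
rng = np.random.default_rng(0)
patch = rng.normal(size=(16, 16, 5))
print(region_covariance(patch).shape)   # (5, 5)
```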
Patch-Based Discriminative Feature Learning For Unsupervised Person Re-Identification While discriminative local features have been shown effective in solving the person re-identification problem, they are limited to being trained on fully pairwise-labelled data, which is expensive to obtain. In this work, we overcome this problem by proposing a patch-based unsupervised learning framework in order to learn discriminative features from patches instead of whole images. The patch-based learning leverages similarity between patches to learn a discriminative model. Specifically, we develop a PatchNet to select patches from the feature map and learn discriminative features for these patches. To provide effective guidance for the PatchNet to learn discriminative patch features on unlabeled datasets, we propose an unsupervised patch-based discriminative feature learning loss. In addition, we design an image-level feature learning loss to leverage all the patch features of the same image to serve as image-level guidance for the PatchNet. Extensive experiments validate the superiority of our method for unsupervised person re-id. Our code is available at https://github.com/QizeYang/PAUL.
Omni-Scale Feature Learning For Person Re-Identification As an instance-level recognition problem, person re-identification (ReID) relies on discriminative features, which not only capture different spatial scales but also encapsulate an arbitrary combination of multiple scales. We call features of both homogeneous and heterogeneous scales omni-scale features. In this paper, a novel deep ReID CNN is designed, termed Omni-Scale Network (OSNet), for omni-scale feature learning. This is achieved by designing a residual block composed of multiple convolutional feature streams, each detecting features at a certain scale. Importantly, a novel unified aggregation gate is introduced to dynamically fuse multi-scale features with input-dependent channel-wise weights. To efficiently learn spatial-channel correlations and avoid overfitting, the building block uses both pointwise and depthwise convolutions. By stacking such blocks layer-by-layer, our OSNet is extremely lightweight and can be trained from scratch on existing ReID benchmarks. Despite its small model size, our OSNet achieves state-of-the-art performance on six person ReID datasets. Code and models are available at: https://github.com/KaiyangZhou/deep-person-reid.
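For orientation, here is a minimal PyTorch sketch of the ideas named in the OSNet abstract: parallel streams of factorized (pointwise + depthwise) convolutions with different receptive fields, fused inside a residual block by a shared, input-dependent channel-wise gate. It is a simplification for illustration, not the released OSNet; class names and the number of streams are assumptions.

```python
import torch
import torch.nn as nn

class LiteConv(nn.Module):
    """1x1 pointwise conv followed by 3x3 depthwise conv, keeping blocks light."""
    def __init__(self, channels):
        super().__init__()
        self.pw = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.dw = nn.Conv2d(channels, channels, kernel_size=3, padding=1,
                            groups=channels, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.dw(self.pw(x))))

class OmniScaleBlock(nn.Module):
    """Residual block with parallel streams of growing receptive field,
    fused by one shared, input-dependent channel-wise gate (rough sketch)."""
    def __init__(self, channels, num_streams=4):
        super().__init__()
        # Stream t stacks t LiteConv layers -> receptive fields 3, 5, 7, 9.
        self.streams = nn.ModuleList([
            nn.Sequential(*[LiteConv(channels) for _ in range(t)])
            for t in range(1, num_streams + 1)
        ])
        # Unified aggregation gate shared by all streams.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        fused = 0
        for stream in self.streams:
            y = stream(x)
            fused = fused + self.gate(y) * y   # channel weights broadcast over H, W
        return torch.relu(x + fused)

block = OmniScaleBlock(channels=64)
print(block(torch.randn(2, 64, 32, 32)).shape)   # torch.Size([2, 64, 32, 32])
```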
End-to-End Deep Learning for Person Search. Existing person re-identification (re-id) benchmarks and algorithms mainly focus on matching cropped pedestrian images between queries and candidates. However, this differs from real-world scenarios, where annotations of pedestrian bounding boxes are unavailable and the target person needs to be found from whole images. To close the gap, we investigate how to localize and match query persons from scene images without relying on the annotations of candidate boxes. Instead of breaking the problem down into two separate tasks, pedestrian detection and person re-id, we propose an end-to-end deep learning framework to jointly handle both. A random sampling softmax loss is proposed to effectively train the model under the supervision of sparse and unbalanced labels. On the other hand, existing benchmarks are small in scale and the samples are collected from a few fixed camera views with low scene diversity. To address this issue, we collect a large-scale and scene-diversified person search dataset, which contains 18,184 images, 8,432 persons, and 99,809 annotated bounding boxes. We evaluate our approach and other baselines on the proposed dataset, and study the influence of various factors. Experiments show that our method achieves the best result.
Locally Aligned Feature Transforms across Views In this paper, we propose a new approach for matching images observed in different camera views with complex cross-view transforms and apply it to person re-identification. It jointly partitions the image spaces of two camera views into different configurations according to the similarity of cross-view transforms. The visual features of an image pair from different views are first locally aligned by being projected to a common feature space and then matched with softly assigned metrics which are locally optimized. The features optimal for recognizing identities are different from those for clustering cross-view transforms. They are jointly learned by utilizing sparsity-inducing norm and information theoretical regularization. This approach can be generalized to the settings where test images are from new camera views, not the same as those in the training set. Extensive experiments are conducted on public datasets and our own dataset. Comparisons with the state-of-the-art metric learning and person re-identification methods show the superior performance of our approach.
FD-GAN: Pose-guided Feature Distilling GAN for Robust Person Re-identification. Person re-identification (reID) is an important task that requires retrieving a person's images from an image dataset, given one image of the person of interest. For learning robust person features, the pose variation of person images is one of the key challenges. Existing works targeting the problem either perform human alignment or learn human-region-based representations; extra pose information and computational cost are generally required for inference. To solve this issue, a Feature Distilling Generative Adversarial Network (FD-GAN) is proposed for learning identity-related and pose-unrelated representations. It is a novel framework based on a Siamese structure with multiple novel discriminators on human poses and identities. In addition to the discriminators, a novel same-pose loss is also integrated, which requires the appearance of a same person's generated images to be similar. After learning pose-unrelated person features with pose guidance, no auxiliary pose information or additional computational cost is required during testing. Our proposed FD-GAN achieves state-of-the-art performance on three person reID datasets, which demonstrates the effectiveness and robust feature-distilling capability of the proposed FD-GAN.
Instance Normalization: The Missing Ingredient for Fast Stylization. In this paper we revisit the fast stylization method introduced in Ulyanov et al. (2016). We show how a small change in the stylization architecture results in a significant qualitative improvement in the generated images. The change is limited to swapping batch normalization with instance normalization, and applying the latter at both training and testing time. The resulting method can be used to train high-performance architectures for real-time image generation. The code is made available on GitHub at this https URL. The full paper can be found at arXiv:1701.02096.
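The change the abstract describes is small enough to show directly. A minimal sketch, assuming a PyTorch implementation: the only difference between the two variants of this hypothetical conv block is whether nn.BatchNorm2d or nn.InstanceNorm2d is constructed, with instance normalization active at both training and test time.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, norm="instance"):
    """Conv + normalization + ReLU; the only change from a BatchNorm-based
    stylization net is which normalization layer is constructed."""
    Norm = nn.InstanceNorm2d if norm == "instance" else nn.BatchNorm2d
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        Norm(out_ch, affine=True),
        nn.ReLU(inplace=True),
    )

# Instance norm normalizes each sample/channel on its own, so it behaves the
# same at train and test time, which is how the paper applies it.
x = torch.randn(4, 3, 64, 64)
print(conv_block(3, 32)(x).shape)   # torch.Size([4, 32, 64, 64])
```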
A Survey on Transfer Learning A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.
Sequence to Sequence Learning with Neural Networks. Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.
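As a rough sketch of the encoder-decoder setup described above (not the paper's WMT system), the following PyTorch skeleton maps a source sequence to a fixed-size LSTM state and decodes target logits from it, including the source-reversal trick; vocabulary sizes, dimensions and the class name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Encoder LSTM -> fixed-size state -> decoder LSTM over the target vocab."""
    def __init__(self, src_vocab, tgt_vocab, emb=256, hidden=512, layers=2):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, layers, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, layers, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt):
        # The trick from the paper: feed the source sequence reversed.
        _, state = self.encoder(self.src_emb(src.flip(dims=[1])))
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)
        return self.out(dec_out)             # logits for each target position

model = Seq2Seq(src_vocab=10_000, tgt_vocab=10_000)
src = torch.randint(0, 10_000, (8, 15))      # batch of source token ids
tgt = torch.randint(0, 10_000, (8, 12))      # teacher-forced target ids
print(model(src, tgt).shape)                 # torch.Size([8, 12, 10000])
```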
GameFlow: a model for evaluating player enjoyment in games Although player enjoyment is central to computer games, there is currently no accepted model of player enjoyment in games. There are many heuristics in the literature, based on elements such as the game interface, mechanics, gameplay, and narrative. However, there is a need to integrate these heuristics into a validated model that can be used to design, evaluate, and understand enjoyment in games. We have drawn together the various heuristics into a concise model of enjoyment in games that is structured by flow. Flow, a widely accepted model of enjoyment, includes eight elements that, we found, encompass the various heuristics from the literature. Our new model, GameFlow, consists of eight elements -- concentration, challenge, skills, control, clear goals, feedback, immersion, and social interaction. Each element includes a set of criteria for achieving enjoyment in games. An initial investigation and validation of the GameFlow model was carried out by conducting expert reviews of two real-time strategy games, one high-rating and one low-rating, using the GameFlow criteria. The result was a deeper understanding of enjoyment in real-time strategy games and the identification of the strengths and weaknesses of the GameFlow model as an evaluation tool. The GameFlow criteria were able to successfully distinguish between the high-rated and low-rated games and identify why one succeeded and the other failed. We concluded that the GameFlow model can be used in its current form to review games; further work will provide tools for designing and evaluating enjoyment in games.
Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots This study emphasizes the need for standardized measurement tools for human robot interaction (HRI). If we are to make progress in this field then we must be able to compare the results from different studies. A literature review has been performed on the measurements of five key concepts in HRI: anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety. The results have been distilled into five consistent questionnaires using semantic differential scales. We report reliability and validity indicators based on several empirical studies that used these questionnaires. It is our hope that these questionnaires can be used by robot developers to monitor their progress. Psychologists are invited to further develop the questionnaires by adding new concepts, and to conduct further validations where it appears necessary.
Trajectory control of biomimetic robots for demonstrating human arm movements This study describes the trajectory control of biomimetic robots by developing human arm trajectory planning. First, the minimum jerk trajectory of the joint angles is produced analytically, and the trajectory of the elbow joint angle is modified by a time-adjustment of the joint motion of the elbow relative to the shoulder. Next, experiments were conducted in which gyro sensors were utilized, and the trajectories observed were compared with those which had been produced. The results showed that the proposed trajectory control is an advantageous scheme for demonstrating human arm movements.
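The joint-angle profile mentioned in this abstract can be illustrated with the standard fifth-order minimum-jerk polynomial; the sketch below is a generic NumPy rendering under that assumption, not the authors' trajectory generator, and the function name and example angles are arbitrary.

```python
import numpy as np

def minimum_jerk(theta0, thetaf, duration, num_points=100):
    """Minimum-jerk joint-angle profile theta(t) between two postures.

    Uses the standard fifth-order polynomial
    theta(t) = theta0 + (thetaf - theta0) * (10 s^3 - 15 s^4 + 6 s^5),  s = t/T,
    which has zero velocity and acceleration at both ends.
    """
    t = np.linspace(0.0, duration, num_points)
    s = t / duration
    shape = 10 * s**3 - 15 * s**4 + 6 * s**5
    return t, theta0 + (thetaf - theta0) * shape

# Elbow sweep from 20 deg to 90 deg over 1.2 s.
t, theta = minimum_jerk(20.0, 90.0, 1.2)
print(theta[0], theta[-1])   # 20.0 90.0 (boundary conditions satisfied)
```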
Large-Scale Hierarchical Text Classification with Recursively Regularized Deep Graph-CNN. Text classification to a hierarchical taxonomy of topics is a common and practical problem. Traditional approaches simply use bag-of-words and have achieved good results. However, when there are a lot of labels with different topical granularities, a bag-of-words representation may not be enough. Deep learning models have been proven effective at automatically learning different levels of representations for image data. It is interesting to study what is the best way to represent texts. In this paper, we propose a graph-CNN based deep learning model to first convert texts to graph-of-words, and then use graph convolution operations to convolve the word graph. The graph-of-words representation of texts has the advantage of capturing non-consecutive and long-distance semantics. CNN models have the advantage of learning different levels of semantics. To further leverage the hierarchy of labels, we regularize the deep architecture with the dependency among labels. Our results on both RCV1 and NYTimes datasets show that we can significantly improve large-scale hierarchical text classification over traditional hierarchical text classification and existing deep models.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores (score_0 to score_13): 1.026959, 0.027624, 0.027624, 0.026662, 0.02349, 0.013767, 0.006597, 0.000183, 0.000005, 0, 0, 0, 0, 0
Kalman Prediction-Based Neighbor Discovery and Its Effect on Routing Protocol in Vehicular Ad Hoc Networks Efficient neighbor discovery in vehicular ad hoc networks is crucial to a number of applications such as driving safety and data transmission. The main challenge is the high mobility of vehicles. In this paper, we propose a new algorithm for quickly discovering neighbor nodes in such a dynamic environment. The proposed rapid discovery algorithm is based on a novel mobility prediction model using Kalman filter theory, where each vehicular node has a prediction model to predict its own and its neighbors’ mobility. This is achieved by considering the nodes’ temporal and spatial movement features. The prediction algorithm is reinforced with threshold-triggered location broadcast messages, which update the prediction model parameters and improve the efficiency of the neighbor discovery algorithm. Through extensive simulations, the accuracy, robustness, and efficiency of our proposed algorithm are demonstrated. Compared with other neighbor discovery methods, which are frequently used in HP-AODV, ARH, and ROMSG, the proposed algorithm needs the least overhead and reaches the lowest neighbor error rate while improving the accuracy rate of neighbor discovery. In general, a comparative analysis of different neighbor discovery methods in routing protocols is provided, which shows that the proposed solution performs better than HP-AODV, ARH, and ROMSG.
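To make the prediction idea concrete, here is a minimal NumPy sketch of a constant-velocity Kalman filter for one coordinate, with a predict step used between beacons and an update step when a location broadcast arrives. The model matrices, noise values and one-dimensional setup are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Constant-velocity model for one coordinate: state = [position, velocity].
dt = 1.0                                    # assumed beacon interval in seconds
F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition
H = np.array([[1.0, 0.0]])                  # we only observe position
Q = 0.01 * np.eye(2)                        # process noise (tuning assumption)
R = np.array([[1.0]])                       # measurement noise (assumption)

def predict(x, P):
    """Between broadcasts: extrapolate the neighbour's state."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """When a (threshold-triggered) location broadcast arrives: correct."""
    y = z - H @ x                            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

x, P = np.array([0.0, 15.0]), np.eye(2)      # start: 0 m, 15 m/s
for _ in range(5):
    x, P = predict(x, P)                     # coast for 5 beacon intervals
x, P = update(x, P, np.array([76.0]))        # a new position report arrives
print(x)
```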
A Survey of Ant Colony Optimization Based Routing Protocols for Mobile Ad Hoc Networks. Developing highly efficient routing protocols for Mobile Ad hoc NETworks (MANETs) is a challenging task. In order to fulfill multiple routing requirements, such as low packet delay, high packet delivery rate, and effective adaptation to network topology changes with low control overhead, and so on, new ways to approximate solutions to the known NP-hard optimization problem of routing in MANETs have to be investigated. Swarm intelligence (SI)-inspired algorithms have attracted a lot of attention, because they can offer possible optimized solutions ensuring high robustness, flexibility, and low cost. Moreover, they can solve large-scale sophisticated problems without a centralized control entity. A successful example in the SI field is the ant colony optimization (ACO) meta-heuristic. It presents a common framework for approximating solutions to NP-hard optimization problems. ACO has been successfully applied to balance the various routing related requirements in dynamic MANETs. This paper presents a comprehensive survey and comparison of various ACO-based routing protocols in MANETs. The main contributions of this survey include: 1) introducing the ACO principles as applied in routing protocols for MANETs; 2) classifying ACO-based routing approaches reviewed in this paper into five main categories; 3) surveying and comparing the selected routing protocols from the perspective of design and simulation parameters; and 4) discussing open issues and future possible design directions of ACO-based routing protocols.
A Microbial Inspired Routing Protocol for VANETs. We present a bio-inspired unicast routing protocol for vehicular ad hoc networks which uses the cellular attractor selection mechanism to select next hops. The proposed unicast routing protocol based on attractor selecting (URAS) is an opportunistic routing protocol, which is able to change itself adaptively to the complex and dynamic environment by routing feedback packets. We further employ a mu...
Improvement of GPSR Protocol in Vehicular Ad Hoc Network. In a vehicular ad hoc network (VANET), vehicles always move at high speed, which may cause the network topology to change frequently. This is challenging for VANET routing protocols. Greedy Perimeter Stateless Routing (GPSR) is a representative routing protocol for VANETs. However, when constructing a routing path, GPSR's greedy forwarding can select a next-hop node that easily moves out of communication range, and its perimeter forwarding builds paths with redundancy. To solve these problems, we propose the Maxduration-Minangle GPSR (MM-GPSR) routing protocol in this paper. In the greedy forwarding of MM-GPSR, cumulative communication duration is defined to represent the stability of neighbor nodes, and the neighbor node with the maximum cumulative communication duration is selected as the next hop. In the perimeter forwarding of MM-GPSR, used when greedy forwarding fails, the concept of minimum angle is introduced as the criterion for the optimal next hop: by taking the positions of neighbor nodes into account and calculating the angles formed between neighbors and the destination node, the neighbor node with the minimum angle is selected as the next hop. Simulations using NS-2 and VanetMobiSim demonstrate that, compared with GPSR, MM-GPSR achieves obvious improvements in reducing the packet loss rate, decreasing the end-to-end delay, and increasing the throughput, and is more suitable for VANETs.
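A tiny Python sketch of the two MM-GPSR selection rules as described above: greedy forwarding picks the neighbour with the maximum cumulative communication duration, and the perimeter fallback picks the neighbour whose bearing forms the minimum angle with the direction to the destination. Field names, data layout and the angle computation are illustrative assumptions, not the protocol implementation.

```python
import math

def greedy_next_hop(neighbours):
    """Greedy forwarding: pick the neighbour we have heard from the longest
    (maximum cumulative communication duration), i.e. the most stable link."""
    return max(neighbours, key=lambda n: n["duration"], default=None)

def perimeter_next_hop(current, destination, neighbours):
    """Fallback when greedy forwarding fails: pick the neighbour whose bearing
    forms the minimum angle with the direction towards the destination."""
    def angle_to_dest(n):
        a = math.atan2(destination[1] - current[1], destination[0] - current[0])
        b = math.atan2(n["pos"][1] - current[1], n["pos"][0] - current[0])
        return abs(math.remainder(a - b, 2 * math.pi))
    return min(neighbours, key=angle_to_dest, default=None)

neighbours = [
    {"id": "v1", "pos": (120, 40), "duration": 8.5},
    {"id": "v2", "pos": (90, 80),  "duration": 3.2},
]
print(greedy_next_hop(neighbours)["id"])                          # v1
print(perimeter_next_hop((100, 50), (200, 60), neighbours)["id"]) # v1
```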
Secure Real-Time Traffic Data Aggregation With Batch Verification for Vehicular Cloud in VANETs The vehicular cloud provides many significant advantages to Vehicular Ad hoc Networks (VANETs), such as unlimited storage space, powerful computing capability and timely traffic services. Traffic data aggregation in the vehicular cloud, which aggregates traffic data from vehicles for further processing and sharing, is very important. Incorrect traffic data feedback may affect traffic safety; therefore, the security of traffic data aggregation should be ensured. In this paper, by using the data recovery property of the message recovery signature (MRS), we propose a secure real-time traffic data aggregation scheme for the vehicular cloud in VANETs. In the proposed scheme, the validity of vehicles’ signatures is verified, and then the original traffic data is recovered from the signatures. Moreover, the proposed scheme supports batch verification of multiple vehicles’ signatures. Due to the advantages of the MRS, security features such as data confidentiality, privacy preservation and replay attack resistance are preserved. In addition, the comparison and simulation results indicate that the proposed scheme is superior to previous schemes with respect to communication and computational cost.
A Survey of QoS-Aware Routing Protocols for the MANET-WSN Convergence Scenarios in IoT Networks Wireless Sensor Networks (WSNs) and Mobile Ad hoc Networks (MANETs) have attracted special attention because they can serve as communication means in many areas such as healthcare, military, smart traffic and smart cities. Nowadays, as all devices can be connected to a network forming the Internet of Things (IoT), the integration of WSN, MANET and other networks into IoT is indispensable. We investigate the convergence of WSN and MANET in IoT and consider a fundamental problem, that is, how a converged (WSN-MANET) network provides quality of service (QoS) guarantees to rich multimedia applications. This is very important because the network performance of WSNs and MANETs is quite low, while multimedia applications always require quality of service at certain levels. In this work, we survey the QoS-guaranteed routing protocols for WSN-MANETs proposed in the IEEE Xplore Digital Library over the last decade. Then, based on our findings, we suggest future open research directions.
Efficient and Secure Routing Protocol Based on Artificial Intelligence Algorithms With UAV-Assisted for Vehicular Ad Hoc Networks in Intelligent Transportation Systems Vehicular Ad hoc Networks (VANETs) that are considered as a subset of Mobile Ad hoc Networks (MANETs) can be applied in the field of transportation especially in Intelligent Transportation Systems (ITS). The routing process in these networks is a challenging task due to rapid topology changes, high vehicle mobility and frequent disconnection of links. Therefore, developing an efficient routing pro...
Wireless sensor network survey A wireless sensor network (WSN) has important applications such as remote environmental monitoring and target tracking. This has been enabled by the availability, particularly in recent years, of sensors that are smaller, cheaper, and intelligent. These sensors are equipped with wireless interfaces with which they can communicate with one another to form a network. The design of a WSN depends significantly on the application, and it must consider factors such as the environment, the application's design objectives, cost, hardware, and system constraints. The goal of our survey is to present a comprehensive review of the recent literature since the publication of [I.F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, A survey on sensor networks, IEEE Communications Magazine, 2002]. Following a top-down approach, we give an overview of several new applications and then review the literature on various aspects of WSNs. We classify the problems into three different categories: (1) internal platform and underlying operating system, (2) communication protocol stack, and (3) network services, provisioning, and deployment. We review the major development in these three categories and outline new challenges.
Energy-Aware Task Offloading and Resource Allocation for Time-Sensitive Services in Mobile Edge Computing Systems Mobile Edge Computing (MEC) is a promising architecture to reduce the energy consumption of mobile devices and provide satisfactory quality-of-service to time-sensitive services. How to jointly optimize task offloading and resource allocation to minimize the energy consumption subject to the latency requirement remains an open problem, which motivates this paper. When the latency constraint is tak...
Symbolic model checking for real-time systems We describe finite-state programs over real-numbered time in a guarded-command language with real-valued clocks or, equivalently, as finite automata with real-valued clocks. Model checking answers the question which states of a real-time program satisfy a branching-time specification (given in an extension of CTL with clock variables). We develop an algorithm that computes this set of states symbolically as a fixpoint of a functional on state predicates, without constructing the state space. For this purpose, we introduce a μ-calculus on computation trees over real-numbered time. Unfortunately, many standard program properties, such as response for all nonzeno execution sequences (during which time diverges), cannot be characterized by fixpoints: we show that the expressiveness of the timed μ-calculus is incomparable to the expressiveness of timed CTL. Fortunately, this result does not impair the symbolic verification of "implementable" real-time programs-those whose safety constraints are machine-closed with respect to diverging time and whose fairness constraints are restricted to finite upper bounds on clock values. All timed CTL properties of such programs are shown to be computable as finitely approximable fixpoints in a simple decidable theory.
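As a finite, explicit-state analogue of the fixpoint computation described above (the paper works symbolically over real-valued clocks and state predicates), the sketch below computes the CTL property EF(goal) as a least fixpoint Z = goal ∪ pre(Z) over a small transition graph; it is only meant to illustrate the fixpoint iteration, not timed model checking.

```python
def ef(goal, transitions, states):
    """Least-fixpoint computation of EF(goal): the states from which some path
    reaches the goal set.  Iterate Z := goal  U  pre(Z) until stable."""
    def pre(region):
        # States with at least one successor inside the current region.
        return {s for s in states if transitions.get(s, set()) & region}
    z = set(goal)
    while True:
        nxt = z | pre(z)
        if nxt == z:
            return z
        z = nxt

states = {"s0", "s1", "s2", "s3"}
transitions = {"s0": {"s1"}, "s1": {"s2"}, "s3": {"s3"}}
print(sorted(ef({"s2"}, transitions, states)))   # ['s0', 's1', 's2']
```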
The industrial indoor channel: large-scale and temporal fading at 900, 2400, and 5200 MHz In this paper, large-scale fading and temporal fading characteristics of the industrial radio channel at 900, 2400, and 5200 MHz are determined. In contrast to measurements performed in houses and in office buildings, few attempts have been made until now to model propagation in industrial environments. In this paper, the industrial environment is categorized into different topographies. Industrial topographies are defined separately for large-scale and temporal fading, and their definition is based upon the specific physical characteristics of the local surroundings affecting both types of fading. Large-scale fading is well expressed by a one-slope path-loss model and excellent agreement with a lognormal distribution is obtained. Temporal fading is found to be Ricean and Ricean K-factors have been determined. Ricean K-factors are found to follow a lognormal distribution.
Stable fuzzy logic control of a general class of chaotic systems This paper proposes a new approach to the stable design of fuzzy logic control systems that deal with a general class of chaotic processes. The stable design is carried out on the basis of a stability analysis theorem, which employs Lyapunov's direct method and the separate stability analysis of each rule in the fuzzy logic controller (FLC). The stability analysis theorem offers sufficient conditions for the stability of a general class of chaotic processes controlled by Takagi–Sugeno–Kang FLCs. The approach suggested in this paper is advantageous because inserting a new rule requires the fulfillment of only one of the conditions of the stability analysis theorem. Two case studies concerning the fuzzy logic control of representative chaotic systems that belong to the general class of chaotic systems are included in order to illustrate our stable design approach. A set of simulation results is given to validate the theoretical results.
Survey of Fog Computing: Fundamental, Network Applications, and Research Challenges. Fog computing is an emerging paradigm that extends computation, communication, and storage facilities toward the edge of a network. Compared to traditional cloud computing, fog computing can support delay-sensitive service requests from end-users (EUs) with reduced energy consumption and low traffic congestion. Basically, fog networks are viewed as offloading to core computation and storage. Fog n...
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores (score_0 to score_13): 1.24, 0.24, 0.24, 0.24, 0.24, 0.24, 0.06, 0, 0, 0, 0, 0, 0, 0
Smart home energy management system using IEEE 802.15.4 and ZigBee Wireless personal area networks and wireless sensor networks are rapidly gaining popularity, and the IEEE 802.15 Wireless Personal Area Working Group has defined a number of different standards so as to cater to the requirements of different applications. The ubiquitous home network has gained widespread attention due to its seamless integration into everyday life. This innovative system transparently unifies various home appliances, smart sensors and energy technologies. The smart energy market requires two types of ZigBee networks for device control and energy management. Today, organizations use IEEE 802.15.4 and ZigBee to effectively deliver solutions for a variety of areas including consumer electronic device control, energy management and efficiency, home and commercial building automation as well as industrial plant management. We present the design of a multi-sensing, heating and air-conditioning system and actuation application for home users: a sensor network-based smart light control system for smart home and energy control production. This paper designs smart home device descriptions and standard practices for demand response and load management "Smart Energy" applications needed in a smart energy based residential or light commercial environment. The control application domains included in this initial version are sensing device control, pricing and demand response and load control applications. This paper introduces smart home interfaces and device definitions to allow interoperability among ZigBee devices produced by various manufacturers of electrical equipment, meters, and smart energy enabling products. We introduce the proposed home energy control system design, which provides intelligent services for users, and demonstrate its implementation using a real testbed.
On the History of the Minimum Spanning Tree Problem It is standard practice among authors discussing the minimum spanning tree problem to refer to the work of Kruskal (1956) and Prim (1957) as the sources of the problem and its first efficient solutions, despite the citation by both of Boruvka (1926) as a predecessor. In fact, there are several apparently independent sources and algorithmic solutions of the problem. They have appeared in Czechoslovakia, France, and Poland, going back to the beginning of this century. We shall explore and compare these works and their motivations, and relate them to the most recent advances on the minimum spanning tree problem.
A Three-Phase Search Approach for the Quadratic Minimum Spanning Tree Problem. Given an undirected graph with costs associated with each edge as well as each pair of edges, the quadratic minimum spanning tree problem (QMSTP) consists of determining a spanning tree of minimum cost. QMSTP is useful to model many real-life network design applications. We propose a three-phase search approach named TPS for solving QMSTP, which organizes the search process into three distinctive phases which are iterated: (1) a descent neighborhood search phase using two move operators to reach a local optimum from a given starting solution, (2) a local optima exploring phase to discover nearby local optima within a given regional area, and (3) a perturbation-based diversification phase to jump out of the current regional search area. TPS also introduces a pre-estimation criterion to significantly improve the efficiency of neighborhood evaluation, and develops a new swap-vertex neighborhood (as well as a swap-vertex based perturbation operator) which prove to be quite powerful for solving a series of special instances with particular structures. Computational experiments based on 7 sets of 659 popular benchmarks show that TPS produces highly competitive results compared to the best performing approaches in the literature. TPS discovers improved best known results (new upper bounds) for 33 open instances and matches the best known results for all the remaining instances. Critical elements and parameters of the TPS algorithm are analyzed to understand its behavior. Highlights: QMSTP is a general model able to formulate a number of network design problems. We propose a three-phase search heuristic (TPS) for this problem. TPS is assessed on 7 groups of 659 representative benchmarks from the literature. TPS finds improved best solutions for 33 challenging instances. TPS finds all the optimal solutions for the 29 instances transformed from the QAP.
Optimising small-world properties in VANETs: Centralised and distributed overlay approaches. The advantages of bringing small-world properties to mobile ad hoc networks (MANETs) in terms of quality of service have been studied and outlined in past years. In this work, we focus on the specific class of vehicular ad hoc networks (VANETs) and propose to un-partition such networks and improve their small-world properties. To this end, a subset of nodes, called injection points, is chosen to provide backend connectivity and compose a fully-connected overlay network. The optimisation problem we consider is to find the minimal set of injection points to constitute the overlay that will optimise the small-world properties of the resulting network, i.e., (1) maximising the clustering coefficient (CC) so that it approaches the CC of a corresponding regular graph and (2) minimising the difference between the average path length (APL) of the considered graph and the APL of corresponding random graphs. Two accurate evolutionary algorithms (namely, NSGAII and MOCHC) are used to find an upper bound of high-quality solutions to this new multi-objective optimisation problem, on realistic instances in the city centre of Luxembourg. The obtained sets of solutions are then used to assess the performance of five novel heuristics proposed to solve the problem, i.e., two centralised and three decentralised. The results provided by these heuristics turned out to be extremely accurate with respect to the solutions found by the evolutionary algorithms.
A new routing protocol for energy efficient mobile applications for ad hoc networks. Highlights: A new Energy-Aware Span Routing Protocol (EASRP) for wireless ad hoc networks is proposed. The proposed protocol can minimize the utilization of the energy source by combining the energy-saving approaches Span and AFECA. It uses the Remote Activated Switch to wake up sleeping nodes during inactive time, reducing the latency problem. The performance of the proposed protocol is evaluated using Network Simulator-2.
Effective crowdsensing and routing algorithms for next generation vehicular networks The vehicular ad hoc network (VANET) has recently emerged as a promising networking technique attracting both the vehicular manufacturing industry and the academic community. Therefore, the design of next generation VANET management schemes becomes an important issue to satisfy the new demands. However, it is difficult to adapt traditional control approaches, which have already proven reliable in ad-hoc wireless networks, directly. In this study, we focus on the development of vehicular crowdsensing and routing algorithms in VANETs. The proposed scheme, which is based on reinforcement learning and game theory, is designed as novel vertical and horizontal game models, and provides an effective dual-plane control mechanism. In a vertical game, network agent and vehicles work together toward an appropriate crowdsensing process. In a horizontal game, vehicles select their best routing route for the VANET routing. Based on the decentralized, distributed manner, our dual-plane game paradigm captures the dynamics of the VANET system. Simulations and performance analysis verify the efficiency of the proposed scheme, showing that our approach can outperform existing schemes in terms of RSU’s task success ratio, normalized routing throughput, and end-to-end packet delay.
An enhanced QoS CBT multicast routing protocol based on Genetic Algorithm in a hybrid HAP-Satellite system A QoS multicast routing scheme based on Genetic Algorithms (GA) heuristic is presented in this paper. Our proposal, called Constrained Cost–Bandwidth–Delay Genetic Algorithm (CCBD-GA), is applied to a multilayer hybrid platform that includes High Altitude Platforms (HAPs) and a Satellite platform. This GA scheme has been compared with another GA well-known in the literature called Multi-Objective Genetic Algorithm (MOGA) in order to show the proposed algorithm goodness. In order to test the efficiency of GA schemes on a multicast routing protocol, these GA schemes are inserted into an enhanced version of the Core-Based Tree (CBT) protocol with QoS support. CBT and GA schemes are tested in a multilayer hybrid HAP and Satellite architecture and interesting results have been discovered. The joint bandwidth–delay metrics can be very useful in hybrid platforms such as that considered, because it is possible to take advantage of the single characteristics of the Satellite and HAP segments. The HAP segment offers low propagation delay permitting QoS constraints based on maximum end-to-end delay to be met. The Satellite segment, instead, offers high bandwidth capacity with higher propagation delay. The joint bandwidth–delay metric permits the balancing of the traffic load respecting both QoS constraints. Simulation results have been evaluated in terms of HAP and Satellite utilization, bandwidth, end-to-end delay, fitness function and cost of the GA schemes.
On the ratio of optimal integral and fractional covers It is shown that the ratio of optimal integral and fractional covers of a hypergraph does not exceed 1 + log d, where d is the maximum degree. This theorem may replace probabilistic methods in certain circumstances. Several applications are shown.
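The bound in this abstract is usually illustrated with the greedy cover construction: the size of the greedy integral cover is within a factor 1 + ln d of the optimal fractional cover. The sketch below is a generic greedy set-cover routine under that reading, with a toy universe; it is illustrative, not the paper's proof.

```python
import math

def greedy_cover(universe, sets):
    """Greedy integral cover: repeatedly take the set covering the most
    still-uncovered elements.  Its size is at most (1 + ln d) times the
    optimal fractional cover, where d is the largest set size (the set-cover
    reading of the paper's maximum degree)."""
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("universe is not coverable by the given sets")
        chosen.append(best)
        uncovered -= best
    return chosen

universe = set(range(1, 10))
sets = [{1, 2, 3, 4}, {4, 5, 6}, {6, 7, 8, 9}, {1, 5, 9}]
cover = greedy_cover(universe, sets)
print(len(cover), "sets; bound factor 1 + ln d =", 1 + math.log(4))
```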
Task Offloading in Vehicular Edge Computing Networks: A Load-Balancing Solution Recently, the rapid advance of vehicular networks has led to the emergence of diverse delay-sensitive vehicular applications such as automatic driving, auto navigation. Note that existing resource-constrained vehicles cannot adequately meet these demands on low / ultra-low latency. By offloading parts of the vehicles’ compute-intensive tasks to the edge servers in proximity, mobile edge computing is envisioned as a promising paradigm, giving rise to the vehicular edge computing networks (VECNs). However, most existing works on task offloading in VECNs did not take the load balancing of the computation resources at the edge servers into account. To address these issues and given the high dynamics of vehicular networks, we introduce fiber-wireless (FiWi) technology to enhance VECNs, due to its advantages on centralized network management and supporting multiple communication techniques. Aiming to minimize the processing delay of the vehicles’ computation tasks, we propose a software-defined networking (SDN) based load-balancing task offloading scheme in FiWi enhanced VECNs, where SDN is introduced to provide supports for the centralized network and vehicle information management. Extensive analysis and numerical results corroborate that our proposed load-balancing scheme can achieve superior performance on processing delay reduction by utilizing the edge servers’ computation resources more efficiently.
A hierarchical type-2 fuzzy logic control architecture for autonomous mobile robots Autonomous mobile robots navigating in changing and dynamic unstructured environments like the outdoor environments need to cope with large amounts of uncertainties that are inherent of natural environments. The traditional type-1 fuzzy logic controller (FLC) using precise type-1 fuzzy sets cannot fully handle such uncertainties. A type-2 FLC using type-2 fuzzy sets can handle such uncertainties to produce a better performance. In this paper, we present a novel reactive control architecture for autonomous mobile robots that is based on type-2 FLC to implement the basic navigation behaviors and the coordination between these behaviors to produce a type-2 hierarchical FLC. In our experiments, we implemented this type-2 architecture in different types of mobile robots navigating in indoor and outdoor unstructured and challenging environments. The type-2-based control system dealt with the uncertainties facing mobile robots in unstructured environments and resulted in a very good performance that outperformed the type-1-based control system while achieving a significant rule reduction compared to the type-1 system.
Multi-stage genetic programming: A new strategy to nonlinear system modeling This paper presents a new multi-stage genetic programming (MSGP) strategy for modeling nonlinear systems. The proposed strategy is based on incorporating the individual effect of predictor variables and the interactions among them to provide more accurate simulations. According to the MSGP strategy, an efficient formulation for a problem comprises different terms. In the first stage of the MSGP-based analysis, the output variable is formulated in terms of an influencing variable. Thereafter, the error between the actual and the predicted value is formulated in terms of a new variable. Finally, the interaction term is derived by formulating the difference between the actual values and the values predicted by the individually developed terms. The capabilities of MSGP are illustrated by applying it to the formulation of different complex engineering problems. The problems analyzed herein include the following: (i) simulation of pH neutralization process, (ii) prediction of surface roughness in end milling, and (iii) classification of soil liquefaction conditions. The validity of the proposed strategy is confirmed by applying the derived models to the parts of the experimental results that were not included in the analyses. Further, the external validation of the models is verified using several statistical criteria recommended by other researchers. The MSGP-based solutions are capable of effectively simulating the nonlinear behavior of the investigated systems. The results of MSGP are found to be more accurate than those of standard GP and artificial neural network-based models.
Placing Virtual Machines to Optimize Cloud Gaming Experience Optimizing cloud gaming experience is no easy task due to the complex tradeoff between gamer quality of experience (QoE) and provider net profit. We tackle the challenge and study an optimization problem to maximize the cloud gaming provider's total profit while achieving just-good-enough QoE. We conduct measurement studies to derive the QoE and performance models. We formulate and optimally solve the problem. The optimization problem has exponential running time, and we develop an efficient heuristic algorithm. We also present an alternative formulation and algorithms for closed cloud gaming services with dedicated infrastructures, where the profit is not a concern and overall gaming QoE needs to be maximized. We present a prototype system and testbed using off-the-shelf virtualization software, to demonstrate the practicality and efficiency of our algorithms. Our experience on realizing the testbed sheds some lights on how cloud gaming providers may build up their own profitable services. Last, we conduct extensive trace-driven simulations to evaluate our proposed algorithms. The simulation results show that the proposed heuristic algorithms: (i) produce close-to-optimal solutions, (ii) scale to large cloud gaming services with 20,000 servers and 40,000 gamers, and (iii) outperform the state-of-the-art placement heuristic, e.g., by up to 3.5 times in terms of net profits.
Adversarial Example Generation with Syntactically Controlled Paraphrase Networks. We propose syntactically controlled paraphrase networks (SCPNs) and use them to generate adversarial examples. Given a sentence and a target syntactic form (e.g., a constituency parse), SCPNs are trained to produce a paraphrase of the sentence with the desired syntax. We show it is possible to create training data for this task by first doing backtranslation at a very large scale, and then using a parser to label the syntactic transformations that naturally occur during this process. Such data allows us to train a neural encoder-decoder model with extra inputs to specify the target syntax. A combination of automated and human evaluations show that SCPNs generate paraphrases that follow their target specifications without decreasing paraphrase quality when compared to baseline (uncontrolled) paraphrase systems. Furthermore, they are more capable of generating syntactically adversarial examples that both (1) fool pretrained models and (2) improve the robustness of these models to syntactic variation when used to augment their training data.
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies have focused on the structure design and assistance force optimization of soft LLEs, little work has been conducted on the hardware circuit design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and introduce its hardware circuit design. A soft LLE for hip flexion assistance and a hardware circuit system with scalability are proposed. To assess the efficacy of the soft LLE, experimental tests evaluating sensor data acquisition, force tracking performance, lower limb muscle activity and metabolic cost were conducted. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
Scores (score_0 to score_13): 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0
Practical consensus of homogeneous sampled-data multi-agent systems The aim of this paper is to study the second-order practical consensus of homogeneous sampled-data multi-agent systems (MASs). To do this, a new nonlinear emulation strategy based on homogeneity is developed. It is then applied to MASs under synchronously variable sampling. Finally, a comparison with the classical linear strategy is provided in the case of MASs under synchronously periodic sampling.
Generalized dilations and numerically solving discrete-time homogeneous optimization problems We introduce generalized dilations, a broader class of operators than that of dilations, and consider homogeneity with respect to this new class of dilations. For discrete-time systems that are asymptotically controllable and homogeneous (with degree zero) we propose a method to numerically approximate any homogeneous value function (solution to an infinite horizon optimization problem) to arbitrary accuracy. We also show that the method can be used to generate an offline computed stabilizing feedback law.
Delay-independent stability of homogeneous systems. A class of nonlinear systems with homogeneous right-hand sides and time-varying delay is studied. It is assumed that the trivial solution of a system is asymptotically stable when delay is equal to zero. By the usage of the Lyapunov direct method and the Razumikhin approach, it is proved that the asymptotic stability of the zero solution of the system is preserved for an arbitrary continuous nonnegative and bounded delay. The conditions of stability of time-delay systems by homogeneous approximation are obtained. Furthermore, it is shown that the presented approaches permit to derive delay-independent stability conditions for some types of nonlinear systems with distributed delay. Two examples of nonlinear oscillatory systems are given to demonstrate the effectiveness of our results.
Exponential Stability of Homogeneous Impulsive Positive Delay Systems of Degree One This paper investigates the global exponential stability of homogeneous impulsive positive delay systems of degree one. By using max-separable Lyapunov functions, a sufficient criterion is obtained for the exponential stability of continuous-time homogeneous impulsive positive delay systems of degree one. We also provide the corresponding counterpart for discrete-time homogeneous impulsive positive delay systems of degree one. Our results show that a stable impulse-free system can keep its original stability property under certain destabilising impulsive perturbations. It should be noted that this is the first time that exponential stability results for homogeneous impulsive positive delay systems of degree one have been given. Numerical examples are provided to demonstrate the effectiveness of the derived results.
Stabilization of Stochastic Nonlinear Delay Systems with Exogenous Disturbances and the Event-Triggered Feedback Control This paper is devoted to study the stabilization problem of stochastic nonlinear delay systems with exogenous disturbances and the event-triggered feedback control. By introducing the notation of input-to-state practical stability and an event-triggered strategy, we establish the input-to-state practically exponential mean-square stability of the suggested system. Moreover, we investigate the stabilization result by designing the feedback gain matrix and the event-triggered feedback controller, which is expressed in terms of linear matrix inequalities (LMIs). Also, the lower bounds of inter-execution times by the proposed event-triggered control method are obtained. Finally, an example is given to show the effectiveness of the proposed method. Compared with a large number of results for discrete-time stochastic systems, only a few results have appeared on the event-triggered control for continuous-time stochastic systems. In particular, there has been no published papers on the event-triggered control for continuous-time stochastic delay systems. This work is a first try to fill the gap on the topic.
Preasymptotic Stability and Homogeneous Approximations of Hybrid Dynamical Systems Hybrid dynamical systems are systems that combine features of continuous-time dynamical systems and discrete-time dynamical systems, and can be modeled by a combination of differential equations or inclusions, difference equations or inclusions, and constraints. Preasymptotic stability is a concept that results from separating the conditions that asymptotic stability places on the behavior of solutions from issues related to existence of solutions. In this paper, techniques for approximating hybrid dynamical systems that generalize classical linearization techniques are proposed. The approximation techniques involve linearization, tangent cones, homogeneous approximations of functions and set-valued mappings, and tangent homogeneous cones, where homogeneity is considered with respect to general dilations. The main results deduce preasymptotic stability of an equilibrium point for a hybrid dynamical system from preasymptotic stability of the equilibrium point for an approximate system. Further results relate the degree of homogeneity of a hybrid system to the Zeno phenomenon that can appear in the solutions of the system.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
Long short-term memory. Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
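A minimal NumPy sketch of one LSTM step, showing the gated cell state (the "constant error carousel") and the multiplicative gates described above. It uses the now-standard formulation with a forget gate, which differs slightly from the original 1997 architecture; dimensions and initialization are arbitrary assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: multiplicative gates control read/write access to the
    cell state c, which carries information (and gradients) across steps."""
    z = W @ x + U @ h + b                      # all four gates in one affine map
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c_new = f * c + i * g                      # gated update of the cell state
    h_new = o * np.tanh(c_new)                 # gated exposure of the cell state
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hidden = 8, 16
W = rng.normal(scale=0.1, size=(4 * n_hidden, n_in))
U = rng.normal(scale=0.1, size=(4 * n_hidden, n_hidden))
b = np.zeros(4 * n_hidden)
h = c = np.zeros(n_hidden)
for _ in range(100):                           # unroll over a long sequence
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)
print(h.shape)                                 # (16,)
```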
Joint Task Offloading and Resource Allocation for Multi-Server Mobile-Edge Computing Networks Mobile-edge computing (MEC) is an emerging paradigm that provides a capillary distribution of cloud computing capabilities to the edge of the wireless access network, enabling rich services and applications in close proximity to the end users. In this paper, an MEC enabled multi-cell wireless network is considered where each base station (BS) is equipped with a MEC server that assists mobile users...
Automated Flower Classification over a Large Number of Classes We investigate to what extent combinations of features can improve classification performance on a large dataset of similar classes. To this end we introduce a 103 class flower dataset. We compute four different features for the flowers, each describing different aspects, namely the local shape/texture, the shape of the boundary, the overall spatial distribution of petals, and the colour. We combine the features using a multiple kernel framework with an SVM classifier. The weights for each class are learnt using the method of Varma and Ray [16], which has achieved state-of-the-art performance on other large datasets, such as Caltech 101/256. Our dataset has a similar challenge in the number of classes, but with the added difficulty of large between-class similarity and small within-class similarity. Results show that learning the optimum kernel combination of multiple features vastly improves the performance, from 55.1% for the best single feature to 72.8% for the combination of all features.
A Comparative Study of Distributed Learning Environments on Learning Outcomes Advances in information and communication technologies have fueled rapid growth in the popularity of technology-supported distributed learning (DL). Many educational institutions, both academic and corporate, have undertaken initiatives that leverage the myriad of available DL technologies. Despite their rapid growth in popularity, however, alternative technologies for DL are seldom systematically evaluated for learning efficacy. Considering the increasing range of information and communication technologies available for the development of DL environments, we believe it is paramount for studies to compare the relative learning outcomes of various technologies. In this research, we employed a quasi-experimental field study approach to investigate the relative learning effectiveness of two collaborative DL environments in the context of an executive development program. We also adopted a framework of hierarchical characteristics of group support system (GSS) technologies, outlined by DeSanctis and Gallupe (1987), as the basis for characterizing the two DL environments. One DL environment employed a simple e-mail and listserv capability while the other used a sophisticated GSS (herein referred to as Beta system). Interestingly, the learning outcome of the e-mail environment was higher than the learning outcome of the more sophisticated GSS environment. The post-hoc analysis of the electronic messages indicated that the students in groups using the e-mail system exchanged a higher percentage of messages related to the learning task. The Beta system users exchanged a higher level of technology sense-making messages. No significant difference was observed in the students' satisfaction with the learning process under the two DL environments.
NETWRAP: An NDN Based Real-Time Wireless Recharging Framework for Wireless Sensor Networks Using vehicles equipped with wireless energy transmission technology to recharge sensor nodes over the air is a game-changer for traditional wireless sensor networks. The recharging policy regarding when to recharge which sensor nodes critically impacts the network performance. So far only a few works have studied such recharging policy for the case of using a single vehicle. In this paper, we propose NETWRAP, an NDN based Real-Time Wireless Recharging Protocol for dynamic wireless recharging in sensor networks. The real-time recharging framework supports single or multiple mobile vehicles. Employing multiple mobile vehicles provides more scalability and robustness. To efficiently deliver sensor energy status information to vehicles in real-time, we leverage concepts and mechanisms from named data networking (NDN) and design energy monitoring and reporting protocols. We derive theoretical results on the energy neutral condition and the minimum number of mobile vehicles required for perpetual network operations. Then we study how to minimize the total traveling cost of vehicles while guaranteeing all the sensor nodes can be recharged before their batteries deplete. We formulate the recharge optimization problem into a Multiple Traveling Salesman Problem with Deadlines (m-TSP with Deadlines), which is NP-hard. To accommodate the dynamic nature of node energy conditions with low overhead, we present an algorithm that selects the node with the minimum weighted sum of traveling time and residual lifetime. Our scheme not only improves network scalability but also ensures the perpetual operation of networks. Extensive simulation results demonstrate the effectiveness and efficiency of the proposed design. The results also validate the correctness of the theoretical analysis and show significant improvements that cut the number of nonfunctional nodes by half compared to the static scheme while maintaining the network overhead at the same level.
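The node-selection rule mentioned above (minimum weighted sum of traveling time and residual lifetime) can be sketched in a few lines; the weight w and the toy candidate list are assumptions for illustration.

```python
def select_next_node(nodes, w=0.5):
    """Pick the sensor with the smallest weighted sum of travel time and residual lifetime."""
    return min(nodes, key=lambda n: w * n["travel_time"] + (1 - w) * n["residual_life"])

candidates = [
    {"id": "s1", "travel_time": 120.0, "residual_life": 900.0},
    {"id": "s2", "travel_time": 300.0, "residual_life": 200.0},
    {"id": "s3", "travel_time": 60.0,  "residual_life": 1500.0},
]
print(select_next_node(candidates)["id"])  # "s2": nearly depleted, so worth the longer detour
```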
Modeling taxi driver anticipatory behavior. As part of a wider behavioral agent-based model that simulates taxi drivers' dynamic passenger-finding behavior under uncertainty, we present a model of strategic behavior of taxi drivers in anticipation of substantial time varying demand at locations such as airports and major train stations. The model assumes that, considering a particular decision horizon, a taxi driver decides to transfer to such a destination based on a reward function. The dynamic uncertainty of demand is captured by a time dependent pick-up probability, which is a cumulative distribution function of waiting time. The model allows for information learning by which taxi drivers update their beliefs from past experiences. A simulation on a real road network, applied to test the model, indicates that the formulated model dynamically improves passenger-finding strategies at the airport. Taxi drivers learn when to transfer to the airport in anticipation of the time-varying demand at the airport to minimize their waiting time.
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies have focused on the structure design and assistance force optimization of soft LLEs, little work has been conducted on hardware circuit design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and to introduce its hardware circuit design. A soft LLE for hip flexion assistance and a scalable hardware circuit system are proposed. To assess the efficacy of the soft LLE, experimental tests were conducted to evaluate sensor data acquisition, force tracking performance, lower limb muscle activity, and metabolic cost. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0
Blockchain-based asymmetric group key agreement protocol for internet of vehicles Data sharing and group communication are the main applications for the Internet of Vehicles (IoV). IoV has some obvious characteristics: terminal resources are limited, sensitive information is easily leaked, and the whole network is vulnerable to attacks. Therefore, in terminal communication for IoV, it is urgent to construct an efficient group key to encrypt the communication contents and thereby protect their security and users' privacy. In this work, we propose a blockchain-based asymmetric group key agreement protocol for IoV (B-AGKA), in which blockchain anonymous authentication technology is adopted to protect users' privacy. In this scheme, the calculation is distributed to each node in a balanced manner, reducing the computation and communication overhead, and blockchain logging techniques are adopted for traceability and accountability. The proposed protocol is proven secure under the hardness assumption of the decisional bilinear Diffie–Hellman (DBDH) problem. The performance analysis shows that the proposed scheme is more efficient than existing works.
Reliable Computation Offloading for Edge-Computing-Enabled Software-Defined IoV Internet of Vehicles (IoV) has drawn great interest in recent years. Various IoV applications have emerged for improving the safety, efficiency, and comfort on the road. Cloud computing constitutes a popular technique for supporting delay-tolerant entertainment applications. However, for advanced latency-sensitive applications (e.g., auto/assisted driving and emergency failure management), cloud computing may result in excessive delay. Edge computing, which extends computing and storage capabilities to the edge of the network, emerges as an attractive technology. Therefore, to support these computationally intensive and latency-sensitive applications in IoVs, in this article, we integrate mobile-edge computing nodes (i.e., mobile vehicles) and fixed edge computing nodes (i.e., fixed road infrastructures) to provide low-latency computing services cooperatively. For better exploiting these heterogeneous edge computing resources, the concept of software-defined networking (SDN) and edge-computing-aided IoV (EC-SDIoV) is conceived. Moreover, in a complex and dynamic IoV environment, the outage of both processing nodes and communication links becomes inevitable, which may have life-threatening consequences. In order to ensure the completion with high reliability of latency-sensitive IoV services, we introduce both partial computation offloading and reliable task allocation with a reprocessing mechanism to EC-SDIoV. Since the optimization problem is nonconvex and NP-hard, a heuristic fault-tolerant particle swarm optimization algorithm for maximizing reliability (FPSO-MR) is designed under latency constraints. Performance evaluation results validate that the proposed scheme is indeed capable of reducing the latency as well as improving the reliability of the EC-SDIoV.
Energy Efficiency Resource Allocation For D2d Communication Network Based On Relay Selection In order to address spectrum resource shortage and energy consumption, we put forward a new model that combines D2D communication with energy harvesting technology: an energy harvesting-aided D2D communication network under cognitive radio (EHA-CRD), where the D2D users harvest energy from the base station and the D2D source communicates with the D2D destination through D2D relays. Our goal is to maximize the energy efficiency (EE) of the network by joint time allocation and relay selection, while taking into account the constraints on the D2D signal-to-noise ratio and the cellular users' rates. During this process, the energy collection time and communication time are randomly allocated. The EE maximization problem can be divided into two sub-problems: (1) a relay selection problem and (2) a time optimization problem. For the first sub-problem, we propose a weighted sum maximum algorithm to select the best relay. For the second sub-problem, the EE maximization problem is non-convex in time. Thus, using fractional programming theory, we transform it into a standard convex optimization problem and propose an iterative algorithm to obtain the optimal solution. Simulation results show that the proposed relay selection and time optimization algorithms achieve significant improvements over existing algorithms.
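The fractional-programming step can be illustrated with a Dinkelbach-style iteration, a standard way to reduce a ratio objective to a sequence of subtractive subproblems; the placeholder rate and power models and the 1-D grid search below are illustrative assumptions, not the paper's system model.

```python
import numpy as np
from math import log2

def rate(t):      # numerator: toy achievable rate for a time split t in (0, 1)
    return (1 - t) * log2(1 + 10 * t / max(1 - t, 1e-9))

def power(t):     # denominator: toy consumed power
    return 0.5 + t

ts = np.linspace(1e-3, 1 - 1e-3, 2000)    # candidate time allocations
lam = 0.0                                  # current energy-efficiency estimate
for _ in range(50):
    vals = np.array([rate(t) - lam * power(t) for t in ts])  # subtractive subproblem
    t_star = float(ts[vals.argmax()])
    if abs(vals.max()) < 1e-6:             # F(lam) close to 0 => lam is the optimal ratio
        break
    lam = rate(t_star) / power(t_star)     # Dinkelbach update of the ratio
print(round(t_star, 4), round(lam, 4))
```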
Repeated Game Analysis for Cooperative MAC With Incentive Design for Wireless Networks. Cooperative communications offer appealing potentials to improve quality of service (QoS) for wireless networks. Many existing works on cooperative communications assume that participation in cooperative relaying is unconditional. In practice, however, due to resource consumption, it is vital to provide incentives for selfish cooperating peer nodes. In this paper, we analyze a cooperative medium a...
Blockchain for the Internet of Vehicles: A Decentralized IoT Solution for Vehicles Communication Using Ethereum. The concept of smart cities has become prominent in modern metropolises due to the emergence of embedded and connected smart devices, systems, and technologies. They have enabled the connection of every "thing" to the Internet. Therefore, in the upcoming era of the Internet of Things, the Internet of Vehicles (IoV) will play a crucial role in newly developed smart cities. The IoV has the potential to solve various traffic and road safety problems effectively in order to prevent fatal crashes. However, a particular challenge in the IoV, especially in Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communications, is to ensure fast, secure transmission and accurate recording of the data. In order to overcome these challenges, this work adapts Blockchain technology for real-time applications (RTA) to solve Vehicle-to-Everything (V2X) communication problems. Therefore, the main novelty of this paper is to develop a Blockchain-based IoT system in order to establish secure communication and create an entirely decentralized cloud computing platform. Moreover, the authors qualitatively tested the performance and resilience of the proposed system against common security attacks. Computational tests showed that the proposed solution solved the main challenges of Vehicle-to-X (V2X) communications such as security, centralization, and lack of privacy. In addition, it guaranteed an easy data exchange between different actors of intelligent transportation systems.
Occlusion-Aware Detection For Internet Of Vehicles In Urban Traffic Sensing Systems Vehicle detection is a fundamental challenge in urban traffic surveillance video. Due to the powerful representation ability of convolutional neural networks (CNNs), CNN-based detection approaches have achieved incredible success on generic object detection. However, they cannot deal well with vehicle occlusion in complex urban traffic scenes. In this paper, we present a new occlusion-aware vehicle detection CNN framework, which is an effective and efficient framework for vehicle detection. First, we concatenate the low-level and high-level feature maps to capture a more robust feature representation; then we fuse the local and global feature maps to handle vehicle occlusion; context information is also adopted in our framework. Extensive experiments demonstrate the competitive performance of our proposed framework. Our method achieves better accuracy than the original Faster R-CNN on a new urban traffic surveillance dataset (UTSD) which contains a large number of occluded vehicles and complex scenes.
Probabilistic encryption A new probabilistic model of data encryption is introduced. For this model, under suitable complexity assumptions, it is proved that extracting any information about the cleartext from the ciphertext is hard on the average for an adversary with polynomially bounded computational resources. The proof holds for any message space with any probability distribution. The first implementation of this model is presented. The security of this implementation is proved under the intractability assumption of deciding Quadratic Residuosity modulo composite numbers whose factorization is unknown.
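A toy sketch of bit-by-bit probabilistic encryption in this quadratic-residuosity style is given below: bit 0 is sent as a random quadratic residue mod N and bit 1 as a random non-residue, so each bit has many possible ciphertexts. The tiny primes and the fixed non-residue y are purely illustrative; real parameters would be hundreds of digits long.

```python
import random

p, q = 7, 11
N = p * q
y = 6            # a quadratic non-residue mod both p and q (illustrative choice)

def is_qr(c, prime):
    return pow(c, (prime - 1) // 2, prime) == 1     # Euler's criterion

def encrypt_bit(b):
    while True:
        r = random.randrange(2, N)
        if r % p and r % q:                         # r must be coprime to N
            break
    return (pow(y, b, N) * r * r) % N               # residue if b == 0, non-residue if b == 1

def decrypt_bit(c):
    return 0 if (is_qr(c, p) and is_qr(c, q)) else 1

bits = [1, 0, 1, 1, 0]
cts = [encrypt_bit(b) for b in bits]
print(cts, [decrypt_bit(c) for c in cts])           # ciphertexts differ from run to run
```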
A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm Swarm intelligence is a research branch that models the population of interacting agents or swarms that are able to self-organize. An ant colony, a flock of birds or an immune system is a typical example of a swarm system. Bees' swarming around their hive is another example of swarm intelligence. Artificial Bee Colony (ABC) Algorithm is an optimization algorithm based on the intelligent behaviour of honey bee swarm. In this work, ABC algorithm is used for optimizing multivariable functions and the results produced by ABC, Genetic Algorithm (GA), Particle Swarm Algorithm (PSO) and Particle Swarm Inspired Evolutionary Algorithm (PS-EA) have been compared. The results showed that ABC outperforms the other algorithms.
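A compact sketch of the ABC loop (employed, onlooker and scout phases) on the sphere function is shown below; the colony size, abandonment limit and bounds are illustrative choices, not the parameter settings studied in the paper.

```python
import numpy as np

def abc_minimize(f, dim=5, n_food=20, limit=50, max_cycles=200, bounds=(-5.0, 5.0)):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    foods = rng.uniform(lo, hi, size=(n_food, dim))    # food sources = candidate solutions
    costs = np.array([f(x) for x in foods])            # assumes a nonnegative objective
    trials = np.zeros(n_food, dtype=int)               # stagnation counters

    def try_neighbor(i):
        k = rng.integers(n_food)
        if k == i:
            k = (k + 1) % n_food                       # partner must differ from i
        j = rng.integers(dim)                          # perturb a single dimension
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        cand = np.clip(cand, lo, hi)
        val = f(cand)
        if val < costs[i]:                             # greedy replacement
            foods[i], costs[i], trials[i] = cand, val, 0
        else:
            trials[i] += 1

    for _ in range(max_cycles):
        for i in range(n_food):                        # employed-bee phase
            try_neighbor(i)
        probs = 1.0 / (1.0 + costs)                    # fitness-proportional onlooker choice
        probs /= probs.sum()
        for i in rng.choice(n_food, size=n_food, p=probs):
            try_neighbor(i)                            # onlooker-bee phase
        worst = int(np.argmax(trials))                 # scout phase: abandon a stagnant source
        if trials[worst] > limit:
            foods[worst] = rng.uniform(lo, hi, size=dim)
            costs[worst] = f(foods[worst])
            trials[worst] = 0
    best = int(np.argmin(costs))
    return foods[best], costs[best]

print(abc_minimize(lambda x: float(np.sum(x ** 2))))
```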
BeCome: Blockchain-Enabled Computation Offloading for IoT in Mobile Edge Computing Benefiting from the real-time processing ability of edge computing, computing tasks requested by smart devices in the Internet of Things are offloaded to edge computing devices (ECDs) for implementation. However, ECDs are often overloaded or underloaded with disproportionate resource requests. In addition, during the process of task offloading, the transmitted information is vulnerable, which can result in data incompleteness. In view of this challenge, a blockchain-enabled computation offloading method, named BeCome, is proposed in this article. Blockchain technology is employed in edge computing to ensure data integrity. Then, the nondominated sorting genetic algorithm III is adopted to generate strategies for balanced resource allocation. Furthermore, simple additive weighting and multicriteria decision making are utilized to identify the optimal offloading strategy. Finally, performance evaluations of BeCome are given through simulation experiments.
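The simple additive weighting (SAW) step used to pick one offloading strategy from a set of candidates can be sketched as follows; the criteria, weights and candidate scores are toy assumptions.

```python
import numpy as np

# candidate strategies scored on (completion time, energy, load balance);
# lower is better for the first two criteria, higher is better for the third
candidates = np.array([
    [0.8, 0.5, 0.7],
    [0.6, 0.9, 0.9],
    [1.0, 0.4, 0.5],
])
weights = np.array([0.5, 0.3, 0.2])        # relative importance of the criteria

# SAW normalization: cost criteria use min/value, benefit criteria use value/max
norm = np.empty_like(candidates)
norm[:, 0] = candidates[:, 0].min() / candidates[:, 0]
norm[:, 1] = candidates[:, 1].min() / candidates[:, 1]
norm[:, 2] = candidates[:, 2] / candidates[:, 2].max()

scores = norm @ weights                    # weighted sum per candidate strategy
print("chosen strategy:", int(np.argmax(scores)), scores.round(3))
```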
Space-time super-resolution. We propose a method for constructing a video sequence of high space-time resolution by combining information from multiple low-resolution video sequences of the same dynamic scene. Super-resolution is performed simultaneously in time and in space. By "temporal super-resolution," we mean recovering rapid dynamic events that occur faster than regular frame-rate. Such dynamic events are not visible (or else are observed incorrectly) in any of the input sequences, even if these are played in "slow-motion." The spatial and temporal dimensions are very different in nature, yet are interrelated. This leads to interesting visual trade-offs in time and space and to new video applications. These include: 1) treatment of spatial artifacts (e.g., motion-blur) by increasing the temporal resolution and 2) combination of input sequences of different space-time resolutions (e.g., NTSC, PAL, and even high quality still images) to generate a high quality video sequence. We further analyze and compare characteristics of temporal super-resolution to those of spatial super-resolution. These include: How many video cameras are needed to obtain increased resolution? What is the upper bound on resolution improvement via super-resolution? What is the temporal analogue to the spatial "ringing" effect?
The concept of flow in collaborative game-based learning Generally, high-school students have been characterized as bored and disengaged from the learning process. However, certain educational designs promote excitement and engagement. Game-based learning is assumed to be such a design. In this study, the concept of flow is used as a framework to investigate student engagement in the process of gaming and to explain effects on game performance and student learning outcome. Frequency 1550, a game about medieval Amsterdam merging digital and urban play spaces, has been examined as an exemplar of game-based learning. This 1-day game was played in teams by 216 students of three schools for secondary education in Amsterdam. Generally, these students show flow with their game activities, although they were distracted by solving problems in technology and navigation. Flow was shown to have an effect on their game performance, but not on their learning outcome. Distractive activities and being occupied with competition between teams did show an effect on the learning outcome of students: the fewer students were distracted from the game and the more they were engaged in group competition, the more students learned about the medieval history of Amsterdam. Consequences for the design of game-based learning in secondary education are discussed.
Segmentation-Based Image Copy-Move Forgery Detection Scheme In this paper, we propose a scheme to detect the copy-move forgery in an image, mainly by extracting the keypoints for comparison. The main difference to the traditional methods is that the proposed scheme first segments the test image into semantically independent patches prior to keypoint extraction. As a result, the copy-move regions can be detected by matching between these patches. The matching process consists of two stages. In the first stage, we find the suspicious pairs of patches that may contain copy-move forgery regions, and we roughly estimate an affine transform matrix. In the second stage, an Expectation-Maximization-based algorithm is designed to refine the estimated matrix and to confirm the existence of copy-move forgery. Experimental results prove the good performance of the proposed scheme via comparing it with the state-of-the-art schemes on the public databases.
Adaptive Fuzzy Control With Prescribed Performance for Block-Triangular-Structured Nonlinear Systems. In this paper, an adaptive fuzzy control method with prescribed performance is proposed for multi-input and multioutput block-triangular-structured nonlinear systems with immeasurable states. Fuzzy logic systems are adopted to identify the unknown nonlinear system functions. Adaptive fuzzy state observers are designed to solve the problem of unmeasured states, and a new observer-based output-feedb...
Design and Validation of a Cable-Driven Asymmetric Back Exosuit Lumbar spine injuries caused by repetitive lifting rank as the most prevalent workplace injury in the United States. While these injuries are caused by both symmetric and asymmetric lifting, asymmetric is often more damaging. Many back devices do not address asymmetry, so we present a new system called the Asymmetric Back Exosuit (ABX). The ABX addresses this important gap through unique design geometry and active cable-driven actuation. The suit allows the user to move in a wide range of lumbar trajectories while the “X” pattern cable routing allows variable assistance application for these trajectories. We also conducted a biomechanical analysis in OpenSim to map assistive cable force to effective lumbar torque assistance for a given trajectory, allowing for intuitive controller design in the lumbar joint space over the complex kinematic chain for varying lifting techniques. Human subject experiments illustrated that the ABX reduced lumbar erector spinae muscle activation during symmetric and asymmetric lifting by an average of 37.8% and 16.0%, respectively, compared to lifting without the exosuit. This result indicates the potential for our device to reduce lumbar injury risk.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0, 0, 0, 0, 0, 0, 0, 0
Enabling Extreme Fast Charging Technology for Electric Vehicles As a significant part of the next-generation smart grid, electric vehicles (EVs) are essential for most countries to achieve energy independence, secure energy supply, and alleviate the pressure on environmental protection and energy security. Although EVs have grown rapidly, the slow recharge time is still the biggest obstacle to wider application. Gasoline vehicles can pump enough gasoline in less than ten minutes to carry themselves a few hundred miles, whereas most of today's fast-charging techniques take half an hour only to provide a very limited electric driving range.
Optimizing the Deployment of Electric Vehicle Charging Stations Using Pervasive Mobility Data. With the recent advances in battery technology and the resulting decrease in the charging times, public charging stations are becoming a viable option for Electric Vehicle (EV) drivers. Concurrently, emergence and the wide-spread use of location-tracking devices in mobile phones and wearable devices has paved the way to track individual-level human movements to an unprecedented spatial and temporal grain. Motivated by these developments, we propose a novel methodology to perform data-driven optimization of EV charging station locations. We formulate the problem as a discrete optimization problem on a geographical grid, with the objective of covering the entire demand region while minimizing a measure of drivers’ total excess driving distance to reach charging stations, the related energy overhead, and the number of charging stations. Since optimally solving the problem is computationally infeasible, we present computationally efficient solutions based on the genetic algorithm. We then apply the proposed methodology to optimize EV charging stations layout in the city of Boston, starting from Call Detail Records (CDR) of one million users over the span of 4 months. The results show that the genetic algorithm provides solutions that significantly reduce drivers’ excess driving distance to charging stations, energy overhead, and the number of charging stations required compared to both a locally-optimized feasible solution and the current charging station deployment in the Boston metro area. We further investigate the robustness of the proposed methodology and show that building upon well-known regularity of aggregate human mobility patterns, the layout computed for demands based on the single day movements preserves its advantage also in later days and months. When collectively considered, the results presented in this paper indicate the potential of data-driven approaches for optimally placing public charging facilities at urban scale.
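A stripped-down genetic-algorithm sketch of the siting idea is shown below: a bitstring marks which candidate cells host a station, and the fitness trades drivers' distance to the nearest station against station count. The toy demand points, penalty weight and GA settings are assumptions, not the paper's CDR-derived data or exact objective.

```python
import numpy as np

rng = np.random.default_rng(1)
demand = rng.uniform(0, 10, size=(200, 2))            # toy demand locations
sites = rng.uniform(0, 10, size=(30, 2))              # candidate station cells

def fitness(mask):
    chosen = sites[mask.astype(bool)]
    if len(chosen) == 0:
        return 1e9                                    # infeasible: nothing deployed
    # excess driving distance: distance from each demand point to its nearest station
    d = np.linalg.norm(demand[:, None, :] - chosen[None, :, :], axis=2).min(axis=1)
    return d.mean() + 0.2 * mask.sum()                # distance vs. station-count trade-off

def ga(pop_size=40, gens=100, p_mut=0.02):
    pop = rng.integers(0, 2, size=(pop_size, len(sites)))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in pop])
        pop = pop[np.argsort(scores)]                 # rank-based survival
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, len(sites))         # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(len(sites)) < p_mut     # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmin(scores)], scores.min()

best, score = ga()
print(int(best.sum()), "stations, fitness", round(score, 3))
```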
Optimal Planning Of Pev Charging Station With Single Output Multiple Cables Charging Spots Coordinated charging can alter the profile of plug-in electric vehicle charging load and reduce the required amount of charging spots by encouraging customers to use charging spots at off-peak hours. Therefore, real-time coordinated charging should be considered at the planning stage. To enhance charging station's utilization and save corresponding investment costs by incorporating coordinated charging, a new charging spot model, namely single output multiple cables charging spot (SOMC spot), is designed in this paper. A two-stage stochastic programming model is developed for planning a public parking lot charging station equipped with SOMC spots. The first stage of the programming model is planning of SOMC spots and its objective is to obtain an optimal configuration of the charging station to minimize the station's equivalent annual costs, including investment and operation costs. The second stage of the programming model involves a probabilistic simulation procedure, in which coordinated charging is simulated, so that the influence of coordinated charging on the planning is considered. A case study of a residential parking lot charging station verifies the effectiveness of the proposed planning model. And the proposed coordinated charging for SOMC spots shows great potential in saving equivalent annual costs for providing charging services.
Optimal Electric Vehicle Fast Charging Station Placement Based on Game Theoretical Framework. To reduce the air pollution and improve the energy efficiency, many countries and cities (e.g., Singapore) are on the way of introducing electric vehicles (EVs) to replace the vehicles serving in current traffic system. Effective placement of charging stations is essential for the rapid development of EVs, because it is necessary for providing convenience for EVs and ensuring the efficiency of the...
Optimal sizing of PEV fast charging stations with Markovian demand characterization Fast charging stations are critical infrastructures to enable high penetration of plug-in electric vehicles (PEVs) into future distribution networks. They need to be carefully planned to meet charging demand as well as ensure economic benefits. Accurate estimation of PEV charging demand is the prerequisite of such planning, but a nontrivial task. This paper addresses the sizing (number of chargers...
Time-Efficient Target Tags Information Collection in Large-Scale RFID Systems By integrating micro-sensors on RFID tags to obtain environment information, the sensor-augmented RFID system greatly supports applications that are sensitive to the environment. To quickly collect the information from all tags, many researchers have focused on carefully arranging tag reply orders to avoid signal collisions. Compared to collecting from all tags, collecting information from a part of tag...
A survey on ear biometrics Recognizing people by their ear has recently received significant attention in the literature. Several reasons account for this trend: first, ear recognition does not suffer from some problems associated with other non-contact biometrics, such as face recognition; second, it is the most promising candidate for combination with the face in the context of multi-pose face recognition; and third, the ear can be used for human recognition in surveillance videos where the face may be occluded completely or in part. Further, the ear appears to degrade little with age. Even though current ear detection and recognition systems have reached a certain level of maturity, their success is limited to controlled indoor conditions. In addition to variation in illumination, other open research problems include hair occlusion, earprint forensics, ear symmetry, ear classification, and ear individuality. This article provides a detailed survey of research conducted in ear detection and recognition. It provides an up-to-date review of the existing literature revealing the current state-of-art for not only those who are working in this area but also for those who might exploit this new approach. Furthermore, it offers insights into some unsolved ear recognition problems as well as ear databases available for researchers.
DeepFace: Closing the Gap to Human-Level Performance in Face Verification In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4,000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35% on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27%, closely approaching human-level performance.
Markov games as a framework for multi-agent reinforcement learning In the Markov decision process (MDP) formalization of reinforcement learning, a single adaptive agent interacts with an environment defined by a probabilistic transition function. In this solipsistic view, secondary agents can only be part of the environment and are therefore fixed in their behavior. The framework of Markov games allows us to widen this view to include multiple adaptive agents with interacting or competing goals. This paper considers a step in this direction in which exactly two agents with diametrically opposed goals share an environment. It describes a Q-learning-like algorithm for finding optimal policies and demonstrates its application to a simple two-player game in which the optimal policy is probabilistic.
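A simplified minimax Q-learning update for a two-player zero-sum Markov game is sketched below; for brevity the state value is a max-min over pure actions, whereas the algorithm described above optimizes over mixed strategies (via a linear program), and the toy environment is made up.

```python
import numpy as np
import random

n_states, n_a, n_o = 3, 2, 2                 # states, agent actions, opponent actions
Q = np.zeros((n_states, n_a, n_o))
alpha, gamma = 0.1, 0.9

def step(s, a, o):
    """Toy environment: random next state, reward favors matching the opponent."""
    reward = 1.0 if a == o else -1.0
    return reward, random.randrange(n_states)

def state_value(s):
    # max over own pure actions of the worst case over opponent actions
    return Q[s].min(axis=1).max()

s = 0
for _ in range(20000):
    a = random.randrange(n_a)                # exploratory action choices
    o = random.randrange(n_o)
    r, s_next = step(s, a, o)
    target = r + gamma * state_value(s_next)
    Q[s, a, o] += alpha * (target - Q[s, a, o])
    s = s_next

print(Q.round(2))
```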
Pors: proofs of retrievability for large files In this paper, we define and explore proofs of retrievability (PORs). A POR scheme enables an archive or back-up service (prover) to produce a concise proof that a user (verifier) can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety. A POR may be viewed as a kind of cryptographic proof of knowledge (POK), but one specially designed to handle a large file (or bitstring) F. We explore POR protocols here in which the communication costs, number of memory accesses for the prover, and storage requirements of the user (verifier) are small parameters essentially independent of the length of F. In addition to proposing new, practical POR constructions, we explore implementation considerations and optimizations that bear on previously explored, related schemes. In a POR, unlike a POK, neither the prover nor the verifier need actually have knowledge of F. PORs give rise to a new and unusual security definition whose formulation is another contribution of our work. We view PORs as an important tool for semi-trusted online archives. Existing cryptographic techniques help users ensure the privacy and integrity of files they retrieve. It is also natural, however, for users to want to verify that archives do not delete or modify files prior to retrieval. The goal of a POR is to accomplish these checks without users having to download the files themselves. A POR can also provide quality-of-service guarantees, i.e., show that a file is retrievable within a certain time bound.
On controller initialization in multivariable switching systems We consider a class of switched systems which consists of a linear MIMO and possibly unstable process in feedback interconnection with a multicontroller whose dynamics switch. It is shown how one can achieve significantly better transient performance by selecting the initial condition for every controller when it is inserted into the feedback loop. This initialization is obtained by performing the minimization of a quadratic cost function of the tracking error, controlled output, and control signal. We guarantee input-to-state stability of the closed-loop system when the average number of switches per unit of time is smaller than a specific value. If this is not the case then stability can still be achieved by adding a mild constraint to the optimization. We illustrate the use of our results in the control of a flexible beam actuated in torque. This system is unstable with two poles at the origin and contains several lightly damped modes, which can be easily excited by controller switching.
Completely Pinpointing the Missing RFID Tags in a Time-Efficient Way Radio Frequency Identification (RFID) technology has been widely used in inventory management in many scenarios, e.g., warehouses, retail stores, hospitals, etc. This paper investigates a challenging problem of complete identification of missing tags in large-scale RFID systems. Although this problem has attracted extensive attention from academy and industry, the existing work can hardly satisfy the stringent real-time requirements. In this paper, a Slot Filter-based Missing Tag Identification (SFMTI) protocol is proposed to reconcile some expected collision slots into singleton slots and filter out the expected empty slots as well as the unreconcilable collision slots, thereby achieving the improved time-efficiency. The theoretical analysis is conducted to minimize the execution time of the proposed SFMTI. We then propose a cost-effective method to extend SFMTI to the multi-reader scenarios. The extensive simulation experiments and performance results demonstrate that the proposed SFMTI protocol outperforms the most promising Iterative ID-free Protocol (IIP) by reducing nearly 45% of the required execution time, and is just within a factor of 1.18 from the lower bound of the minimum execution time.
An indoor localization solution using Bluetooth RSSI and multiple sensors on a smartphone. In this paper, we propose an indoor positioning system using a Bluetooth receiver, an accelerometer, a magnetic field sensor, and a barometer on a smartphone. The Bluetooth receiver is used to estimate distances from beacons. The accelerometer and magnetic field sensor are used to trace the movement of moving people in the given space. The horizontal location of the person is determined by received signal strength indications (RSSIs) and the traced movement. The barometer is used to measure the vertical position where a person is located. By combining RSSIs, the traced movement, and the vertical position, the proposed system estimates the indoor position of moving people. In experiments, the proposed approach showed excellent performance in localization with an overall error of 4.8%.
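Two of the basic conversions such a system relies on can be sketched directly: a log-distance path-loss model mapping RSSI to a beacon distance, and the barometric formula mapping pressure to a height estimate. The constants (transmit power at 1 m, path-loss exponent, reference pressure) are typical values, not the paper's calibration.

```python
def rssi_to_distance(rssi_dbm, tx_power_1m=-59.0, n=2.0):
    """Log-distance path loss: d = 10 ** ((P_1m - RSSI) / (10 * n))."""
    return 10 ** ((tx_power_1m - rssi_dbm) / (10.0 * n))

def pressure_to_height(p_hpa, p_ref_hpa=1013.25):
    """Approximate altitude difference (m) from barometric pressure."""
    return 44330.0 * (1.0 - (p_hpa / p_ref_hpa) ** (1.0 / 5.255))

print(round(rssi_to_distance(-75.0), 2), "m from the beacon")
print(round(pressure_to_height(1012.0), 1), "m above the reference level")
```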
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0
Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns This paper presents a theoretically very simple, yet efficient, multiresolution approach to gray-scale and rotation invariant texture classification based on local binary patterns and nonparametric discrimination of sample and prototype distributions. The method is based on recognizing that certain local binary patterns, termed "uniform" are fundamental properties of local image texture and their occurrence histogram is proven to be a very powerful texture feature. We derive a generalized gray-scale and rotation invariant operator presentation that allows for detecting the "uniform" patterns for any quantization of the angular space and for any spatial resolution and presents a method for combining multiple operators for multiresolution analysis. The proposed approach is very robust in terms of gray-scale variations since the operator is, by definition, invariant against any monotonic transformation of the gray scale. Another advantage is computational simplicity as the operator can be realized with a few operations in a small neighborhood and a lookup table. Excellent experimental results obtained in true problems of rotation invariance, where the classifier is trained at one particular rotation angle and tested with samples from other rotation angles, demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation invariant local binary patterns. These operators characterize the spatial configuration of local image texture and the performance can be further improved by combining them with rotation invariant variance measures that characterize the contrast of local image texture. The joint distributions of these orthogonal measures are shown to be very powerful tools for rotation invariant texture analysis.
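The rotation-invariant uniform code for one pixel (P = 8, R = 1) can be sketched as below; the sampling order and the >= comparison at ties are implementation assumptions.

```python
import numpy as np

def lbp_riu2(patch):
    """patch: 3x3 array; returns the LBP^riu2_{8,1} code of the center pixel."""
    center = patch[1, 1]
    # 8 circular neighbors in counter-clockwise order
    neighbors = np.array([patch[1, 2], patch[0, 2], patch[0, 1], patch[0, 0],
                          patch[1, 0], patch[2, 0], patch[2, 1], patch[2, 2]])
    bits = (neighbors >= center).astype(int)
    # U value: number of 0/1 transitions along the circular pattern
    transitions = int(np.sum(bits != np.roll(bits, 1)))
    if transitions <= 2:          # "uniform" pattern: code is the number of ones
        return int(bits.sum())
    return 9                      # all non-uniform patterns share a single label (P + 1)

example = np.array([[10, 20, 30],
                    [10, 15, 30],
                    [10, 20, 30]])
print(lbp_riu2(example))          # 5: a uniform pattern with five neighbors >= center
```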
Face morphing versus face averaging: Vulnerability and detection The Face Recognition System (FRS) is known to be vulnerable to attacks using morphed faces. As the use of face characteristics is mandatory in the electronic passport (ePass), morphing attacks have raised potential concerns in border security. In this paper, we analyze the vulnerability of the FRS to a new attack performed using the averaged face. The averaged face is generated by simple pixel-level averaging of two face images corresponding to two different subjects. We benchmark the vulnerability of a commercial FRS to both conventional morphing and averaging-based face attacks. We further propose a novel algorithm based on the collaborative representation of micro-texture features extracted from the colour space to reliably detect both morphed and averaged face attacks on the FRS. Extensive experiments are carried out on a newly constructed morphed and averaged face image database with 163 subjects. The database is built by considering the real-life scenario of passport issuance, which typically accepts a printed passport photo from the applicant that is further scanned and stored in the ePass. Thus, the newly constructed database contains print-scanned bonafide, morphed and averaged face samples. The obtained results demonstrate the improved performance of the proposed scheme on the print-scanned morphed and averaged face database.
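The averaging attack itself is only a per-pixel mean of two aligned photos, as the short sketch below illustrates; the file names are placeholders, and real pipelines would also align facial landmarks before averaging.

```python
import numpy as np
from PIL import Image

def average_faces(path_a, path_b):
    """Return the pixel-level average of two pre-aligned, same-size face images."""
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float32)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float32)
    assert a.shape == b.shape, "images must be pre-aligned to the same size"
    return Image.fromarray(((a + b) / 2.0).astype(np.uint8))

# average_faces("subject1.png", "subject2.png").save("averaged.png")  # hypothetical paths
```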
Lambertian Reflectance and Linear Subspaces We prove that the set of all Lambertian reflectance functions (the mapping from surface normals to intensities) obtained with arbitrary distant light sources lies close to a 9D linear subspace. This implies that, in general, the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace, explaining prior empirical results. We also provide a simple analytic characterization of this linear space. We obtain these results by representing lighting using spherical harmonics and describing the effects of Lambertian materials as the analog of a convolution. These results allow us to construct algorithms for object recognition based on linear methods as well as algorithms that use convex optimization to enforce nonnegative lighting functions. We also show a simple way to enforce nonnegative lighting when the images of an object lie near a 4D linear space. We apply these algorithms to perform face recognition by finding the 3D model that best matches a 2D query image.
The FERET Evaluation Methodology for Face-Recognition Algorithms Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems. The Face Recognition Technology (FERET) program has addressed both issues through the FERET database of facial images and the establishment of the FERET tests. To date, 14,126 images from 1,199 individuals are included in the FERET database, which is divided into development and sequestered portions of the database. In September 1996, the FERET program administered the third in a series of FERET face-recognition tests. The primary objectives of the third test were to 1) assess the state of the art, 2) identify future areas of research, and 3) measure algorithm performance.
Pattern Extraction Methods for Ear Biometrics - A Survey The human ear is a relatively new class of stable biometrics that has recently drawn researchers' attention. The ear is well suited to passive person identification, which can be applied to provide security in public places. In this article, we overview pattern extraction algorithms proposed for 2D and 3D ear images.
An efficient ear recognition technique invariant to illumination and pose This paper presents an efficient ear recognition technique which derives benefits from the local features of the ear and attempts to handle the problems due to pose, poor contrast, change in illumination and lack of registration. It uses (1) three image enhancement techniques in parallel to neutralize the effect of poor contrast, noise and illumination, and (2) a local feature extraction technique (SURF) on enhanced images to minimize the effect of pose variations and poor image registration. SURF feature extraction is carried out on enhanced images to obtain three sets of local features, one for each enhanced image. Three nearest neighbor classifiers are trained on these three sets of features. Matching scores generated by all three classifiers are fused for the final decision. The technique has been evaluated on two public databases, namely the IIT Kanpur ear database and the University of Notre Dame ear database (Collection E). Experimental results confirm that the use of the proposed fusion significantly improves the recognition accuracy.
A human ear recognition method using nonlinear curvelet feature subspace The ear is a relatively new biometric. Many methods have been used for ear recognition to improve the performance of ear recognition systems. In continuation of these efforts, we propose a new ear recognition method based on the curvelet transform. Features of the ear are computed by applying the Fast Discrete Curvelet Transform via the wrapping technique. The feature vector of each image is composed of an approximate curvelet coefficient and second-coarsest-level curvelet coefficients at eight different angles. k-NN (k-nearest neighbour) is utilized as the classifier. The proposed method is evaluated on two ear databases from IIT Delhi. Recognition rates achieved using the proposed method on these publicly available ear databases reach up to 97.77%, which shows encouraging performance.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
TripRes: Traffic Flow Prediction Driven Resource Reservation for Multimedia IoV with Edge Computing The Internet of Vehicles (IoV) connects vehicles, roadside units (RSUs) and other intelligent objects, enabling data sharing among them, thereby improving the efficiency of urban traffic and safety. Currently, collections of multimedia content, generated by multimedia surveillance equipment, vehicles, and so on, are transmitted to edge servers for implementation, because edge computing is a formidable paradigm for accommodating multimedia services with low-latency resource provisioning. However, the uneven or discrete distribution of the traffic flow covered by edge servers negatively affects the service performance (e.g., overload and underload) of edge servers in multimedia IoV systems. Therefore, how to accurately schedule and dynamically reserve proper numbers of resources for multimedia services in edge servers is still challenging. To address this challenge, a traffic flow prediction driven resource reservation method, called TripRes, is developed in this article. Specifically, the city map is divided into different regions, and the edge servers in a region are treated as a “big edge server” to simplify the complex distribution of edge servers. Then, future traffic flows are predicted using the deep spatiotemporal residual network (ST-ResNet), and future traffic flows are used to estimate the amount of multimedia services each region needs to offload to the edge servers. With the number of services to be offloaded in each region, their offloading destinations are determined through latency-sensitive transmission path selection. Finally, the performance of TripRes is evaluated using real-world big data with over 100M multimedia surveillance records from RSUs in Nanjing, China.
Visual cryptography for general access structures A visual cryptography scheme for a set P of n participants is a method of encoding a secret image SI into n shadow images called shares, where each participant in P receives one share. Certain qualified subsets of participants can “visually” recover the secret image, but other, forbidden, sets of participants have no information (in an information-theoretic sense) on SI. A “visual” recovery for a set X ⊆ P consists of xeroxing the shares given to the participants in X onto transparencies, and then stacking them. The participants in a qualified set X will be able to see the secret image without any knowledge of cryptography and without performing any cryptographic computation. In this paper we propose two techniques for constructing visual cryptography schemes for general access structures. We analyze the structure of visual cryptography schemes and we prove bounds on the size of the shares distributed to the participants in the scheme. We provide a novel technique for realizing k out of n threshold visual cryptography schemes. Our construction for k out of n visual cryptography schemes is better with respect to pixel expansion than the one proposed by M. Naor and A. Shamir (Visual cryptography, in “Advances in Cryptology—Eurocrypt '94” (A. De Santis, Ed.), Lecture Notes in Computer Science, Vol. 950, pp. 1–12, Springer-Verlag, Berlin, 1995) and for the case of 2 out of n is the best possible. Finally, we consider graph-based access structures, i.e., access structures in which any qualified set of participants contains at least an edge of a given graph whose vertices represent the participants of the scheme.
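For intuition, the classic 2-out-of-2 construction (the simplest access structure, with pixel expansion 2) is sketched below; the paper itself treats general access structures, so this is only the base case.

```python
import random

def share_pixel(secret_bit):
    """Return the 2-subpixel blocks for share 1 and share 2 of one secret pixel."""
    a = random.choice([(0, 1), (1, 0)])              # random complementary pattern
    if secret_bit == 0:                              # white pixel: identical blocks
        return a, a
    return a, (1 - a[0], 1 - a[1])                   # black pixel: complementary blocks

def stack(b1, b2):
    return tuple(x | y for x, y in zip(b1, b2))      # stacking transparencies = OR

for bit in (0, 1):
    s1, s2 = share_pixel(bit)
    print(f"secret={bit}  share1={s1}  share2={s2}  stacked={stack(s1, s2)}")
# white pixels stack to one dark subpixel, black pixels to two: visually distinguishable
```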
Teaching-Learning-Based Optimization: An optimization method for continuous non-linear large scale problems An efficient optimization method called 'Teaching-Learning-Based Optimization (TLBO)' is proposed in this paper for large scale non-linear optimization problems for finding the global solutions. The proposed method is based on the effect of the influence of a teacher on the output of learners in a class. The basic philosophy of the method is explained in detail. The effectiveness of the method is tested on many benchmark problems with different characteristics and the results are compared with other population based methods.
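A compact sketch of TLBO's teacher and learner phases on the sphere function is given below; the population size, bounds and iteration count are illustrative assumptions.

```python
import numpy as np

def tlbo_minimize(f, dim=5, pop_size=20, iters=200, bounds=(-5.0, 5.0)):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([f(x) for x in pop])
    for _ in range(iters):
        # Teacher phase: move learners towards the best solution, away from the mean
        teacher = pop[np.argmin(cost)]
        mean = pop.mean(axis=0)
        Tf = rng.integers(1, 3)                          # teaching factor in {1, 2}
        for i in range(pop_size):
            cand = np.clip(pop[i] + rng.random(dim) * (teacher - Tf * mean), lo, hi)
            c = f(cand)
            if c < cost[i]:
                pop[i], cost[i] = cand, c
        # Learner phase: learn from a randomly chosen peer
        for i in range(pop_size):
            j = rng.integers(pop_size)
            if j == i:
                continue
            direction = pop[j] - pop[i] if cost[j] < cost[i] else pop[i] - pop[j]
            cand = np.clip(pop[i] + rng.random(dim) * direction, lo, hi)
            c = f(cand)
            if c < cost[i]:
                pop[i], cost[i] = cand, c
    best = np.argmin(cost)
    return pop[best], cost[best]

print(tlbo_minimize(lambda x: float(np.sum(x ** 2))))
```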
Simultaneous wireless information and power transfer in modern communication systems Energy harvesting for wireless communication networks is a new paradigm that allows terminals to recharge their batteries from external energy sources in the surrounding environment. A promising energy harvesting technology is wireless power transfer where terminals harvest energy from electromagnetic radiation. Thereby, the energy may be harvested opportunistically from ambient electromagnetic sources or from sources that intentionally transmit electromagnetic energy for energy harvesting purposes. A particularly interesting and challenging scenario arises when sources perform simultaneous wireless information and power transfer (SWIPT), as strong signals not only increase power transfer but also interference. This article provides an overview of SWIPT systems with a particular focus on the hardware realization of rectenna circuits and practical techniques that achieve SWIPT in the domains of time, power, antennas, and space. The article also discusses the benefits of a potential integration of SWIPT technologies in modern communication networks in the context of resource allocation and cooperative cognitive radio networks.
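A power-splitting receiver, one of the SWIPT techniques mentioned above, can be sketched with a few lines of arithmetic: a fraction rho of the received power is harvested and the remainder is used for information decoding. The link budget and conversion efficiency are illustrative assumptions.

```python
from math import log2

def swipt_power_splitting(p_rx_w, rho, noise_w=1e-9, eta=0.5, bandwidth_hz=1e6):
    harvested_w = eta * rho * p_rx_w                        # energy-harvesting branch
    snr = (1 - rho) * p_rx_w / noise_w                      # information-decoding branch
    rate_bps = bandwidth_hz * log2(1 + snr)
    return harvested_w, rate_bps

for rho in (0.2, 0.5, 0.8):
    e, r = swipt_power_splitting(p_rx_w=1e-6, rho=rho)
    print(f"rho={rho}: harvested={e * 1e6:.2f} uW, rate={r / 1e6:.2f} Mbit/s")
```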
Collective feature selection to identify crucial epistatic variants. In this study, we were able to show that selecting variables using a collective feature selection approach could help in selecting true positive epistatic variables more frequently than applying any single method for feature selection via simulation studies. We were able to demonstrate the effectiveness of collective feature selection along with a comparison of many methods in our simulation analysis. We also applied our method to identify non-linear networks associated with obesity.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores: 1.065473, 0.06, 0.030216, 0.019411, 0.007124, 0.004268, 0.001022, 0, 0, 0, 0, 0, 0, 0
Adaptive Fault-Tolerant Consensus for a Class of Uncertain Nonlinear Second-Order Multi-Agent Systems With Circuit Implementation. In this paper, a robust fault-tolerant consensus control strategy and its circuit implementation method are proposed for a class of nonlinear second-order leader-following multi-agent systems against multiple actuator faults and time-varying state/input-dependent system uncertainties. The faults of partial loss of actuator effectiveness and bias-actuators are considered without knowing eventual fa...
Design of fault diagnosis filters and fault-tolerant control for a class of nonlinear systems This paper presents a set of algorithms for fault diagnosis and fault tolerant control strategy for affine nonlinear systems subjected to an unknown time-varying fault vector. First, the design of fault diagnosis filter is performed using nonlinear observer techniques, where the system is decoupled through a nonlinear transformation and an observer is used to generate the required residual signal. By introducing an extra input to the observer, a direct estimation of the time-varying fault is obtained when the residual is controlled, by this extra input, to zero. The stability analysis of this observer is proved and some relevant sufficient conditions are obtained. Using the estimated fault vector, a fault tolerant controller is established which guarantees the stability of the closed loop system. The proposed algorithm is applied to a combined pH and consistency control system of a pilot paper machine, where simulations are performed to show the effectiveness of the proposed approach.
Distributed Tracking Control for Linear Multiagent Systems With a Leader of Bounded Unknown Input This technical note considers the distributed tracking control problem of multiagent systems with general linear dynamics and a leader whose control input is nonzero and not available to any follower. Based on the relative states of neighboring agents, two distributed discontinuous controllers with, respectively, static and adaptive coupling gains, are designed for each follower to ensure that the states of the followers converge to the state of the leader, if the interaction graph among the followers is undirected, the leader has directed paths to all followers, and the leader's control input is bounded. A sufficient condition for the existence of the distributed controllers is that each agent is stabilizable. Simulation examples are given to illustrate the theoretical results.
Adaptive dynamic surface control of a class of nonlinear systems with unknown direction control gains and input saturation. In this paper, adaptive neural network based dynamic surface control (DSC) is developed for a class of nonlinear strict-feedback systems with unknown direction control gains and input saturation. A Gaussian error function based saturation model is employed such that the backstepping technique can be used in the control design. The explosion of complexity in traditional backstepping design is avoided by utilizing DSC. Based on backstepping combined with DSC, adaptive radial basis function neural network control is developed to guarantee that all the signals in the closed-loop system are globally bounded, and that the tracking error converges to a small neighborhood of the origin by appropriately choosing design parameters. Simulation results demonstrate the effectiveness of the proposed approach, and good performance is guaranteed even when both the saturation constraints and the wrong control direction are present.
Consensus Tracking Control of Uncertain Multiagent Systems With Sampled Data and Time-Varying Delay In this article, the adaptive consensus tracking control is developed for uncertain multiagent systems with time-varying state delay in the case that leader’s state is accessible at sampling instants. By proposing a distributed sampled observer with hybrid form, adaptive tracking controller with the complementary term is designed for first-order multiagent systems, and then is extended to high-ord...
Practical output synchronization for asynchronously switched multi-agent systems with adaption to fast-switching perturbations The asynchronously switched multi-agent systems comprising switched agents of different dynamics and switching signals are considered under arbitrarily switching communication topologies. The practical output synchronization problem is studied for such systems due to the heterogeneity brought by both the dynamics and the switchings of agents. A switching-dependent controller with an embedded virtual reference system is proposed for each agent. The original problem is then converted into tracking problems between each agent and its reference system. The analysis of the resultant tracking error systems involves the analysis of switched systems with bounded but non-attenuating state impulses. By satisfying sufficient conditions featuring the average dwell time (ADT) and the newly proposed piecewise ADT, the practical output synchronization can be achieved and the ultimate bound of the output errors can also be obtained for the considered systems. Furthermore, a realistic case where the agent switching signals undergo adverse fast-switching perturbations is studied. The perturbations may potentially invalidate the “slow-switching” based method. A regulation strategy is thus developed for each agent to render it adaptive to such adversity. A payload transport task is taken as the practical example to illustrate the effectiveness of the proposed method and the adaption strategy.
Command Filtered Adaptive Backstepping Implementation of adaptive backstepping controllers requires analytic calculation of the partial derivatives of certain stabilizing functions. It is well documented that, as the order of a nonlinear system increases, analytic calculation of these derivatives becomes prohibitive. Therefore, in practice, either alternative control approaches are used or the derivatives are neglected in the implementation. Neglecting the derivatives results in the loss of all guarantees proven by Lyapunov methods for the adaptive backstepping approach and may result in instability. This paper presents a new implementation approach for adaptive backstepping control. The main objectives are to facilitate the derivation and implementation of the adaptive backstepping approach, with performance guarantees proven by Lyapunov methods, for applications that were prohibitively difficult using the standard analytic implementation approach. The new approach uses filtering methods to produce certain command signals and their derivatives which eliminates the requirement of analytic differentiation. The approach also introduces filters to generate certain compensating signals necessary to compute compensated tracking errors suitable for adaptive parameter estimation. We present a set of Lemmas and Theorems to analyze the performance both during the initialization and the operating phases. We show that the initialization phase is of finite duration that can be controlled by selection of a design parameter. We also show that all signals within the system are bounded during this short initialization phase. During the operating phase, we show that the command filtered implementation approach has theoretical properties identical to those of the conventional approach. The general approach is presented and analyzed for systems in generalized parameter strict feedback form. Extensions of the approach are presented to demonstrate the application of the method to a land vehicle trajectory following application. Application and effectiveness of the proposed method are shown by simulation results.
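As one illustration of the filtering idea, the sketch below pushes a raw virtual control through a second-order linear filter to obtain a smoothed command signal and its derivative without analytic differentiation; the filter form, gains, and the omission of magnitude/rate limits are assumptions, not the paper's exact construction:

```python
import numpy as np

def command_filter(raw_command, dt, wn=50.0, zeta=0.9):
    """Second-order command filter: returns a smoothed version of the raw
    (virtual) control together with its time derivative, which is what the
    backstepping recursion needs at the next step."""
    q, dq = raw_command[0], 0.0             # filter states
    filtered, derivative = [], []
    for u in raw_command:
        ddq = wn ** 2 * (u - q) - 2.0 * zeta * wn * dq
        q, dq = q + dt * dq, dq + dt * ddq  # explicit Euler step
        filtered.append(q)
        derivative.append(dq)
    return np.array(filtered), np.array(derivative)

# Example: filter a step-like virtual control; dcmd approximates d(cmd)/dt.
t = np.arange(0.0, 1.0, 1e-3)
raw = np.where(t > 0.2, 1.0, 0.0)
cmd, dcmd = command_filter(raw, dt=1e-3)
```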
On the ratio of optimal integral and fractional covers It is shown that the ratio of optimal integral and fractional covers of a hypergraph does not exceed 1 + log d, where d is the maximum degree. This theorem may replace probabilistic methods in certain circumstances. Several applications are shown.
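For intuition, the greedy covering heuristic below is the standard constructive way such logarithmic gaps between integral and fractional covers are exhibited; it is only an illustration of the covering notion, not the paper's proof:

```python
def greedy_cover(universe, sets):
    """Greedy set cover: repeatedly pick the set covering the most uncovered
    elements.  The classical analysis bounds the greedy (integral) cover by
    roughly (1 + ln d) times the optimal fractional cover, matching the
    ratio discussed above."""
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("instance is not coverable")
        chosen.append(best)
        uncovered -= best
    return chosen

print(greedy_cover(range(1, 8), [{1, 2, 3, 4}, {3, 4, 5}, {5, 6, 7}, {1, 6}, {2, 7}]))
```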
Task Offloading in Vehicular Edge Computing Networks: A Load-Balancing Solution Recently, the rapid advance of vehicular networks has led to the emergence of diverse delay-sensitive vehicular applications such as automatic driving, auto navigation. Note that existing resource-constrained vehicles cannot adequately meet these demands on low / ultra-low latency. By offloading parts of the vehicles’ compute-intensive tasks to the edge servers in proximity, mobile edge computing is envisioned as a promising paradigm, giving rise to the vehicular edge computing networks (VECNs). However, most existing works on task offloading in VECNs did not take the load balancing of the computation resources at the edge servers into account. To address these issues and given the high dynamics of vehicular networks, we introduce fiber-wireless (FiWi) technology to enhance VECNs, due to its advantages on centralized network management and supporting multiple communication techniques. Aiming to minimize the processing delay of the vehicles’ computation tasks, we propose a software-defined networking (SDN) based load-balancing task offloading scheme in FiWi enhanced VECNs, where SDN is introduced to provide supports for the centralized network and vehicle information management. Extensive analysis and numerical results corroborate that our proposed load-balancing scheme can achieve superior performance on processing delay reduction by utilizing the edge servers’ computation resources more efficiently.
Visual cryptography for general access structures A visual cryptography scheme for a set P of n participants is a method of encoding a secret image SI into n shadow images called shares, where each participant in P receives one share. Certain qualified subsets of participants can “visually” recover the secret image, but other, forbidden, sets of participants have no information (in an information-theoretic sense) on SI . A “visual” recovery for a set X ⊆ P consists of xeroxing the shares given to the participants in X onto transparencies, and then stacking them. The participants in a qualified set X will be able to see the secret image without any knowledge of cryptography and without performing any cryptographic computation. In this paper we propose two techniques for constructing visual cryptography schemes for general access structures. We analyze the structure of visual cryptography schemes and we prove bounds on the size of the shares distributed to the participants in the scheme. We provide a novel technique for realizing k out of n threshold visual cryptography schemes. Our construction for k out of n visual cryptography schemes is better with respect to pixel expansion than the one proposed by M. Naor and A. Shamir (Visual cryptography, in “Advances in Cryptology—Eurocrypt '94” CA. De Santis, Ed.), Lecture Notes in Computer Science, Vol. 950, pp. 1–12, Springer-Verlag, Berlin, 1995) and for the case of 2 out of n is the best possible. Finally, we consider graph-based access structures, i.e., access structures in which any qualified set of participants contains at least an edge of a given graph whose vertices represent the participants of the scheme.
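A minimal (2, 2)-threshold sketch (pixel expansion 2, one row of pixels) shows the basic share-and-stack mechanism; the general access-structure constructions and the improved k-out-of-n schemes of the paper are considerably more involved:

```python
import random

PATTERNS = [(0, 1), (1, 0)]   # 1 = black subpixel, 0 = transparent subpixel

def make_shares(secret_row):
    """secret_row: list of 0 (white) / 1 (black) pixels.  Each pixel expands
    into two subpixels per share.  A single share looks like random noise;
    only stacking both shares reveals the secret."""
    share1, share2 = [], []
    for pixel in secret_row:
        p = random.choice(PATTERNS)
        share1.extend(p)
        # white pixel: identical patterns; black pixel: complementary patterns.
        share2.extend(p if pixel == 0 else (1 - p[0], 1 - p[1]))
    return share1, share2

def stack(share1, share2):
    """Stacking transparencies = pixel-wise OR of the black subpixels."""
    return [a | b for a, b in zip(share1, share2)]

s1, s2 = make_shares([0, 1, 1, 0, 1])
print(stack(s1, s2))   # black pixels become fully black, white ones half black
```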
Design and simulation of a joint-coupled orthosis for regulating FES-aided gait A hybrid functional electrical stimulation (FES)/orthosis system is being developed which combines two channels of (surface-electrode-based) electrical stimulation with a computer-controlled orthosis for the purpose of restoring gait to spinal cord injured (SCI) individuals (albeit with a stability aid, such as a walker). The orthosis is an energetically passive, controllable device which 1) unidirectionally couples hip to knee flexion; 2) aids hip and knee flexion with a spring assist; and 3) incorporates sensors and modulated friction brakes, which are used in conjunction with electrical stimulation for the feedback control of joint (and therefore limb) trajectories. This paper describes the hybrid FES approach and the design of the joint coupled orthosis. A dynamic simulation of an SCI individual using the hybrid approach is described, and results from the simulation are presented that indicate the promise of the JCO approach.
Solving the data sparsity problem in destination prediction Destination prediction is an essential task for many emerging location-based applications such as recommending sightseeing places and targeted advertising according to destinations. A common approach to destination prediction is to derive the probability of a location being the destination based on historical trajectories. However, almost all the existing techniques use various kinds of extra information such as road network, proprietary travel planner, statistics requested from government, and personal driving habits. Such extra information, in most circumstances, is unavailable or very costly to obtain. Thereby we approach the task of destination prediction by using only historical trajectory dataset. However, this approach encounters the "data sparsity problem", i.e., the available historical trajectories are far from enough to cover all possible query trajectories, which considerably limits the number of query trajectories that can obtain predicted destinations. We propose a novel method named Sub-Trajectory Synthesis (SubSyn) to address the data sparsity problem. SubSyn first decomposes historical trajectories into sub-trajectories comprising two adjacent locations, and then connects the sub-trajectories into "synthesised" trajectories. This process effectively expands the historical trajectory dataset to contain much more trajectories. Experiments based on real datasets show that SubSyn can predict destinations for up to ten times more query trajectories than a baseline prediction algorithm. Furthermore, the running time of the SubSyn-training algorithm is almost negligible for a large set of 1.9 million trajectories, and the SubSyn-prediction algorithm runs over two orders of magnitude faster than the baseline prediction algorithm constantly.
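A toy first-order sketch of the sub-trajectory idea: decompose histories into adjacent-location transitions, then chain ("synthesise") them to score routes that never occurred verbatim. The real SubSyn works on grid cells with Bayesian destination posteriors; the location names and the simple product of probabilities here are illustrative assumptions:

```python
from collections import defaultdict

def decompose(trajectories):
    """Count sub-trajectories made of two adjacent locations."""
    transitions = defaultdict(lambda: defaultdict(int))
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            transitions[a][b] += 1
    return transitions

def p(transitions, a, b):
    total = sum(transitions[a].values())
    return transitions[a][b] / total if total else 0.0

history = [["A", "B", "C"], ["B", "C", "D"], ["A", "B", "D"]]
t = decompose(history)
# Score a synthesised trajectory A -> B -> D even though only parts of it
# were ever observed together in the history.
print(p(t, "A", "B") * p(t, "B", "D"))
```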
Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods. We introduce a new benchmark, WinoBias, for coreference resolution focused on gender bias. Our corpus contains Winograd-schema style sentences with entities corresponding to people referred by their occupation (e.g. the nurse, the doctor, the carpenter). We demonstrate that a rule-based, a feature-rich, and a neural coreference system all link gendered pronouns to pro-stereotypical entities with higher accuracy than anti-stereotypical entities, by an average difference of 21.1 in F1 score. Finally, we demonstrate a data-augmentation approach that, in combination with existing word-embedding debiasing techniques, removes the bias demonstrated by these systems in WinoBias without significantly affecting their performance on existing coreference benchmark datasets. Our dataset and code are available at this http URL
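The data-augmentation step can be pictured as a simple pronoun-swapping pass over the training corpus; the lookup table below is a toy version (real augmentation also swaps names and handles ambiguous forms such as "her" more carefully):

```python
SWAP = {"he": "she", "she": "he", "him": "her", "his": "her",
        "her": "him", "hers": "his"}

def gender_swap(tokens):
    """Produce the anti-stereotypical counterpart of a training sentence by
    flipping gendered pronouns (a deliberately simplified sketch)."""
    return [SWAP.get(tok.lower(), tok) for tok in tokens]

sentence = "The physician hired the secretary because he was overwhelmed".split()
print(" ".join(gender_swap(sentence)))
```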
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education are highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets to compensate for the robot’s limitations; this study, however, applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. The effect of the robot’s behavioral style (social or neutral) was small overall; however, it seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
1.2
0.2
0.2
0.2
0.2
0.2
0.066667
0
0
0
0
0
0
0
Probabilistic region failure-aware data center network and content placement. Data center network (DCN) and content placement with the consideration of potential large-scale region failure is critical to minimize the DCN loss and disruptions under such catastrophic scenario. This paper considers the optimal placement of DCN and content for DCN failure probability minimization against a region failure. Given a network for DCN placement, a general probabilistic region failure model is adopted to capture the key features of a region failure and to determine the failure probability of a node/link in the network under the region failure. We then propose a general grid partition-based scheme to flexibly define the global nonuniform distribution of potential region failure in terms of its occurring probability and intensity. Such grid partition scheme also helps us to evaluate the vulnerability of a given network under a region failure and thus to create a "vulnerability map" for DCN and content placement in the network. With the help of the "vulnerability map", we further develop an integer linear program (ILP)-based theoretical framework to identify the optimal placement of DCN and content, which leads to the minimum DCN failure probability against a region failure. A heuristic is also suggested to make the overall placement problem more scalable for large-scale networks. Finally, an example and extensive numerical results are provided to illustrate the proposed DCN and content placement.
All quiet on the internet front? With the proliferation and increasing dependence of many services and applications on the Internet, this network has become a vital societal asset. This creates the need to protect this critical infrastructure, and over the past years a variety of resilience schemes have been proposed. The effectiveness of protection schemes, however, highly depends on the causes and circumstances of Internet fail...
Assessing network vulnerability under probabilistic region failure model The mission critical network infrastructures are facing potential large region threats, both intentional (like EMP attack, bomb explosion) and natural (like earthquake, flooding). The available research on region failure related vulnerability studies generally adopt a kind of simple “deterministic” region failure models, which can not capture some important features of real region failure scenarios, where a network component in the region only fails with certain probability, and more importantly, such failure probability tends to vary with both its dimension and its distance to failure center. In this paper, we provide a more general “probabilistic” region failure model to capture the key features of a region failure and apply it for the network vulnerability assessment. To facilitate such assessment, we adopt a grid partition-based scheme to estimate various statistical network metrics under a random region failure. A theoretical framework is also established to determine a suitable grid partition such that a specified estimation error requirement is satisfied. The grid partition technique is also useful for identifying the vulnerable zones of a network, which can guide network designers to initiate proper network protection against such failures. The work in this paper helps us more deeply understand the network vulnerability behavior under region failure and facilitates the design and maintenance of future highly survivable mission critical networks.
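Conceptually, the grid-partition assessment reduces to summing, over grid cells, the probability that the failure epicentre falls in that cell times the probability that a component at that distance fails. The exponential distance-decay model and the numbers below are illustrative assumptions; the papers above allow general failure-probability profiles and intensities:

```python
import math

def node_failure_probability(node, grid_cells, decay=0.5):
    """Expected failure probability of a node under a probabilistic region
    failure: cell (cx, cy) is the epicentre with probability p_cell, and a
    node at distance d from it then fails with probability exp(-decay * d)."""
    prob = 0.0
    for (cx, cy), p_cell in grid_cells:
        d = math.hypot(node[0] - cx, node[1] - cy)
        prob += p_cell * math.exp(-decay * d)
    return prob

# Toy "vulnerability map": four cells with a non-uniform epicentre distribution.
cells = [((0, 0), 0.4), ((10, 0), 0.3), ((0, 10), 0.2), ((10, 10), 0.1)]
for node in [(1, 1), (5, 5), (9, 9)]:
    print(node, round(node_failure_probability(node, cells), 4))
```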
Finding critical regions and region-disjoint paths in a network Due to their importance to society, communication networks should be built and operated to withstand failures. However, cost considerations make network providers less inclined to take robustness measures against failures that are unlikely to manifest, like several failures coinciding simultaneously in different geographic regions of their network. Considering networks embedded in a two-dimensional plane, we study the problem of finding a critical region---a part of the network that can be enclosed by a given elementary figure of predetermined size---whose destruction would lead to the highest network disruption. We determine that only a polynomial, in the input, number of nontrivial positions for such a figure needs to be considered and propose a corresponding polynomial-time algorithm. In addition, we consider region-aware network augmentation to decrease the impact of a regional failure. We subsequently address the region-disjoint paths problem, which asks for two paths with minimum total weight between a source (s) and a destination (d) that cannot both be cut by a single regional failure of diameter D (unless that failure includes s or d). We prove that deciding whether region-disjoint paths exist is NP-hard and propose a heuristic region-disjoint paths algorithm.
The resilience of WDM networks to probabilistic geographical failures Telecommunications networks, and in particular optical WDM networks, are vulnerable to large-scale failures in their physical infrastructure, resulting from physical attacks (such as an electromagnetic pulse attack) or natural disasters (such as solar flares, earthquakes, and floods). Such events happen at specific geographical locations and disrupt specific parts of the network, but their effects cannot be determined exactly in advance. Therefore, we provide a unified framework to model network vulnerability when the event has a probabilistic nature, defined by an arbitrary probability density function. Our framework captures scenarios with a number of simultaneous attacks, when network components consist of several dependent subcomponents, and in which either a 1 + 1 or a 1:1 protection plan is in place. We use computational geometric tools to provide efficient algorithms to identify vulnerable points within the network under various metrics. Then, we obtain numerical results for specific backbone networks, demonstrating the applicability of our algorithms to real-world scenarios. Our novel approach allows to identify locations that require additional protection efforts (e.g., equipment shielding). Overall, the paper demonstrates that using computational geometric techniques can significantly contribute to our understanding of network resilience.
Elastic optical networking: a new dawn for the optical layer? Optical networks are undergoing significant changes, fueled by the exponential growth of traffic due to multimedia services and by the increased uncertainty in predicting the sources of this traffic due to the ever changing models of content providers over the Internet. The change has already begun: simple on-off modulation of signals, which was adequate for bit rates up to 10 Gb/s, has given way ...
Footprints: history-rich tools for information foraging Inspired by Hill and Hollan's original work [7], we have been developing a theory of interaction history and building tools to apply this theory to navigation in a complex information space. We have built a series of tools - map, paths, annotations and signposts - based on a physical-world navigation metaphor. These tools have been in use for over a year. Our user study involved a controlled browse task and showed that users were able to get the same amount of work done with significantly less effort.
An online mechanism for multi-unit demand and its application to plug-in hybrid electric vehicle charging We develop an online mechanism for the allocation of an expiring resource to a dynamic agent population. Each agent has a non-increasing marginal valuation function for the resource, and an upper limit on the number of units that can be allocated in any period. We propose two versions of a truthful allocation mechanism. Each modifies the decisions of a greedy online assignment algorithm by sometimes cancelling an allocation of resources. One version makes this modification immediately upon an allocation decision while a second waits until the point at which an agent departs the market. Adopting a prior-free framework, we show that the second approach has better worst-case allocative efficiency and is more scalable. On the other hand, the first approach (with immediate cancellation) may be easier in practice because it does not need to reclaim units previously allocated. We consider an application to recharging plug-in hybrid electric vehicles (PHEVs). Using data from a real-world trial of PHEVs in the UK, we demonstrate higher system performance than a fixed price system, performance comparable with a standard, but non-truthful scheduling heuristic, and the ability to support 50% more vehicles at the same fuel cost than a simple randomized policy.
IoT-U: Cellular Internet-of-Things Networks Over Unlicensed Spectrum. In this paper, we consider an uplink cellular Internet-of-Things (IoT) network, where a cellular user (CU) can serve as the mobile data aggregator for a cluster of IoT devices. To be specific, the IoT devices can either transmit the sensory data to the base station (BS) directly by cellular communications, or first aggregate the data to a CU through machine-to-machine (M2M) communications before t...
A communication robot in a shopping mall This paper reports our development of a communication robot for use in a shopping mall to provide shopping information, offer route guidance, and build rapport. In the development, the major difficulties included sensing human behaviors, conversation in a noisy daily environment, and the needs of unexpected miscellaneous knowledge in the conversation. We chose a network robot system approach, where a single robot's poor sensing capability and knowledge are supplemented by ubiquitous sensors and a human operator. The developed robot system detects a person with floor sensors to initiate interaction, identifies individuals with radio-frequency identification (RFID) tags, gives shopping information while chatting, and provides route guidance with deictic gestures. The robot was partially teleoperated to avoid the difficulty of speech recognition as well as to furnish a new kind of knowledge that only humans can flexibly provide. The information supplied by a human operator was later used to increase the robot's autonomy. For 25 days in a shopping mall, we conducted a field trial and gathered 2642 interactions. A total of 235 participants signed up to use RFID tags and, later, provided questionnaire responses. The questionnaire results are promising in terms of the visitors' perceived acceptability as well as the encouragement of their shopping activities. The results of the teleoperation analysis revealed that the amount of teleoperation gradually decreased, which is also promising.
Data-Driven Intelligent Transportation Systems: A Survey For the last two decades, intelligent transportation systems (ITS) have emerged as an efficient way of improving the performance of transportation systems, enhancing travel security, and providing more choices to travelers. A significant change in ITS in recent years is that much more data are collected from a variety of sources and can be processed into various forms for different stakeholders. The availability of a large amount of data can potentially lead to a revolution in ITS development, changing an ITS from a conventional technology-driven system into a more powerful multifunctional data-driven intelligent transportation system (D2ITS) : a system that is vision, multisource, and learning algorithm driven to optimize its performance. Furthermore, D2ITS is trending to become a privacy-aware people-centric more intelligent system. In this paper, we provide a survey on the development of D2ITS, discussing the functionality of its key components and some deployment issues associated with D2ITS Future research directions for the development of D2ITS is also presented.
Switching Stabilization for a Class of Slowly Switched Systems In this technical note, the problem of switching stabilization for slowly switched linear systems is investigated. In particular, the considered systems can be composed of all unstable subsystems. Based on the invariant subspace theory, the switching signal with mode-dependent average dwell time (MDADT) property is designed to exponentially stabilize the underlying system. Furthermore, sufficient condition of stabilization for switched systems with all stable subsystems under MDADT switching is also given. The correctness and effectiveness of the proposed approaches are illustrated by a numerical example.
Scalable and Privacy-Preserving Data Sharing Based on Blockchain. With the development of network technology and cloud computing, data sharing is becoming increasingly popular, and many scholars have conducted in-depth research to promote its flourish. As the scale of data sharing expands, its privacy protection has become a hot issue in research. Moreover, in data sharing, the data is usually maintained in multiple parties, which brings new challenges to protect the privacy of these multi-party data. In this paper, we propose a trusted data sharing scheme using blockchain. We use blockchain to prevent the shared data from being tampered, and use the Paillier cryptosystem to realize the confidentiality of the shared data. In the proposed scheme, the shared data can be traded, and the transaction information is protected by using the (p, t)-threshold Paillier cryptosystem. We conduct experiments in cloud storage scenarios and the experimental results demonstrate the efficiency and effectiveness of the proposed scheme.
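The additive homomorphism that makes Paillier attractive for such multi-party sharing can be seen in a toy, non-threshold sketch (insecure key sizes, no blockchain integration; the scheme above uses a (p, t)-threshold variant):

```python
from math import gcd
import random

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=293, q=433):                  # tiny demo primes, not secure
    n = p * q
    lam = lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)                   # valid because g = n + 1 below
    return (n, n + 1), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n * mu) % n

pk, sk = keygen()
c = (encrypt(pk, 20) * encrypt(pk, 22)) % (pk[0] ** 2)
print(decrypt(pk, sk, c))                  # 42: multiplying ciphertexts adds plaintexts
```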
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.208
0.208
0.104
0.104
0.027667
0.0032
0
0
0
0
0
0
0
0
High Capacity Adaptive Image Steganography with Cover Region Selection using Dual-Tree Complex Wavelet Transform The importance of image steganography is unquestionable in the field of secure multimedia communication. Imperceptibility and high payload capacity are some of the crucial parts of any mode of steganography. The proposed work is an attempt to modify the edge-based image steganography which provides higher payload capacity and imperceptibility by making use of machine learning techniques. The approach uses an adaptive embedding process over Dual-Tree Complex Wavelet Transform (DT-CWT) subband coefficients. Machine learning based optimization techniques are employed here to embed the secret data over optimal cover-image-blocks with minimal retrieval error. The embedding process will create a unique secret key which is imperative for the retrieval of data and need to be transmitted to the receiver side via a secure channel. This enhances the security concerns and avoids data hacking by intruders. The algorithm performance is evaluated with standard benchmark parameters like PSNR, SSIM, CF, Retrieval error, BPP and Histogram. The results of the proposed method show the stego-image with PSNR above 50 dB even with a dense embedding of up to 7.87 BPP. This clearly indicates that the proposed work surpasses the state-of-the-art image steganographic systems significantly.
Geometric attacks on image watermarking systems Synchronization errors can lead to significant performance loss in image watermarking methods, as the geometric attacks in the Stirmark benchmark software show. The authors describe the most common types of geometric attacks and survey proposed solutions.
Genetic Optimization Of Radial Basis Probabilistic Neural Networks This paper discusses using genetic algorithms (GA) to optimize the structure of radial basis probabilistic neural networks (RBPNN), including how to select hidden centers of the first hidden layer and to determine the controlling parameter of Gaussian kernel functions. In the process of constructing the genetic algorithm, a novel encoding method is proposed for optimizing the RBPNN structure. This encoding method can not only make the selected hidden centers sufficiently reflect the key distribution characteristic in the space of the training samples set and reduce the number of hidden centers as far as possible, but also simultaneously determine the optimum controlling parameters of the Gaussian kernel functions matching the selected hidden centers. Additionally, we also constructively propose a new fitness function so as to make the designed RBPNN as simple as possible in the network structure in the case of not losing the network performance. Finally, we take the two benchmark problems of discriminating the two-spiral problem and classifying the iris data as examples to test and evaluate this designed GA. The experimental results illustrate that our designed GA can significantly reduce the required number of hidden centers, compared with the recursive orthogonal least square algorithm (ROLSA) and the modified K-means algorithm (MKA). In particular, by means of statistical experiments it was proved that the RBPNN optimized by our designed GA still has a better generalization performance with respect to the ones by the ROLSA and the MKA, in spite of the network scale having been greatly reduced. Additionally, our experimental results also demonstrate that our designed GA is also suitable for optimizing the radial basis function neural networks (RBFNN).
Current status and key issues in image steganography: A survey. Steganography and steganalysis are the prominent research fields in information hiding paradigm. Steganography is the science of invisible communication while steganalysis is the detection of steganography. Steganography means “covered writing” that hides the existence of the message itself. Digital steganography provides potential for private and secure communication that has become the necessity of most of the applications in today’s world. Various multimedia carriers such as audio, text, video, image can act as cover media to carry secret information. In this paper, we have focused only on image steganography. This article provides a review of fundamental concepts, evaluation measures and security aspects of steganography system, various spatial and transform domain embedding schemes. In addition, image quality metrics that can be used for evaluation of stego images and cover selection measures that provide additional security to embedding scheme are also highlighted. Current research trends and directions to improve on existing methods are suggested.
Hybrid local and global descriptor enhanced with colour information. Feature extraction is one of the most important steps in computer vision tasks such as object recognition, image retrieval and image classification. It describes an image by a set of descriptors where the best one gives a high quality description and a low computation. In this study, the authors propose a novel descriptor called histogram of local and global features using speeded up robust featur...
Secure visual cryptography for medical image using modified cuckoo search. Optimal secure visual cryptography for brain MRI medical image is proposed in this paper. Initially, the brain MRI images are selected and then discrete wavelet transform is applied to the brain MRI image for partitioning the image into blocks. Then Gaussian based cuckoo search algorithm is utilized to select the optimal position for every block. Next the proposed technique creates the dual shares from the secret image. Then the secret shares are embedded in the corresponding positions of the blocks. After embedding, the extraction operation is carried out. Here visual cryptographic design is used for the purpose of image authentication and verification. The extracted secret image has dual shares, based on that the receiver views the input image. The authentication and verification of medical image are assisted with the help of target database. All the secret images are registered previously in the target database. The performance of the proposed method is estimated by Peak Signal to Noise Ratio (PSNR), Mean square error (MSE) and normalized correlation. The implementation is done by MATLAB platform.
Digital watermarking techniques for image security: a review Multimedia technology usage is increasing day by day, and providing authorized data while protecting secret information from unauthorized use is difficult and involves a complex process. By using watermarking techniques, only authorized users can use the data. Digital watermarking is a widely used technology for the protection of digital data. Digital watermarking deals with the embedding of secret data into actual information. Digital watermarking techniques are classified into three major categories, based on domain, type of document (text, image, music or video) and human perception. The performance of watermarked images is analysed using peak signal to noise ratio, mean square error and bit error rate. Watermarking of images has been researched profoundly for its specialized and modern achievability in all media applications such as copyright protection, medical reports (MRI scan and X-ray), annotation and privacy control. This paper reviews watermarking techniques and their merits and demerits.
A New Efficient Medical Image Cipher Based On Hybrid Chaotic Map And Dna Code In this paper, we propose a novel medical image encryption algorithm based on a hybrid model of deoxyribonucleic acid (DNA) masking, a Secure Hash Algorithm SHA-2 and a new hybrid chaotic map. Our study uses DNA sequences and operations and the chaotic hybrid map to strengthen the cryptosystem. The significant advantages of this approach consist in improving the information entropy which is the most important feature of randomness, resisting against various typical attacks and getting good experimental results. The theoretical analysis and experimental results show that the algorithm improves the encoding efficiency, enhances the security of the ciphertext, has a large key space and a high key sensitivity, and is able to resist against the statistical and exhaustive attacks.
On Multi-Access Edge Computing: A Survey of the Emerging 5G Network Edge Cloud Architecture and Orchestration. Multi-access edge computing (MEC) is an emerging ecosystem, which aims at converging telecommunication and IT services, providing a cloud computing platform at the edge of the radio access network. MEC offers storage and computational resources at the edge, reducing latency for mobile end users and utilizing more efficiently the mobile backhaul and core networks. This paper introduces a survey on ...
An effective implementation of the Lin–Kernighan traveling salesman heuristic This paper describes an implementation of the Lin–Kernighan heuristic, one of the most successful methods for generating optimal or near-optimal solutions for the symmetric traveling salesman problem (TSP). Computational tests show that the implementation is highly effective. It has found optimal solutions for all solved problem instances we have been able to obtain, including a 13,509-city problem (the largest non-trivial problem instance solved to optimality today).
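The simplest edge-exchange move that Lin-Kernighan generalises is 2-opt; the sketch below implements only that basic move to convey the flavour (the actual LK heuristic chains variable-depth sequences of exchanges with candidate lists and many other refinements):

```python
def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Keep reversing a segment of the tour whenever that shortens it."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour, improved = candidate, True
    return tour

# 4-city toy instance given as a symmetric distance matrix.
d = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
print(two_opt([0, 2, 1, 3], d))
```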
Exoskeletons for human power augmentation The first load-bearing and energetically autonomous exoskeleton, called the Berkeley Lower Extremity Exoskeleton (BLEEX) walks at the average speed of two miles per hour while carrying 75 pounds of load. The project, funded in 2000 by the Defense Advanced Research Project Agency (DARPA) tackled four fundamental technologies: the exoskeleton architectural design, a control algorithm, a body LAN to host the control algorithm, and an on-board power unit to power the actuators, sensors and the computers. This article gives an overview of the BLEEX project.
Assist-As-Needed Training Paradigms For Robotic Rehabilitation Of Spinal Cord Injuries This paper introduces a new "assist-as-needed" (AAN) training paradigm for rehabilitation of spinal cord injuries via robotic training devices. In the pilot study reported in this paper, nine female adult Swiss-Webster mice were divided into three groups, each experiencing a different robotic training control strategy: a fixed training trajectory (Fixed Group, A), an AAN training method without interlimb coordination (Band Group, B), and an AAN training method with bilateral hindlimb coordination (Window Group, C). Fourteen days after complete transection at the mid-thoracic level, the mice were robotically trained to step in the presence of an acutely administered serotonin agonist, quipazine, for a period of six weeks. The mice that received AAN training (Groups B and C) show higher levels of recovery than Group A mice, as measured by the number, consistency, and periodicity of steps realized during testing sessions. Group C displays a higher incidence of alternating stepping than Group B. These results indicate that this training approach may be more effective than fixed trajectory paradigms in promoting robust post-injury stepping behavior. Furthermore, the constraint of interlimb coordination appears to be an important contribution to successful training.
An ID-Based Linearly Homomorphic Signature Scheme and Its Application in Blockchain. Identity-based cryptosystems mean that public keys can be directly derived from user identifiers, such as telephone numbers, email addresses, and social insurance number, and so on. So they can simplify key management procedures of certificate-based public key infrastructures and can be used to realize authentication in blockchain. Linearly homomorphic signature schemes allow to perform linear computations on authenticated data. And the correctness of the computation can be publicly verified. Although a series of homomorphic signature schemes have been designed recently, there are few homomorphic signature schemes designed in identity-based cryptography. In this paper, we construct a new ID-based linear homomorphic signature scheme, which avoids the shortcomings of the use of public-key certificates. The scheme is proved secure against existential forgery on adaptively chosen message and ID attack under the random oracle model. The ID-based linearly homomorphic signature schemes can be applied in e-business and cloud computing. Finally, we show how to apply it to realize authentication in blockchain.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0.05
0
0
0
0
0
0
Experimental and theoretical analysis of human arm trajectories in 3D movements In this study, experiments of three-dimensional (3D) arm movements have been conducted, and the positions of marks provided on a shoulder, an elbow, and a hand are observed. In addition to the investigation of such trajectories in a 3D space, these trajectories are projected to a sagittal, a frontal, and a transverse planes. Then, specific features in these planes are investigated, and detailed properties of human arm trajectories have been uncovered. Next, kinematical features of a human arm during a movement have been analyzed on the basis of behavior of joint angles. Here, a kinematical arm model with joint redundancy is defined, and the kinematics of the model is reconstructed from the measured trajectories. Subsequently, the trajectories of all joint angles during a movement are obtained using the inverse kinematics, and their properties are analyzed in detail. The result shows that the angular trajectories are remarkably similar to those which are produced under the minimum angular jerk criterion.
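The minimum-jerk profile referenced by the minimum angular-jerk criterion is the standard fifth-order polynomial; a small sketch, assuming zero boundary velocities and accelerations:

```python
import numpy as np

def minimum_jerk(theta0, thetaf, duration, t):
    """Minimum-jerk trajectory between two joint angles: the unique 5th-order
    polynomial with zero velocity and acceleration at both ends."""
    tau = np.clip(t / duration, 0.0, 1.0)
    return theta0 + (thetaf - theta0) * (10 * tau ** 3 - 15 * tau ** 4 + 6 * tau ** 5)

t = np.linspace(0.0, 1.0, 5)
print(minimum_jerk(0.0, 90.0, 1.0, t))    # smooth, bell-shaped velocity profile
```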
Eye-vergence visual servoing enhancing Lyapunov-stable trackability Visual servoing methods for the hand–eye configuration are vulnerable to the hand's dynamical oscillation, since nonlinear dynamical effects of the whole manipulator stand against the stable tracking ability (trackability). Our proposal to solve this problem is that the controller for visual servoing of the hand and the one for eye-vergence should be designed independently, decoupled from each other, where the trackability is verified by Lyapunov analysis. Then the effectiveness of the decoupled hand and eye-vergence visual servoing method is evaluated through simulations incorporating the actual dynamics of a 7-DoF robot with an additional 3-DoF eye-vergence mechanism, using amplitude and phase frequency analysis.
Real-time Monocular Object SLAM. We present a real-time object-based SLAM system that leverages the largest object database to date. Our approach comprises two main components: (1) a monocular SLAM algorithm that exploits object rigidity constraints to improve the map and find its real scale, and (2) a novel object recognition algorithm based on bags of binary words, which provides live detections with a database of 500 3D objects. The two components work together and benefit each other: the SLAM algorithm accumulates information from the observations of the objects, anchors object features to especial map landmarks and sets constrains on the optimization. At the same time, objects partially or fully located within the map are used as a prior to guide the recognition algorithm, achieving higher recall. We evaluate our proposal on five real environments showing improvements on the accuracy of the map and efficiency with respect to other state-of-the-art techniques.
Service robot system with an informationally structured environment Daily life assistance is one of the most important applications for service robots. For comfortable assistance, service robots must recognize the surrounding conditions correctly, including human motion, the position of objects, and obstacles. However, since the everyday environment is complex and unpredictable, it is almost impossible to sense all of the necessary information using only a robot and sensors attached to it. In order to realize a service robot for daily life assistance, we have been developing an informationally structured environment using distributed sensors embedded in the environment. The present paper introduces a service robot system with an informationally structured environment, referred to as the ROS-TMS. This system enables the integration of various data from distributed sensors, as well as storage of these data in an on-line database and the planning of the service motion of a robot using real-time information about the surroundings. In addition, we discuss experiments such as detection and fetch-and-give tasks using the developed real environment and robot. Introduction of the architecture and components of the ROS-TMS. Integration of various data from distributed sensors for a service robot system. Object detection system (ODS) using an RGB-D camera. Motion planning for a fetch-and-give task using a wagon and a humanoid robot. Handing over an object to a human using the manipulability of both the robot and the human.
A transformable wheel-legged mobile robot: Design, analysis and experiment. This paper proposes a new type of transformable wheel-legged mobile robot that could be applied on both flat and rugged terrains. It integrates stability and maneuverability of wheeled robot and obstacle climbing capability of legged robot by means of a wheel-legged transformable mechanism. These two modes can be switched easily with two spokes touching terrain. In this paper, the motion analysis of the proposed robot under wheeled mode, legged mode and transformable mode are carried out after briefly introducing the concept and control system design. Then, the obstacle climbing strategies under wheeled and legged modes are obtained. Finally, a prototype of the proposed robot is designed and manufactured based upon the simulation analysis. And the experiment results validate the effectiveness of the proposed transformable wheel-legged mobile robot.
Teleoperation of robot arm with position measurement via angle-pixel characteristic and visual supporting function In this paper, a teleoperation system of a robot arm with position measurement function and visual supporting function is developed. The working robot arm is remotely controlled by the manual operation of the human operator and the autonomous control via visual servo. The visual servo employs the template matching technique. The position measurement is realized using a stereo camera based on the angle-pixel characteristic. The visual supporting function to give the human operator useful information about the teleoperation is also provided. The usefulness of the proposed teleoperation system is confirmed through experiments using an industrial articulated robot arm.
Intra-protocol repeatability and inter-protocol agreement for the analysis of scapulo-humeral coordination. Multi-center clinical trials incorporating shoulder kinematics are currently uncommon. The absence of repeatability and limits of agreement (LoA) studies between different centers employing different motion analysis protocols has led to a lack dataset compatibility. Therefore, the aim of this work was to determine the repeatability and LoA between two shoulder kinematic protocols. The first one uses a scapula tracker (ST), the International Society of Biomechanics anatomical frames and an optoelectronic measurement system, and the second uses a spine tracker, the INAIL Shoulder and Elbow Outpatient protocol (ISEO) and an inertial and magnetic measurement system. First within-protocol repeatability for each approach was assessed on a group of 23 healthy subjects and compared with the literature. Then, the between-protocol agreement was evaluated. The within-protocol repeatability was similar for the ST ([Formula: see text] = 2.35°, [Formula: see text] = 0.97°, SEM = 2.5°) and ISEO ([Formula: see text] = 2.24°, [Formula: see text] = 0.97°, SEM = 2.3°) protocols and comparable with data from published literature. The between-protocol agreement analysis showed comparable scapula medio-lateral rotation measurements for up to 120° of flexion-extension and up to 100° of scapula plane ab-adduction. Scapula protraction-retraction measurements were in agreement for a smaller range of humeral elevation. The results of this study suggest comparable repeatability for the ST and ISEO protocols and between-protocol agreement for two scapula rotations. Different thresholds for repeatability and LoA may be adapted to suit different clinical hypotheses.
A fast and elitist multiobjective genetic algorithm: NSGA-II Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their O(MN^3) computational complexity (where M is the number of objectives and N is the population size); (2) their non-elitism approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all of the above three difficulties. Specifically, a fast non-dominated sorting approach with O(MN^2) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best N solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm - two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed.
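The O(MN^2) sorting step can be sketched directly from the description above; this is a plain Python rendering for minimisation problems, without the crowding-distance and selection machinery:

```python
def fast_non_dominated_sort(objectives):
    """NSGA-II style non-dominated sorting (minimisation).  objectives is a
    list of objective tuples; returns a list of fronts (lists of indices).
    Each pair is compared once, giving the O(M N^2) complexity cited above."""
    n = len(objectives)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    domination_count = [0] * n              # how many solutions dominate i
    fronts = [[]]

    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objectives[i], objectives[j]):
                dominated_by[i].append(j)
            elif dominates(objectives[j], objectives[i]):
                domination_count[i] += 1
        if domination_count[i] == 0:
            fronts[0].append(i)

    k = 0
    while fronts[k]:
        next_front = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                domination_count[j] -= 1
                if domination_count[j] == 0:
                    next_front.append(j)
        fronts.append(next_front)
        k += 1
    return fronts[:-1]

print(fast_non_dominated_sort([(1, 5), (2, 3), (3, 1), (4, 4), (2, 2)]))
```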
Multi-Armed Bandit-Based Client Scheduling for Federated Learning By exploiting the computing power and local data of distributed clients, federated learning (FL) features ubiquitous properties such as reduction of communication overhead and preserving data privacy. In each communication round of FL, the clients update local models based on their own data and upload their local updates via wireless channels. However, latency caused by hundreds to thousands of communication rounds remains a bottleneck in FL. To minimize the training latency, this work provides a multi-armed bandit-based framework for online client scheduling (CS) in FL without knowing wireless channel state information and statistical characteristics of clients. Firstly, we propose a CS algorithm based on the upper confidence bound policy (CS-UCB) for ideal scenarios where local datasets of clients are independent and identically distributed (i.i.d.) and balanced. An upper bound of the expected performance regret of the proposed CS-UCB algorithm is provided, which indicates that the regret grows logarithmically over communication rounds. Then, to address non-ideal scenarios with non-i.i.d. and unbalanced properties of local datasets and varying availability of clients, we further propose a CS algorithm based on the UCB policy and virtual queue technique (CS-UCB-Q). An upper bound is also derived, which shows that the expected performance regret of the proposed CS-UCB-Q algorithm can have a sub-linear growth over communication rounds under certain conditions. Besides, the convergence performance of FL training is also analyzed. Finally, simulation results validate the efficiency of the proposed algorithms.
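A stripped-down rendering of UCB-style client scheduling: each round the server picks the clients with the largest upper confidence bounds and refines its estimates from the observed feedback. The Bernoulli reward model, exploration constant, and absence of the fairness/virtual-queue term of CS-UCB-Q are simplifying assumptions:

```python
import math
import random

def cs_ucb(n_clients, n_rounds, clients_per_round, true_rates):
    """Pick clients with the highest upper confidence bound each round,
    then update the empirical means from the observed (simulated) rewards."""
    counts = [0] * n_clients
    means = [0.0] * n_clients
    for t in range(1, n_rounds + 1):
        ucb = [means[i] + math.sqrt(2 * math.log(t) / counts[i]) if counts[i] > 0
               else float("inf") for i in range(n_clients)]
        chosen = sorted(range(n_clients), key=lambda i: ucb[i], reverse=True)[:clients_per_round]
        for i in chosen:
            reward = 1.0 if random.random() < true_rates[i] else 0.0
            counts[i] += 1
            means[i] += (reward - means[i]) / counts[i]
    return means

print(cs_ucb(n_clients=5, n_rounds=200, clients_per_round=2,
             true_rates=[0.9, 0.7, 0.5, 0.3, 0.1]))
```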
Experiment-driven Characterization of Full-Duplex Wireless Systems We present an experiment-based characterization of passive suppression and active self-interference cancellation mechanisms in full-duplex wireless communication systems. In particular, we consider passive suppression due to antenna separation at the same node, and active cancellation in analog and/or digital domain. First, we show that the average amount of cancellation increases for active cance...
A Model for Understanding How Virtual Reality Aids Complex Conceptual Learning Designers and evaluators of immersive virtual reality systems have many ideas concerning how virtual reality can facilitate learning. However, we have little information concerning which of virtual reality's features provide the most leverage for enhancing understanding or how to customize those affordances for different learning environments. In part, this reflects the truly complex nature of learning. Features of a learning environment do not act in isolation; other factors such as the concepts or skills to be learned, individual characteristics, the learning experience, and the interaction experience all play a role in shaping the learning process and its outcomes. Through Project Science Space, we have been trying to identify, use, and evaluate immersive virtual reality's affordances as a means to facilitate the mastery of complex, abstract concepts. In doing so, we are beginning to understand the interplay between virtual reality's features and other important factors in shaping the learning process and learning outcomes for this type of material. In this paper, we present a general model that describes how we think these factors work together and discuss some of the lessons we are learning about virtual reality's affordances in the context of this model for complex conceptual learning.
Completely Pinpointing the Missing RFID Tags in a Time-Efficient Way Radio Frequency Identification (RFID) technology has been widely used in inventory management in many scenarios, e.g., warehouses, retail stores, hospitals, etc. This paper investigates a challenging problem of complete identification of missing tags in large-scale RFID systems. Although this problem has attracted extensive attention from academy and industry, the existing work can hardly satisfy the stringent real-time requirements. In this paper, a Slot Filter-based Missing Tag Identification (SFMTI) protocol is proposed to reconcile some expected collision slots into singleton slots and filter out the expected empty slots as well as the unreconcilable collision slots, thereby achieving the improved time-efficiency. The theoretical analysis is conducted to minimize the execution time of the proposed SFMTI. We then propose a cost-effective method to extend SFMTI to the multi-reader scenarios. The extensive simulation experiments and performance results demonstrate that the proposed SFMTI protocol outperforms the most promising Iterative ID-free Protocol (IIP) by reducing nearly 45% of the required execution time, and is just within a factor of 1.18 from the lower bound of the minimum execution time.
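The key precomputation behind such protocols can be sketched as follows: since the reader knows every expected tag ID, it can predict each tag's reply slot and label every slot as expected empty, singleton, or collision before the frame starts; singleton slots that stay silent then pinpoint missing tags. The MD5-based slot hash and frame size are illustrative, not the protocol's actual parameters:

```python
import hashlib
from collections import Counter

def expected_slots(tag_ids, frame_size, seed):
    """Classify every slot of the coming frame from the known tag inventory."""
    def slot_of(tag):
        digest = hashlib.md5(f"{tag}-{seed}".encode()).hexdigest()
        return int(digest, 16) % frame_size

    slots = Counter(slot_of(tag) for tag in tag_ids)
    expected = {s: ("singleton" if c == 1 else "collision") for s, c in slots.items()}
    return {s: expected.get(s, "empty") for s in range(frame_size)}

print(expected_slots(["TAG%03d" % i for i in range(10)], frame_size=16, seed=7))
```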
Quaternion polar harmonic Fourier moments for color images. • Quaternion polar harmonic Fourier moments (QPHFM) are proposed. • Complex Chebyshev-Fourier moments (CHFM) are extended to quaternion QCHFM. • Comparison experiments between QPHFM and QZM, QPZM, QOFMM, QCHFM and QRHFM are conducted. • QPHFM performs superbly in image reconstruction and invariant object recognition. • The importance of the phase information of QPHFM in image reconstruction is discussed.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.075654
0.066667
0.066667
0.066667
0.066667
0.025185
0.002667
0
0
0
0
0
0
0
On Service Resilience in Cloud-Native 5G Mobile Systems. To cope with the tremendous growth in mobile data traffic on one hand, and the modest average revenue per user on the other hand, mobile operators have been exploring network virtualization and cloud computing technologies to build cost-efficient and elastic mobile networks and to have them offered as a cloud service. In such cloud-based mobile networks, ensuring service resilience is an important challenge to tackle. Indeed, high availability and service reliability are important requirements of carrier grade, but not necessarily intrinsic features of cloud computing. Building a system that requires the five nines reliability on a platform that may not always grant it is, therefore, a hurdle. Effectively, in carrier cloud, service resilience can be heavily impacted by a failure of any network function (NF) running on a virtual machine (VM). In this paper, we introduce a framework, along with efficient and proactive restoration mechanisms, to ensure service resilience in carrier cloud. As restoration of a NF failure impacts a potential number of users, adequate network overload control mechanisms are also proposed. A mathematical model is developed to evaluate the performance of the proposed mechanisms. The obtained results are encouraging and demonstrate that the proposed mechanisms efficiently achieve their design goals.
A MTC traffic generation and QCI priority-first scheduling algorithm over LTE As Machine-to-Machine (M2M) communication continues to grow rapidly, a full study of overload control approaches to manage the data and signaling traffic from massive MTC devices alongside H2H traffic is required. In this paper, a new M2M resource-scheduling algorithm for Long Term Evolution (LTE) is proposed. It provides a Quality of Service (QoS) guarantee to Guaranteed Bit Rate (GBR) services: we set priorities for the critical M2M services to guarantee the transport of GBR services, which have high QoS needs. Additionally, we simulate and compare different methods and offer further observations on the solution design.
Automating Diagnosis of Cellular Radio Access Network Problems. In an increasingly mobile connected world, our user experience of mobile applications more and more depends on the performance of cellular radio access networks (RAN). To achieve high quality of experience for the user, it is imperative that operators identify and diagnose performance problems quickly. In this paper, we describe our experience in understanding the challenges in automating the diagnosis of RAN performance problems. Working with a major cellular network operator on a part of their RAN that services more than 2 million users, we demonstrate that fine-grained modeling and analysis could be the key towards this goal. We describe our methodology in analyzing RAN problems, and highlight a few of our findings, some previously unknown. We also discuss lessons from our attempt at building automated diagnosis solutions.
Single and Multi-Agent Deep Reinforcement Learning for AI-Enabled Wireless Networks: A Tutorial Deep Reinforcement Learning (DRL) has recently witnessed significant advances that have led to multiple successes in solving sequential decision-making problems in various domains, particularly in wireless communications. The next generation of wireless networks is expected to provide scalable, low-latency, ultra-reliable services empowered by the application of data-driven Artificial Intelligence...
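As background for the DRL methods such a tutorial surveys, the sketch below shows the one-step tabular Q-learning update that deep Q-networks approximate with a neural network; the toy environment, reward rule, and parameter values are illustrative assumptions, not taken from the tutorial.

```python
import random
from collections import defaultdict

def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One-step Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# toy usage: two states, two actions, a hand-made reward rule
Q = defaultdict(float)
actions = [0, 1]
for _ in range(100):
    s = random.choice(["s0", "s1"])
    a = random.choice(actions)
    r = 1.0 if (s == "s0" and a == 1) else 0.0   # arbitrary toy reward
    s_next = "s1" if a == 1 else "s0"
    q_learning_update(Q, s, a, r, s_next, actions)
```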
Multi-agent deep reinforcement learning: a survey The advances in reinforcement learning have recorded sublime success in various domains. Although the multi-agent domain has been overshadowed by its single-agent counterpart during this progress, multi-agent reinforcement learning gains rapid traction, and the latest accomplishments address problems with real-world complexity. This article provides an overview of the current developments in the field of multi-agent deep reinforcement learning. We focus primarily on literature from recent years that combines deep reinforcement learning methods with a multi-agent scenario. To survey the works that constitute the contemporary landscape, the main contents are divided into three parts. First, we analyze the structure of training schemes that are applied to train multiple agents. Second, we consider the emergent patterns of agent behavior in cooperative, competitive and mixed scenarios. Third, we systematically enumerate challenges that exclusively arise in the multi-agent domain and review methods that are leveraged to cope with these challenges. To conclude this survey, we discuss advances, identify trends, and outline possible directions for future work in this research area.
Network Slicing and Softwarization: A Survey on Principles, Enabling Technologies, and Solutions. Network slicing has been identified as the backbone of the rapidly evolving 5G technology. However, as its consolidation and standardization progress, there is no literature that comprehensively discusses its key principles, enablers, and research challenges. This paper elaborates network slicing from an end-to-end perspective detailing its historical heritage, principal concepts, enabling technol...
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Adam: A Method for Stochastic Optimization. We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.
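The update rule the abstract describes can be written down directly; the sketch below shows one Adam step with bias-corrected first and second moment estimates, using the commonly quoted default hyper-parameters. The quadratic objective used as a smoke test is an illustrative assumption, not from the paper.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moment estimates, bias correction, then the parameter step."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    m_hat = m / (1 - beta1**t)          # bias-corrected first moment
    v_hat = v / (1 - beta2**t)          # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# minimize f(theta) = ||theta||^2 as a smoke test
theta = np.array([1.0, -2.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 2001):
    grad = 2 * theta                    # gradient of the quadratic
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)                            # approaches [0, 0]
```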
Untangling Blockchain: A Data Processing View of Blockchain Systems. Blockchain technologies are gaining massive momentum in the last few years. Blockchains are distributed ledgers that enable parties who do not fully trust each other to maintain a set of global states. The parties agree on the existence, values, and histories of the states. As the technology landscape is expanding rapidly, it is both important and challenging to have a firm grasp of what the core ...
Multivariate Short-Term Traffic Flow Forecasting Using Time-Series Analysis Existing time-series models that are used for short-term traffic condition forecasting are mostly univariate in nature. Generally, the extension of existing univariate time-series models to a multivariate regime involves huge computational complexities. A different class of time-series models called structural time-series model (STM) (in its multivariate form) has been introduced in this paper to develop a parsimonious and computationally simple multivariate short-term traffic condition forecasting algorithm. The different components of a time-series data set such as trend, seasonal, cyclical, and calendar variations can separately be modeled in STM methodology. A case study at the Dublin, Ireland, city center with serious traffic congestion is performed to illustrate the forecasting strategy. The results indicate that the proposed forecasting algorithm is an effective approach in predicting real-time traffic flow at multiple junctions within an urban transport network.
Dynamic transfer among alternative controllers and its relation to antiwindup controller design Advanced control strategies and modern consulting provide new challenges for the classical problem of bumpless transfer. It can, for example, be necessary to transfer between an only approximately known existing analog controller and a new digital or adaptive controller without accessing any states. Transfer ought to be bidirectional and not presuppose steady state, so that an immediate back-transfer is possible if the new controller should drive the plant unstable. We present a scheme that meets these requirements. By casting the problem of bidirectional transfer into an associated tracking control problem, systematic analysis and design procedures from control theory can be applied. The associated control problem also has a correspondence to the design of antiwindup controllers. The paper includes laboratory and industrial applications.
Adaptive dynamic programming and optimal control of nonlinear nonaffine systems. In this paper, a novel optimal control design scheme is proposed for continuous-time nonaffine nonlinear dynamic systems with unknown dynamics by adaptive dynamic programming (ADP). The proposed methodology iteratively updates the control policy online by using the state and input information without identifying the system dynamics. An ADP algorithm is developed, and can be applied to a general class of nonlinear control design problems. The convergence analysis for the designed control scheme is presented, along with rigorous stability analysis for the closed-loop system. The effectiveness of this new algorithm is illustrated by two simulation examples.
Adaptive fuzzy tracking control for switched uncertain strict-feedback nonlinear systems. •Adaptive tracking control for switched strict-feedback nonlinear systems is proposed.•The generalized fuzzy hyperbolic model is used to approximate nonlinear functions.•The designed controller has fewer design parameters comparing with existing methods.
Learning Feature Recovery Transformer for Occluded Person Re-Identification One major issue that challenges person re-identification (Re-ID) is the ubiquitous occlusion over the captured persons. There are two main challenges for the occluded person Re-ID problem, i.e., the interference of noise during feature matching and the loss of pedestrian information brought by the occlusions. In this paper, we propose a new approach called Feature Recovery Transformer (FRT) to address the two challenges simultaneously, which mainly consists of visibility graph matching and feature recovery transformer. To reduce the interference of the noise during feature matching, we mainly focus on visible regions that appear in both images and develop a visibility graph to calculate the similarity. In terms of the second challenge, based on the developed graph similarity, for each query image, we propose a recovery transformer that exploits the feature sets of its k-nearest neighbors in the gallery to recover the complete features. Extensive experiments across different person Re-ID datasets, including occluded, partial and holistic datasets, demonstrate the effectiveness of FRT. Specifically, FRT significantly outperforms state-of-the-art results by at least 6.2% Rank-1 accuracy and 7.2% mAP scores on the challenging Occluded-Duke dataset.
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
An indoor localization solution using Bluetooth RSSI and multiple sensors on a smartphone. In this paper, we propose an indoor positioning system using a Bluetooth receiver, an accelerometer, a magnetic field sensor, and a barometer on a smartphone. The Bluetooth receiver is used to estimate distances from beacons. The accelerometer and magnetic field sensor are used to trace the movement of moving people in the given space. The horizontal location of the person is determined by received signal strength indications (RSSIs) and the traced movement. The barometer is used to measure the vertical position where a person is located. By combining RSSIs, the traced movement, and the vertical position, the proposed system estimates the indoor position of moving people. In experiments, the proposed approach showed excellent performance in localization with an overall error of 4.8%.
Dynamic Wi-Fi fingerprinting indoor positioning system In this paper, a technique is proposed to improve the accuracy of indoor positioning systems based on Wi-Fi radio-frequency signals by using dynamic access points and fingerprints (DAFs). Moreover, an indoor position system that relies solely in DAFs is proposed. The walking pattern of indoor users is classified as dynamic or static for indoor positioning purposes. We demonstrate that the performance of a conventional indoor positioning system that uses static fingerprints can be enhanced by considering dynamic fingerprints and access points. The accuracy of the system is evaluated using four positioning algorithms and one access point selection strategy. The system facilitates the location of people where there is no wireless local area network (WLAN) infrastructure deployed or where the WLAN infrastructure has been drastically affected, for example by natural disasters. The system can be used for search and rescue operations and for expanding the coverage of an indoor positioning system.
Indoor Fingerprint Positioning Based on Wi-Fi: An Overview. The widely applied location-based services require a high standard for positioning technology. Currently, outdoor positioning has been a great success; however, indoor positioning technologies are in the early stages of development. Therefore, this paper provides an overview of indoor fingerprint positioning based on Wi-Fi. First, some indoor positioning technologies, especially the Wi-Fi fingerprint indoor positioning technology, are introduced and discussed. Second, some evaluation metrics and influence factors of indoor fingerprint positioning technologies based on Wi-Fi are introduced. Third, methods and algorithms of fingerprint indoor positioning technologies are analyzed, classified, and discussed. Fourth, some widely used assistive positioning technologies are described. Finally, conclusions are drawn and future possible research interests are discussed. It is hoped that this research will serve as a stepping stone for those interested in advancing indoor positioning.
Indoor Positioning in Large Shopping Mall with Context based Map Matching This paper focuses on large indoor environments and proposes an accurate indoor positioning system with context-based map matching. The proposed system adopts the widely used smartphone as the positioning platform of the customer, and mainly depends on the motion sensors in a smartphone. The proposed system first provides the initial trajectory of the customer with Pedestrian Dead Reckoning (PDR) and recognizes various human activities which are meaningful for localization. In this paper, we divided the activities into two types: (1) transition activity between floors, such as taking the escalator or elevator; (2) moving on the floor, such as walking outside a store, shopping in a store and turning. A hierarchical Long Short-Term Memory (LSTM) network-based activity model is developed to recognize those activities. Secondly, those location-aware activities, PDR trajectories and the 2.5D indoor map are integrated in a Hidden Markov Model (HMM) to conduct accurate indoor positioning. Because the 2.5D map includes the position information of indoor facilities such as escalators and each store, this position information is used as assistant information for conducting context-based map matching. The proposed method has a mean positioning error of 2.21 meters and can achieve "shop level" performance.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
Probabilistic encryption A new probabilistic model of data encryption is introduced. For this model, under suitable complexity assumptions, it is proved that extracting any information about the cleartext from the cyphertext is hard on the average for an adversary with polynomially bounded computational resources. The proof holds for any message space with any probability distribution. The first implementation of this model is presented. The security of this implementation is proved under the intractability assumption of deciding Quadratic Residuosity modulo composite numbers whose factorization is unknown.
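A toy sketch of the quadratic-residuosity-based implementation the abstract refers to (the Goldwasser–Micali construction): each bit is encrypted with fresh randomness, as a quadratic residue modulo N = pq for bit 0, or as a non-residue with Jacobi symbol +1 for bit 1. The primes below are deliberately tiny and insecure; all parameter choices are illustrative assumptions, not the paper's recommended settings.

```python
from random import randrange

p, q = 499, 547                  # toy primes; real use requires large secret primes
N = p * q

def is_qr(a, prime):
    """Euler's criterion: a is a quadratic residue mod an odd prime iff a^((prime-1)/2) == 1."""
    return pow(a, (prime - 1) // 2, prime) == 1

# public element y: a quadratic non-residue modulo both p and q (so its Jacobi symbol mod N is +1)
y = next(a for a in range(2, N) if not is_qr(a, p) and not is_qr(a, q))

def encrypt_bit(b: int) -> int:
    x = randrange(1, N)
    while x % p == 0 or x % q == 0:          # keep x coprime to N
        x = randrange(1, N)
    return (pow(y, b, N) * pow(x, 2, N)) % N  # fresh randomness for every bit

def decrypt_bit(c: int) -> int:
    return 0 if is_qr(c % p, p) else 1        # residues decrypt to 0, non-residues to 1

bits = [1, 0, 1, 1, 0]
assert [decrypt_bit(encrypt_bit(b)) for b in bits] == bits
```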
On the ratio of optimal integral and fractional covers It is shown that the ratio of optimal integral and fractional covers of a hypergraph does not exceed 1 + log d , where d is the maximum degree. This theorem may replace probabilistic methods in certain circumstances. Several applications are shown.
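The bound stated in the abstract can be written compactly. Assuming, as notation, that τ(H) and τ*(H) denote the optimal integral and fractional cover sizes of a hypergraph H with maximum degree d, it reads:

```latex
% notation tau(H), tau^*(H) is an editorial assumption for the quantities named in the abstract
\[
  \tau(H) \;\le\; \bigl(1 + \log d\bigr)\,\tau^{*}(H).
\]
```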
Optimization Of Radio And Computational Resources For Energy Efficiency In Latency-Constrained Application Offloading Providing femto access points (FAPs) with computational capabilities will allow (either total or partial) offloading of highly demanding applications from smartphones to the so-called femto-cloud. Such offloading promises to be beneficial in terms of battery savings at the mobile terminal (MT) and/or in latency reduction in the execution of applications. However, for this promise to become a reality, the energy and/or the time required for the communication process must be compensated by the energy and/or the time savings that result from the remote computation at the FAPs. For this problem, we provide in this paper a framework for the joint optimization of the radio and computational resource usage exploiting the tradeoff between energy consumption and latency. Multiple antennas are assumed to be available at the MT and the serving FAP. As a result of the optimization, the optimal communication strategy (e.g., transmission power, rate, and precoder) is obtained, as well as the optimal distribution of the computational load between the handset and the serving FAP. This paper also establishes the conditions under which total or no offloading is optimal, determines which is the minimum affordable latency in the execution of the application, and analyzes, as a particular case, the minimization of the total consumed energy without latency constraints.
Integrating structured biological data by Kernel Maximum Mean Discrepancy Motivation: Many problems in data integration in bioinformatics can be posed as one common question: Are two sets of observations generated by the same distribution? We propose a kernel-based statistical test for this problem, based on the fact that two distributions are different if and only if there exists at least one function having different expectation on the two distributions. Consequently we use the maximum discrepancy between function means as the basis of a test statistic. The Maximum Mean Discrepancy (MMD) can take advantage of the kernel trick, which allows us to apply it not only to vectors, but strings, sequences, graphs, and other common structured data types arising in molecular biology. Results: We study the practical feasibility of an MMD-based test on three central data integration tasks: Testing cross-platform comparability of microarray data, cancer diagnosis, and data-content based schema matching for two different protein function classification schemas. In all of these experiments, including high-dimensional ones, MMD is very accurate in finding samples that were generated from the same distribution, and outperforms its best competitors. Conclusions: We have defined a novel statistical test of whether two samples are from the same distribution, compatible with both multivariate and structured data, that is fast, easy to implement, and works well, as confirmed by our experiments. Availability: Contact: kb@dbs.ifi.lmu.de
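A minimal sketch of the quantity the test is built on: a biased empirical estimate of MMD² under a Gaussian RBF kernel. The kernel bandwidth, sample sizes, and the omission of the permutation-based significance threshold are illustrative simplifications, not the paper's full testing procedure.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def mmd2_biased(X, Y, gamma=1.0):
    """Biased MMD^2 estimate: mean k(X,X) + mean k(Y,Y) - 2 * mean k(X,Y)."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean())

rng = np.random.default_rng(0)
same = mmd2_biased(rng.normal(0, 1, (200, 5)), rng.normal(0, 1, (200, 5)))
diff = mmd2_biased(rng.normal(0, 1, (200, 5)), rng.normal(1, 1, (200, 5)))
print(same, diff)   # the shifted pair of samples yields a clearly larger MMD^2
```

In practice the decision threshold would be calibrated, e.g., by permuting the pooled samples; structured data (strings, graphs) would simply swap in an appropriate kernel.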
Noninterference for a Practical DIFC-Based Operating System The Flume system is an implementation of decentralized information flow control (DIFC) at the operating system level. Prior work has shown Flume can be implemented as a practical extension to the Linux operating system, allowing real Web applications to achieve useful security guarantees. However, the question remains if the Flume system is actually secure. This paper compares Flume with other recent DIFC systems like Asbestos, arguing that the latter is inherently susceptible to certain wide-bandwidth covert channels, and proving their absence in Flume by means of a noninterference proof in the communicating sequential processes formalism.
Efficient and reliable low-power backscatter networks There is a long-standing vision of embedding backscatter nodes like RFIDs into everyday objects to build ultra-low power ubiquitous networks. A major problem that has challenged this vision is that backscatter communication is neither reliable nor efficient. Backscatter nodes cannot sense each other, and hence tend to suffer from colliding transmissions. Further, they are ineffective at adapting the bit rate to channel conditions, and thus miss opportunities to increase throughput, or transmit above capacity causing errors. This paper introduces a new approach to backscatter communication. The key idea is to treat all nodes as if they were a single virtual sender. One can then view collisions as a code across the bits transmitted by the nodes. By ensuring only a few nodes collide at any time, we make collisions act as a sparse code and decode them using a new customized compressive sensing algorithm. Further, we can make these collisions act as a rateless code to automatically adapt the bit rate to channel quality --i.e., nodes can keep colliding until the base station has collected enough collisions to decode. Results from a network of backscatter nodes communicating with a USRP backscatter base station demonstrate that the new design produces a 3.5× throughput gain, and due to its rateless code, reduces message loss rate in challenging scenarios from 50% to zero.
Internet of Things for Smart Cities The Internet of Things (IoT) shall be able to incorporate transparently and seamlessly a large number of different and heterogeneous end systems, while providing open access to selected subsets of data for the development of a plethora of digital services. Building a general architecture for the IoT is hence a very complex task, mainly because of the extremely large variety of devices, link layer technologies, and services that may be involved in such a system. In this paper, we focus specifically to an urban IoT system that, while still being quite a broad category, are characterized by their specific application domain. Urban IoTs, in fact, are designed to support the Smart City vision, which aims at exploiting the most advanced communication technologies to support added-value services for the administration of the city and for the citizens. This paper hence provides a comprehensive survey of the enabling technologies, protocols, and architecture for an urban IoT. Furthermore, the paper will present and discuss the technical solutions and best-practice guidelines adopted in the Padova Smart City project, a proof-of-concept deployment of an IoT island in the city of Padova, Italy, performed in collaboration with the city municipality.
Quaternion polar harmonic Fourier moments for color images. •Quaternion polar harmonic Fourier moments (QPHFM) are proposed.•Complex Chebyshev-Fourier moments (CHFM) are extended to quaternion QCHFM.•Comparison experiments between QPHFM and QZM, QPZM, QOFMM, QCHFM and QRHFM are conducted.•QPHFM performs superbly in image reconstruction and invariant object recognition.•The importance of phase information of QPHFM in image reconstruction is discussed.
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education is highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets, to compensate for robot’s limitations, however, this study applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. Robot’s behavioral style (social or neutral) hardly differed overall, however, seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
1.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
0
0
Accurate computation of Zernike moments in polar coordinates. An algorithm for high-precision numerical computation of Zernike moments is presented. The algorithm, based on the introduced polar pixel tiling scheme, does not exhibit the geometric error and numerical integration error which are inherent in conventional methods based on Cartesian coordinates. This yields a dramatic improvement of the Zernike moments accuracy in terms of their reconstruction and invariance properties. The introduced image tiling requires an interpolation algorithm which turns out to be of the second order importance compared to the discretization error. Various comparisons are made between the accuracy of the proposed method and that of commonly used techniques. The results reveal the great advantage of our approach.
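For reference, the radial polynomial that Zernike moments are built from can be evaluated directly from its factorial closed form; the sketch below does only that and does not reproduce the paper's polar pixel tiling scheme. The checks at the end use standard identities for low-order polynomials.

```python
from math import factorial

def zernike_radial(n: int, m: int, rho: float) -> float:
    """Radial polynomial R_n^m(rho) used by Zernike moments (requires |m| <= n, n - |m| even)."""
    m = abs(m)
    if (n - m) % 2 != 0:
        return 0.0
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )

# sanity checks against closed forms: R_2^0(rho) = 2 rho^2 - 1, R_4^2(rho) = 4 rho^4 - 3 rho^2
assert abs(zernike_radial(2, 0, 0.5) - (2 * 0.25 - 1)) < 1e-12
assert abs(zernike_radial(4, 2, 0.5) - (4 * 0.5**4 - 3 * 0.25)) < 1e-12
```

The moment itself then integrates the image against R_n^m(ρ)e^{-jmθ} over the unit disk; how that integral is discretized (Cartesian grid vs. the paper's polar tiling) is exactly where the accuracy gains discussed above come from.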
Combined invariants to similarity transformation and to blur using orthogonal Zernike moments. The derivation of moment invariants has been extensively investigated in the past decades. In this paper, we construct a set of invariants derived from Zernike moments which is simultaneously invariant to similarity transformation and to convolution with circularly symmetric point spread function (PSF). Two main contributions are provided: the theoretical framework for deriving the Zernike moments of a blurred image and the way to construct the combined geometric-blur invariants. The performance of the proposed descriptors is evaluated with various PSFs and similarity transformations. The comparison of the proposed method with the existing ones is also provided in terms of pattern recognition accuracy, template matching and robustness to noise. Experimental results show that the proposed descriptors perform better overall.
Fast computation of Jacobi-Fourier moments for invariant image recognition The Jacobi-Fourier moments (JFMs) provide a wide class of orthogonal rotation invariant moments (ORIMs) which are useful for many image processing, pattern recognition and computer vision applications. They, however, suffer from high time complexity and numerical instability at high orders of moment. In this paper, a fast method based on the recursive computation of radial kernel function of JFMs is proposed which not only reduces time complexity but also improves their numerical stability. •Fast recursive method for the computation of Jacobi-Fourier moments is proposed.•The proposed method not only reduces time complexity but also improves numerical stability of moments.•Better image reconstruction is achieved with lower reconstruction error.•Proposed method is useful for many image processing, pattern recognition and computer vision applications.
Radial shifted Legendre moments for image analysis and invariant image recognition. The rotation, scaling and translation invariant property of image moments has a high significance in image recognition. Legendre moments as a classical orthogonal moment have been widely used in image analysis and recognition. Since Legendre moments are defined in Cartesian coordinate, the rotation invariance is difficult to achieve. In this paper, we first derive two types of transformed Legendre polynomial: substituted and weighted radial shifted Legendre polynomials. Based on these two types of polynomials, two radial orthogonal moments, named substituted radial shifted Legendre moments and weighted radial shifted Legendre moments (SRSLMs and WRSLMs) are proposed. The proposed moments are orthogonal in polar coordinate domain and can be thought as generalized and orthogonalized complex moments. They have better image reconstruction performance, lower information redundancy and higher noise robustness than the existing radial orthogonal moments. At last, a mathematical framework for obtaining the rotation, scaling and translation invariants of these two types of radial shifted Legendre moments is provided. Theoretical and experimental results show the superiority of the proposed methods in terms of image reconstruction capability and invariant recognition accuracy under both noisy and noise-free conditions.
Robust circularly orthogonal moment based on Chebyshev rational function. The circularly orthogonal moments have been widely used in many computer vision applications. Unfortunately, they suffer from two errors namely numerical integration error and geometric error, which heavily degrade their reconstruction accuracy and pattern recognition performance. This paper describes a new kind of circularly orthogonal moments based on Chebyshev rational function. Unlike the conventional circularly orthogonal moments which have been defined in a unit disk, the proposed moment is defined in whole polar coordinates domain. In addition, given an order n, its radial projection function is smoother and oscillates at lower frequency compared with the existing circularly orthogonal moments, and so it is free of the geometric error and highly robust to the numerical integration error. Experimental results indicate that the proposed moments perform better in image reconstruction and pattern classification, and yield higher tolerance to image noise and smooth distortion in comparison with the existing circularly orthogonal moments.
The modified generic polar harmonic transforms for image representation This paper introduces four classes of orthogonal transforms by modifying the generic polar harmonic transforms. Then, the rotation invariant feature of the proposed transforms is investigated. Compared with the traditional generic polar harmonic transforms, the proposed transforms have the ability to describe the central region of the image with a parameter controlling the area of the region. Experimental results verified the image representation capability of the proposed transforms and showed better performance of the proposed transform in terms of rotation invariant pattern recognition.
Robust zero-watermarking algorithm based on polar complex exponential transform and logistic mapping. This paper introduces a new zero-watermarking algorithm based on polar complex exponential transform (PCET) and logistic mapping. This algorithm takes advantage of the geometric invariance of PCET to improve the robustness of the algorithm against geometric attacks, and the logistic mapping’s sensitivity to initial values to improve the security of the algorithm. First, the algorithm computes the PCET of the original grayscale image. Then it randomly selects PCET coefficients based on logistic mapping, and computes their magnitudes to obtain a binary feature image. Finally, it performs an exclusive-or operation between the binary feature image and the scrambled logo image to obtain the zero-watermark image. At the stage of copyright verification, the image copyright can be determined by performing the exclusive-or operation between the feature image and the verification image and comparing the resulting image to the original logo image. Experimental results show that this algorithm has excellent robustness against geometric attacks and common image processing attacks and better performance compared to other zero-watermarking algorithms.
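A much-simplified sketch of the XOR-based zero-watermarking step described above: a logistic-map keystream (x_{k+1} = r·x_k·(1−x_k)) scrambles the logo, which is then XORed with a binary feature image. The feature image here is random rather than derived from PCET magnitudes, the coefficient-selection step is reduced to a plain keystream XOR, and all parameter values are illustrative assumptions.

```python
import numpy as np

def logistic_sequence(x0: float, r: float, length: int, burn_in: int = 100) -> np.ndarray:
    """Iterate the logistic map x_{k+1} = r*x_k*(1-x_k); a burn-in decorrelates from the seed x0."""
    x, out = x0, np.empty(length)
    for k in range(burn_in + length):
        x = r * x * (1 - x)
        if k >= burn_in:
            out[k - burn_in] = x
    return out

def zero_watermark(feature_bits, logo_bits, x0=0.7321, r=3.99):
    """Scramble the logo with a logistic-map keystream, then XOR with the binary feature image."""
    key = (logistic_sequence(x0, r, logo_bits.size) > 0.5).astype(np.uint8)
    scrambled = logo_bits.flatten() ^ key
    return (feature_bits.flatten() ^ scrambled).reshape(logo_bits.shape)

rng = np.random.default_rng(1)
features = rng.integers(0, 2, (32, 32), dtype=np.uint8)   # stands in for the PCET-derived feature image
logo = rng.integers(0, 2, (32, 32), dtype=np.uint8)
zw = zero_watermark(features, logo)
# verification side: XOR the zero-watermark with the feature image, then unscramble to recover the logo
key = (logistic_sequence(0.7321, 3.99, logo.size) > 0.5).astype(np.uint8).reshape(32, 32)
recovered = (zw ^ features) ^ key
assert np.array_equal(recovered, logo)
```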
Training Strategies and Data Augmentations in CNN-based DeepFake Video Detection The fast and continuous growth in number and quality of deepfake videos calls for the development of reliable detection systems capable of automatically warning users on social media and on the Internet about the potential untruthfulness of such contents. While algorithms, software, and smartphone apps are getting better every day in generating manipulated videos and swapping faces, the accuracy of automated systems for face forgery detection in videos is still quite limited and generally biased toward the dataset used to design and train a specific detection system. In this paper we analyze how different training strategies and data augmentation techniques affect CNN-based deepfake detectors when training and testing on the same dataset or across different datasets.
Are we ready for autonomous driving? The KITTI vision benchmark suite Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.
Triangle: Engineering a 2D Quality Mesh Generator and Delaunay Triangulator This paper discusses many of the key implementation decisions, including the choice of triangulation algorithms and data structures, the steps taken to create and refine a mesh, a number of issues that arise in Ruppert's algorithm, and the use of exact arithmetic.
Recall-Oriented Evaluation for Information Retrieval Systems. In a recall context, the user is interested in retrieving all relevant documents rather than retrieving a few that are at the top of the results list. In this article we propose ROM (Recall Oriented Measure) which takes into account the main elements that should be considered in evaluating information retrieval systems while ordering them in a way explicitly adapted to a recall context.
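The abstract does not spell ROM out, but the quantity any recall-oriented measure starts from is plain recall (optionally truncated at rank k). A minimal sketch with made-up document IDs; the cut-off k and the document lists are illustrative assumptions.

```python
def recall_at_k(retrieved, relevant, k=None):
    """Fraction of all relevant documents that appear in the (top-k of the) ranked result list."""
    if not relevant:
        return 0.0
    top = retrieved if k is None else retrieved[:k]
    return len(set(top) & set(relevant)) / len(relevant)

ranked = ["d3", "d7", "d1", "d9", "d2"]
relevant = {"d1", "d2", "d4"}
print(recall_at_k(ranked, relevant))        # 2/3: d1 and d2 are retrieved, d4 is missed
print(recall_at_k(ranked, relevant, k=3))   # 1/3 within the top 3
```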
An FES-assisted training strategy combined with impedance control for a lower limb rehabilitation robot In order to investigate the feasibility of integrating functional electrical stimulation (FES) with robot-based rehabilitation training, this paper proposes an FES-assisted training strategy combined with impedance control for our self-made exoskeleton lower limb rehabilitation robot. This control strategy is carried out in a leg press task. Through impedance control, an active compliance of the robot is established, and the patient's voluntary effort to accomplish the task is inspired. During the training process, the patient's related muscles are applied with FES which provides an extra assistance to the patient. The intensity of the FES is properly chosen aiming to induce a desired active torque which is proportional to the voluntary effort of the patient. This kind of enhancement serves as a positive feedback which reminds the patient of the correct attempt to fulfill the desired motion. FES control is conducted by a combination of neural network-based feedforward controller and a PD feedback controller. The feasibility of this control strategy has been verified in Matlab.
TypeSQL: Knowledge-Based Type-Aware Neural Text-to-SQL Generation. Interacting with relational databases through natural language helps users of any background easily query and analyze a vast amount of data. This requires a system that understands users' questions and converts them to SQL queries automatically. In this paper we present a novel approach, TypeSQL, which views this problem as a slot filling task. Additionally, TypeSQL utilizes type information to better understand rare entities and numbers in natural language questions. We test this idea on the WikiSQL dataset and outperform the prior state-of-the-art by 5.5% in much less time. We also show that accessing the content of databases can significantly improve the performance when users' queries are not well-formed. TypeSQL gets 82.6% accuracy, a 17.5% absolute improvement compared to the previous content-sensitive model.
Energy harvesting algorithm considering max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with poor energy always have a bad effect on the data rate or max flow. These nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we assume an energy harvesting WSN environment to investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy harvesting WSN environments, sensor nodes can obtain energy replenishment by using MCs or by collecting energy from nature by themselves. In our research, we use MCs to improve the energy of the sensor nodes by performing multiple rounds of unified scheduling, and finally increase the max flow at the sinks. Firstly, we model this problem as a Linear Program (LP) to search for the max flow in a round of charging scheduling and prove that the problem is NP-hard. In order to solve the problem, we propose a heuristic approach: deploying MCs in units of paths, with the lowest-energy nodes given priority. To reduce the energy consumption of MCs and increase the charging efficiency, we also take the optimization of MCs’ moving distance into consideration. Finally, we extend the method to multiple rounds of scheduling, called BottleNeck. Simulation results show that BottleNeck performs well at increasing max flow.
1.054691
0.04
0.04
0.04
0.04
0.04
0.009333
0.000667
0.000021
0
0
0
0
0
A study on the use of statistical tests for experimentation with neural networks: Analysis of parametric test conditions and non-parametric tests In this paper, we focus on the experimental analysis of the performance of artificial neural networks with the use of statistical tests on the classification task. Particularly, we have studied whether the sample of results from multiple trials obtained by conventional artificial neural networks and support vector machines checks the necessary conditions for being analyzed through parametric tests. The study is conducted by considering three possibilities on classification experiments: random variation in the selection of test data, the selection of training data and internal randomness in the learning algorithm. The results obtained state that the fulfillment of these conditions is problem-dependent and indefinite, which justifies the need for non-parametric statistics in the experimental analysis.
A Multi-Layered Immune System For Graph Planarization Problem This paper presents a new multi-layered artificial immune system architecture using the ideas generated from the biological immune system for solving combinatorial optimization problems. The proposed methodology is composed of five layers. After expressing the problem in a suitable representation in the first layer, the search space and the features of the problem are estimated and extracted in the second and third layers, respectively. Through taking advantage of the minimized search space from estimation and the heuristic information from extraction, the antibodies (or solutions) are evolved in the fourth layer and finally the fittest antibody is exported. In order to demonstrate the efficiency of the proposed system, the graph planarization problem is tested. Simulation results based on several benchmark instances show that the proposed algorithm performs better than traditional algorithms.
An efficient two phase approach for solving reliability-redundancy allocation problem using artificial bee colony technique The main goal of the present paper is to present a two phase approach for solving the reliability-redundancy allocation problems (RRAP) with nonlinear resource constraints. In the first phase of the proposed approach, an algorithm based on artificial bee colony (ABC) is developed to solve the allocation problem while in the second phase an improvement of the solution as obtained by this algorithm is made. Four benchmark problems in the reliability-redundancy allocation and two reliability optimization problems have been taken to demonstrate the approach and it is shown by comparison that the solutions by the new proposed approach are better than the solutions available in the literature.
Ensemble of many-objective evolutionary algorithms for many-objective problems Abstract The performance of most existing multiobjective evolutionary algorithms deteriorates severely in the face of many-objective problems. Many-objective optimization has been gaining increasing attention, and many new many-objective evolutionary algorithms (MaOEA) have recently been proposed. On the one hand, solution sets with totally different characteristics are obtained by different MaOEAs, since different MaOEAs have different convergence-diversity tradeoff relations. This may suggest the potential usefulness of ensemble approaches of different MaOEAs. On the other hand, the performance of MaOEAs may vary greatly from one problem to another, so that choosing the most appropriate MaOEA is often a non-trivial task. Hence, an MaOEA that performs generally well on a set of problems is often desirable. This study proposes an ensemble of MaOEAs (EMaOEA) for many-objective problems. When solving a single problem, EMaOEA invests its computational budget to its constituent MaOEAs, runs them in parallel and maintains interactions between them by a simple information sharing scheme. Experimental results on 80 benchmark problems have shown that, by integrating the advantages of different MaOEAs into one framework, EMaOEA not only provides practitioners a unified framework for solving their problem set, but also may lead to better performance than a single MaOEA.
Improving Dendritic Neuron Model With Dynamic Scale-Free Network-Based Differential Evolution Some recent research reports that a dendritic neuron model (DNM) can achieve better performance than traditional artificial neuron networks (ANNs) on classification, prediction, and other problems when its parameters are well-tuned by a learning algorithm. However, the back-propagation algorithm (BP), as a mostly used learning algorithm, intrinsically suffers from defects of slow convergence and e...
Recent Advances in Evolutionary Computation Evolutionary computation has experienced a tremendous growth in the last decade in both theoretical analyses and industrial applications. Its scope has evolved beyond its original meaning of “biological evolution” toward a wide variety of nature inspired computational algorithms and techniques, including evolutionary, neural, ecological, social and economical computation, etc., in a unified framework. Many research topics in evolutionary computation nowadays are not necessarily “evolutionary”. This paper provides an overview of some recent advances in evolutionary computation that have been made in CERCIA at the University of Birmingham, UK. It covers a wide range of topics in optimization, learning and design using evolutionary approaches and techniques, and theoretical results in the computational time complexity of evolutionary algorithms. Some issues related to future development of evolutionary computation are also discussed.
Evolutionary computation: comments on the history and current state Evolutionary computation has started to receive significant attention during the last decade, although the origins can be traced back to the late 1950's. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) (with links to genetic programming (GP) and classifier systems (CS)), evolution strategies (ES), and evolutionary programming (EP) by analysis and comparison of their most important constituents (i.e. representations, variation operators, reproduction, and selection mechanism). Finally, we give a brief overview on the manifold of application domains, although this necessarily must remain incomplete
Robust Indoor Positioning Provided by Real-Time RSSI Values in Unmodified WLAN Networks The positioning methods based on received signal strength (RSS) measurements, link the RSS values to the position of the mobile station(MS) to be located. Their accuracy depends on the suitability of the propagation models used for the actual propagation conditions. In indoor wireless networks, these propagation conditions are very difficult to predict due to the unwieldy and dynamic nature of the RSS. In this paper, we present a novel method which dynamically estimates the propagation models that best fit the propagation environments, by using only RSS measurements obtained in real time. This method is based on maximizing compatibility of the MS to access points (AP) distance estimates. Once the propagation models are estimated in real time, it is possible to accurately determine the distance between the MS and each AP. By means of these distance estimates, the location of the MS can be obtained by trilateration. The method proposed coupled with simulations and measurements in a real indoor environment, demonstrates its feasibility and suitability, since it outperforms conventional RSS-based indoor location methods without using any radio map information nor a calibration stage.
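A minimal sketch of the RSS-to-distance and trilateration steps described above, assuming a log-distance path-loss model with known parameters; the paper's actual contribution is estimating those propagation parameters online, which is not reproduced here. The anchor positions, reference RSSI at 1 m, and path-loss exponent are illustrative assumptions.

```python
import numpy as np

def rssi_to_distance(rssi, rssi_at_1m=-40.0, path_loss_exp=2.5):
    """Log-distance model: RSSI(d) = RSSI(1m) - 10*n*log10(d)  =>  d = 10^((RSSI(1m) - RSSI) / (10*n))."""
    return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_exp))

def trilaterate(anchors, distances):
    """Linearized least squares: subtract the last anchor's circle equation from the others."""
    anchors = np.asarray(anchors, float)
    d = np.asarray(distances, float)
    A = 2 * (anchors[:-1] - anchors[-1])
    b = (d[-1]**2 - d[:-1]**2
         + (anchors[:-1]**2).sum(1) - (anchors[-1]**2).sum())
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

aps = [(0, 0), (10, 0), (0, 10), (10, 10)]          # assumed access-point positions in meters
true = np.array([3.0, 4.0])
rssi = [-40.0 - 25 * np.log10(np.linalg.norm(true - np.array(ap))) for ap in aps]  # n = 2.5, noise-free
est = trilaterate(aps, [rssi_to_distance(r) for r in rssi])
print(est)   # close to (3, 4) in this noise-free toy setting
```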
Energy-Optimized Partial Computation Offloading in Mobile-Edge Computing With Genetic Simulated-Annealing-Based Particle Swarm Optimization Smart mobile devices (SMDs) can meet users' high expectations by executing computational intensive applications but they only have limited resources, including CPU, memory, battery power, and wireless medium. To tackle this limitation, partial computation offloading can be used as a promising method to schedule some tasks of applications from resource-limited SMDs to high-performance edge servers. However, it brings communication overhead issues caused by limited bandwidth and inevitably increases the latency of tasks offloaded to edge servers. Therefore, it is highly challenging to achieve a balance between high-resource consumption in SMDs and high communication cost for providing energy-efficient and latency-low services to users. This work proposes a partial computation offloading method to minimize the total energy consumed by SMDs and edge servers by jointly optimizing the offloading ratio of tasks, CPU speeds of SMDs, allocated bandwidth of available channels, and transmission power of each SMD in each time slot. It jointly considers the execution time of tasks performed in SMDs and edge servers, and transmission time of data. It also jointly considers latency limits, CPU speeds, transmission power limits, available energy of SMDs, and the maximum number of CPU cycles and memories in edge servers. Considering these factors, a nonlinear constrained optimization problem is formulated and solved by a novel hybrid metaheuristic algorithm named genetic simulated annealing-based particle swarm optimization (GSP) to produce a close-to-optimal solution. GSP achieves joint optimization of computation offloading between a cloud data center and the edge, and resource allocation in the data center. Real-life data-based experimental results prove that it achieves lower energy consumption in less convergence time than its three typical peers.
Computer intrusion detection through EWMA for autocorrelated and uncorrelated data Reliability and quality of service from information systems has been threatened by cyber intrusions. To protect information systems from intrusions and thus assure reliability and quality of service, it is highly desirable to develop techniques that detect intrusions. Many intrusions manifest in anomalous changes in intensity of events occurring in information systems. In this study, we apply, tes...
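A minimal sketch of an EWMA control chart for uncorrelated observations, the simpler of the two cases the paper studies: the smoothed statistic z_t = λx_t + (1−λ)z_{t−1} raises an alarm when it leaves μ₀ ± L·σ_z(t). The event-intensity values and chart parameters below are illustrative assumptions; the autocorrelated case (EWMA used as a one-step predictor with residual monitoring) is not shown.

```python
import numpy as np

def ewma_alarms(x, mu0, sigma0, lam=0.2, L=3.0):
    """EWMA chart: z_t = lam*x_t + (1-lam)*z_{t-1}; alarm when z_t leaves mu0 +/- L*sigma_z(t)."""
    z, alarms = mu0, []
    for t, xt in enumerate(x, start=1):
        z = lam * xt + (1 - lam) * z
        sigma_z = sigma0 * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        if abs(z - mu0) > L * sigma_z:
            alarms.append(t)
    return alarms

rng = np.random.default_rng(0)
normal = rng.normal(10, 2, 200)   # in-control event intensity
attack = rng.normal(14, 2, 50)    # shifted intensity during a simulated intrusion
print(ewma_alarms(np.concatenate([normal, attack]), mu0=10, sigma0=2))  # alarms appear shortly after t = 200
```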
Teaching-Learning-Based Optimization: An optimization method for continuous non-linear large scale problems An efficient optimization method called 'Teaching-Learning-Based Optimization (TLBO)' is proposed in this paper for large scale non-linear optimization problems for finding the global solutions. The proposed method is based on the effect of the influence of a teacher on the output of learners in a class. The basic philosophy of the method is explained in detail. The effectiveness of the method is tested on many benchmark problems with different characteristics and the results are compared with other population based methods.
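A textbook-style sketch of the two TLBO phases on a toy sphere function: the teacher phase pulls the class toward the best solution relative to the class mean, and the learner phase moves each learner toward the better member of a random pair. This is not the authors' exact implementation; the population size, iteration budget, bounds, and test function are illustrative assumptions.

```python
import numpy as np

def tlbo(obj, bounds, pop_size=20, iters=200, seed=0):
    """Teaching-Learning-Based Optimization with greedy acceptance in both phases."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    X = rng.uniform(lo, hi, (pop_size, lo.size))
    f = np.apply_along_axis(obj, 1, X)
    for _ in range(iters):
        # teacher phase: X_new = X + r * (teacher - TF * mean)
        teacher = X[f.argmin()]
        TF = rng.integers(1, 3)                       # teaching factor, 1 or 2
        Xn = np.clip(X + rng.random(X.shape) * (teacher - TF * X.mean(0)), lo, hi)
        fn = np.apply_along_axis(obj, 1, Xn)
        better = fn < f
        X[better], f[better] = Xn[better], fn[better]
        # learner phase: move toward the better of (learner, random partner)
        partners = rng.permutation(pop_size)
        step = np.where((f < f[partners])[:, None], X - X[partners], X[partners] - X)
        Xn = np.clip(X + rng.random(X.shape) * step, lo, hi)
        fn = np.apply_along_axis(obj, 1, Xn)
        better = fn < f
        X[better], f[better] = Xn[better], fn[better]
    return X[f.argmin()], f.min()

best_x, best_f = tlbo(lambda x: np.sum(x**2), ([-5] * 5, [5] * 5))
print(best_x, best_f)    # near the origin for the sphere function
```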
Understanding Taxi Service Strategies From Taxi GPS Traces Taxi service strategies, as the crowd intelligence of massive taxi drivers, are hidden in their historical time-stamped GPS traces. Mining GPS traces to understand the service strategies of skilled taxi drivers can benefit the drivers themselves, passengers, and city planners in a number of ways. This paper intends to uncover the efficient and inefficient taxi service strategies based on a large-scale GPS historical database of approximately 7600 taxis over one year in a city in China. First, we separate the GPS traces of individual taxi drivers and link them with the revenue generated. Second, we investigate the taxi service strategies from three perspectives, namely, passenger-searching strategies, passenger-delivery strategies, and service-region preference. Finally, we represent the taxi service strategies with a feature matrix and evaluate the correlation between service strategies and revenue, informing which strategies are efficient or inefficient. We predict the revenue of taxi drivers based on their strategies and achieve a prediction residual of as little as 2.35 RMB/h, which demonstrates that the extracted taxi service strategies with our proposed approach well characterize the driving behavior and performance of taxi drivers.
Adaptive fuzzy tracking control for switched uncertain strict-feedback nonlinear systems. •Adaptive tracking control for switched strict-feedback nonlinear systems is proposed.•The generalized fuzzy hyperbolic model is used to approximate nonlinear functions.•The designed controller has fewer design parameters comparing with existing methods.
Energy harvesting algorithm considering max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with poor energy always have a bad effect on the data rate or max flow. These nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we assume an energy harvesting WSN environment to investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy harvesting WSN environments, sensor nodes can obtain energy replenishment by using MCs or by collecting energy from nature by themselves. In our research, we use MCs to improve the energy of the sensor nodes by performing multiple rounds of unified scheduling, and finally increase the max flow at the sinks. Firstly, we model this problem as a Linear Program (LP) to search for the max flow in a round of charging scheduling and prove that the problem is NP-hard. In order to solve the problem, we propose a heuristic approach: deploying MCs in units of paths, with the lowest-energy nodes given priority. To reduce the energy consumption of MCs and increase the charging efficiency, we also take the optimization of MCs’ moving distance into consideration. Finally, we extend the method to multiple rounds of scheduling, called BottleNeck. Simulation results show that BottleNeck performs well at increasing max flow.
1.2
0.2
0.2
0.2
0.2
0.1
0.033333
0
0
0
0
0
0
0
A novel optimization booster algorithm. •Introducing a new meta-heuristic called Optimization Booster Algorithm (i.e. OBA).•OBA is inspired by human intelligent behavior in exchange markets.•Boosted algorithms often provided better solutions, while consuming less time.•OBA has improved feasibility, optimality and efficiency of the final results.
Multi-stage genetic programming: A new strategy to nonlinear system modeling This paper presents a new multi-stage genetic programming (MSGP) strategy for modeling nonlinear systems. The proposed strategy is based on incorporating the individual effect of predictor variables and the interactions among them to provide more accurate simulations. According to the MSGP strategy, an efficient formulation for a problem comprises different terms. In the first stage of the MSGP-based analysis, the output variable is formulated in terms of an influencing variable. Thereafter, the error between the actual and the predicted value is formulated in terms of a new variable. Finally, the interaction term is derived by formulating the difference between the actual values and the values predicted by the individually developed terms. The capabilities of MSGP are illustrated by applying it to the formulation of different complex engineering problems. The problems analyzed herein include the following: (i) simulation of pH neutralization process, (ii) prediction of surface roughness in end milling, and (iii) classification of soil liquefaction conditions. The validity of the proposed strategy is confirmed by applying the derived models to the parts of the experimental results that were not included in the analyses. Further, the external validation of the models is verified using several statistical criteria recommended by other researchers. The MSGP-based solutions are capable of effectively simulating the nonlinear behavior of the investigated systems. The results of MSGP are found to be more accurate than those of standard GP and artificial neural network-based models.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probabilistically. Because there are no crossover rate and mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results over the conventional one and other methods.
Hybrid Whale Optimization Algorithm with simulated annealing for feature selection. •Four hybrid feature selection methods for classification task are proposed.•Our hybrid method combines Whale Optimization Algorithm with simulated annealing.•Eighteen UCI datasets were used in the experiments.•Our approaches result a higher accuracy by using less number of features.
Solving the dynamic weapon target assignment problem by an improved artificial bee colony algorithm with heuristic factor initialization. •Put forward an improved artificial bee colony algorithm based on ranking selection and elite guidance.•Put forward 4 rule-based heuristic factors: Wc, Rc, TRc and TRcL.•The heuristic factors are used in population initialization to improve the quality of the initial solutions in DWTA solving.•The heuristic factor initialization method is combined with the improved ABC algorithm to solve the DWTA problem.
A monarch butterfly optimization-based neural network simulator for prediction of siro-spun yarn tenacity Yarn tenacity directly affects the winding and knitting efficiency as well as warp and weft breakages during the weaving process and is therefore considered the most important parameter to be controlled during the yarn spinning process. Yarn tenacity is dependent on fiber properties and process parameters. Exploring the relationship between fiber properties, process parameters and yarn tenacity is very important to optimize the selection of raw materials and improve yarn quality. In this study, an efficient monarch butterfly optimization-based neural network simulator called MBONN was developed to predict the tenacity of siro-spun yarns from some process parameters and fiber properties. To this end, an experimental dataset was obtained with fiber fineness, yarn twist factor, yarn linear density and strand spacing as the input variables and yarn tenacity as the output parameter. In the proposed MBONN, a monarch butterfly optimization algorithm is applied as a global search method to evolve weights of a multilayer perceptron (MLP) neural network. The prediction accuracy of the MBONN was compared with that of an MLP neural network trained with the back-propagation algorithm, an MLP neural network trained with genetic algorithms, and a linear regression model. The results indicated that the prediction accuracy of the proposed MBONN is statistically superior to that of other models. The effect of fiber fineness, yarn linear density, twist factor and strand spacing on yarn tenacity was investigated using the proposed MBONN. Additionally, the observed trends in variation of yarn tenacity with fiber and process parameters were discussed with reference to the yarn internal structure. It was established that higher migration parameters result in increasing the siro-spun yarn tenacity. It was found that the yarns with higher migration parameters benefit from a more coherent self-locking structure which severely restricts fiber slippage, thereby increasing the yarn tenacity.
An improved artificial bee colony algorithm for balancing local and global search behaviors in continuous optimization The artificial bee colony, ABC for short, algorithm is a population-based iterative optimization algorithm proposed for solving optimization problems with a continuously-structured solution space. Although ABC has been equipped with a powerful global search capability, this capability can cause poor intensification on found solutions and a slow convergence problem. These issues originate from the search equations proposed for employed and onlooker bees, which update only one decision variable at each trial. In order to address these drawbacks of the basic ABC algorithm, we introduce six search equations for the algorithm; three of them are used by employed bees and the rest are used by onlooker bees. Moreover, each onlooker agent can modify three dimensions or decision variables of a food source at each attempt, where each food source represents a possible solution of the optimization problem. The proposed variant of the ABC algorithm is applied to solve basic, CEC2005, CEC2014 and CEC2015 benchmark functions. The obtained results are compared with the results of state-of-the-art variants of the basic ABC algorithm, the artificial algae algorithm, the particle swarm optimization algorithm and its variants, the gravitational search algorithm and its variants, and others. Comparisons are conducted to measure the solution quality, robustness and convergence characteristics of the algorithms. The obtained results and comparisons experimentally validate the proposed ABC variant and its success in solving the continuous optimization problems dealt with in the study.
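For orientation, the basic ABC neighborhood move is v_ij = x_ij + phi_ij (x_ij - x_kj) applied to a single dimension j; the sketch below shows the same move applied to three randomly chosen dimensions at once, which is the onlooker-bee change highlighted in the abstract. The six specific search equations of the paper are not reproduced here, and the data are made up.

```python
# Generic onlooker-bee update that perturbs three decision variables at once,
# instead of the single dimension changed by the basic ABC search equation.
import numpy as np

def onlooker_update(food_sources, i, rng):
    """Candidate for food source i obtained by modifying three random dimensions."""
    n, dim = food_sources.shape
    k = rng.choice([j for j in range(n) if j != i])          # a different food source
    candidate = food_sources[i].copy()
    dims = rng.choice(dim, size=min(3, dim), replace=False)  # three dimensions
    phi = rng.uniform(-1.0, 1.0, size=dims.size)
    candidate[dims] = food_sources[i, dims] + phi * (food_sources[i, dims] - food_sources[k, dims])
    return candidate

rng = np.random.default_rng(0)
foods = rng.uniform(-5.0, 5.0, size=(10, 4))                 # 10 food sources, 4 variables
print(onlooker_update(foods, 0, rng))
```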
On the security of public key protocols Recently the use of public key encryption to provide secure network communication has received considerable attention. Such public key systems are usually effective against passive eavesdroppers, who merely tap the lines and try to decipher the message. It has been pointed out, however, that an improperly designed protocol could be vulnerable to an active saboteur, one who may impersonate another user or alter the message being transmitted. Several models are formulated in which the security of protocols can be discussed precisely. Algorithms and characterizations that can be used to determine protocol security in these models are given.
QoE-Driven Edge Caching in Vehicle Networks Based on Deep Reinforcement Learning The Internet of vehicles (IoV) is a large information interaction network that collects information on vehicles, roads and pedestrians. One of the important uses of vehicle networks is to meet the entertainment needs of driving users through communication between vehicles and roadside units (RSUs). Due to the limited storage space of RSUs, determining the content cached in each RSU is a key challenge. With the development of 5G and video editing technology, short video systems have become increasingly popular. Current widely used cache update methods, such as partial file precaching and content popularity- and user interest-based determination, are inefficient for such systems. To solve this problem, this paper proposes a QoE-driven edge caching method for the IoV based on deep reinforcement learning. First, a class-based user interest model is established. Compared with the traditional file popularity- and user interest distribution-based cache update methods, the proposed method is more suitable for systems with a large number of small files. Second, a quality of experience (QoE)-driven RSU cache model is established based on the proposed class-based user interest model. Third, a deep reinforcement learning method is designed to address the QoE-driven RSU cache update issue effectively. The experimental results verify the effectiveness of the proposed algorithm.
Image information and visual quality Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Image QA algorithms generally interpret image quality as fidelity or similarity with a "reference" or "perfect" image in some perceptual space. Such "full-reference" QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by signal fidelity measures. In this paper, we approach the image QA problem as an information fidelity problem. Specifically, we propose to quantify the loss of image information to the distortion process and explore the relationship between image information and visual quality. QA systems are invariably involved with judging the visual quality of "natural" images and videos that are meant for "human consumption." Researchers have developed sophisticated models to capture the statistics of such natural signals. Using these models, we previously presented an information fidelity criterion for image QA that related image quality with the amount of information shared between a reference and a distorted image. In this paper, we propose an image information measure that quantifies the information that is present in the reference image and how much of this reference information can be extracted from the distorted image. Combining these two quantities, we propose a visual information fidelity measure for image QA. We validate the performance of our algorithm with an extensive subjective study involving 779 images and show that our method outperforms recent state-of-the-art image QA algorithms by a sizeable margin in our simulations. The code and the data from the subjective study are available at the LIVE website.
Stabilization of switched continuous-time systems with all modes unstable via dwell time switching Stabilization of switched systems composed fully of unstable subsystems is one of the most challenging problems in the field of switched systems. In this brief paper, a sufficient condition ensuring the asymptotic stability of switched continuous-time systems with all modes unstable is proposed. The main idea is to exploit the stabilization property of switching behaviors to compensate the state divergence made by unstable modes. Then, by using a discretized Lyapunov function approach, a computable sufficient condition for switched linear systems is proposed in the framework of dwell time; it is shown that the time intervals between two successive switching instants are required to be confined by a pair of upper and lower bounds to guarantee the asymptotic stability. Based on derived results, an algorithm is proposed to compute the stability region of admissible dwell time. A numerical example is proposed to illustrate our approach.
Software-Defined Networking: A Comprehensive Survey The Internet has led to the creation of a digital society, where (almost) everything is connected and is accessible from anywhere. However, despite their widespread adoption, traditional IP networks are complex and very hard to manage. It is both difficult to configure the network according to predefined policies, and to reconfigure it to respond to faults, load, and changes. To make matters even more difficult, current networks are also vertically integrated: the control and data planes are bundled together. Software-defined networking (SDN) is an emerging paradigm that promises to change this state of affairs, by breaking vertical integration, separating the network's control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network. The separation of concerns, introduced between the definition of network policies, their implementation in switching hardware, and the forwarding of traffic, is key to the desired flexibility: by breaking the network control problem into tractable pieces, SDN makes it easier to create and introduce new abstractions in networking, simplifying network management and facilitating network evolution. In this paper, we present a comprehensive survey on SDN. We start by introducing the motivation for SDN, explain its main concepts and how it differs from traditional networking, its roots, and the standardization activities regarding this novel paradigm. Next, we present the key building blocks of an SDN infrastructure using a bottom-up, layered approach. We provide an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications. We also look at cross-layer problems such as debugging and troubleshooting. In an effort to anticipate the future evolution of this new paradigm, we discuss the main ongoing research efforts and challenges of SDN. In particular, we address the design of switches and control platforms - with a focus on aspects such as resiliency, scalability, performance, security, and dependability - as well as new opportunities for carrier transport networks and cloud providers. Last but not least, we analyze the position of SDN as a key enabler of a software-defined environment.
An ID-Based Linearly Homomorphic Signature Scheme and Its Application in Blockchain. Identity-based cryptosystems mean that public keys can be directly derived from user identifiers, such as telephone numbers, email addresses, and social insurance number, and so on. So they can simplify key management procedures of certificate-based public key infrastructures and can be used to realize authentication in blockchain. Linearly homomorphic signature schemes allow to perform linear computations on authenticated data. And the correctness of the computation can be publicly verified. Although a series of homomorphic signature schemes have been designed recently, there are few homomorphic signature schemes designed in identity-based cryptography. In this paper, we construct a new ID-based linear homomorphic signature scheme, which avoids the shortcomings of the use of public-key certificates. The scheme is proved secure against existential forgery on adaptively chosen message and ID attack under the random oracle model. The ID-based linearly homomorphic signature schemes can be applied in e-business and cloud computing. Finally, we show how to apply it to realize authentication in blockchain.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
Digital Image Watermarking Techniques: A Review Digital image authentication is an extremely significant concern for the digital revolution, as it is easy to tamper with any image. In the last few decades, it has been an urgent concern for researchers to ensure the authenticity of digital images. Based on the desired applications, several suitable watermarking techniques have been developed to mitigate this concern. However, it is tough to achieve a watermarking system that is simultaneously robust and secure. This paper gives details of standard watermarking system frameworks and lists some standard requirements that are used in designing watermarking techniques for several distinct applications. The current trends of digital image watermarking techniques are also reviewed in order to find the state-of-the-art methods and their limitations. Some conventional attacks are discussed, and future research directions are given.
Geometric attacks on image watermarking systems Synchronization errors can lead to significant performance loss in image watermarking methods, as the geometric attacks in the Stirmark benchmark software show. The authors describe the most common types of geometric attacks and survey proposed solutions.
Genetic Optimization Of Radial Basis Probabilistic Neural Networks This paper discusses using genetic algorithms (GA) to optimize the structure of radial basis probabilistic neural networks (RBPNN), including how to select the hidden centers of the first hidden layer and how to determine the controlling parameter of the Gaussian kernel functions. In the process of constructing the genetic algorithm, a novel encoding method is proposed for optimizing the RBPNN structure. This encoding method can not only make the selected hidden centers sufficiently reflect the key distribution characteristics in the space of the training sample set and reduce the number of hidden centers to as few as possible, but also simultaneously determine the optimum controlling parameters of the Gaussian kernel functions matching the selected hidden centers. Additionally, we also propose a new fitness function so as to make the designed RBPNN as simple as possible in its network structure without losing network performance. Finally, we take two benchmark problems, discriminating the two-spiral data and classifying the iris data, as examples to test and evaluate the designed GA. The experimental results illustrate that the designed GA can significantly reduce the required number of hidden centers, compared with the recursive orthogonal least square algorithm (ROLSA) and the modified K-means algorithm (MKA). In particular, by means of statistical experiments it was proved that the RBPNN optimized by the designed GA still has a better generalization performance with respect to the ones obtained by the ROLSA and the MKA, in spite of the network scale having been greatly reduced. Additionally, our experimental results also demonstrate that the designed GA is suitable for optimizing radial basis function neural networks (RBFNN).
Current status and key issues in image steganography: A survey. Steganography and steganalysis are the prominent research fields in information hiding paradigm. Steganography is the science of invisible communication while steganalysis is the detection of steganography. Steganography means “covered writing” that hides the existence of the message itself. Digital steganography provides potential for private and secure communication that has become the necessity of most of the applications in today’s world. Various multimedia carriers such as audio, text, video, image can act as cover media to carry secret information. In this paper, we have focused only on image steganography. This article provides a review of fundamental concepts, evaluation measures and security aspects of steganography system, various spatial and transform domain embedding schemes. In addition, image quality metrics that can be used for evaluation of stego images and cover selection measures that provide additional security to embedding scheme are also highlighted. Current research trends and directions to improve on existing methods are suggested.
Hybrid local and global descriptor enhanced with colour information. Feature extraction is one of the most important steps in computer vision tasks such as object recognition, image retrieval and image classification. It describes an image by a set of descriptors where the best one gives a high quality description and a low computation. In this study, the authors propose a novel descriptor called histogram of local and global features using speeded up robust featur...
Secure visual cryptography for medical image using modified cuckoo search. Optimal secure visual cryptography for brain MRI medical images is proposed in this paper. Initially, the brain MRI images are selected and then the discrete wavelet transform is applied to the brain MRI image to partition the image into blocks. Then a Gaussian-based cuckoo search algorithm is utilized to select the optimal position for every block. Next the proposed technique creates the dual shares from the secret image. Then the secret shares are embedded in the corresponding positions of the blocks. After embedding, the extraction operation is carried out. Here the visual cryptographic design is used for the purpose of image authentication and verification. The extracted secret image has dual shares, based on which the receiver views the input image. The authentication and verification of the medical image are assisted with the help of a target database. All the secret images are registered previously in the target database. The performance of the proposed method is estimated by Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE) and normalized correlation. The implementation is done on the MATLAB platform.
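The evaluation metrics named in the abstract have standard definitions, sketched below for 8-bit images; the test images here are random placeholders, not MRI data.

```python
# Standard PSNR, MSE and normalized correlation between an original image a
# and a processed image b (8-bit range assumed for the PSNR peak value).
import numpy as np

def mse(a, b):
    a, b = a.astype(np.float64), b.astype(np.float64)
    return np.mean((a - b) ** 2)

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def normalized_correlation(a, b):
    a, b = a.astype(np.float64), b.astype(np.float64)
    return np.sum(a * b) / np.sum(a * a)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.integers(-5, 6, size=img.shape), 0, 255)
print(psnr(img, noisy), normalized_correlation(img, noisy))
```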
Digital watermarking techniques for image security: a review The use of multimedia technology is increasing day by day, and providing data to authorized users while protecting secret information from unauthorized use is highly difficult and involves a complex process. By using a watermarking technique, only an authorized user can use the data. Digital watermarking is a widely used technology for the protection of digital data. Digital watermarking deals with the embedding of secret data into actual information. Digital watermarking techniques are classified into three major categories, based on the domain, the type of document (text, image, music or video) and human perception. The performance of watermarked images is analysed using peak signal to noise ratio, mean square error and bit error rate. Watermarking of images has been researched profoundly for its technical and industrial feasibility in all media applications such as copyright protection, medical reports (MRI scan and X-ray), annotation and privacy control. This paper reviews watermarking techniques and their merits and demerits.
A New Efficient Medical Image Cipher Based On Hybrid Chaotic Map And DNA Code In this paper, we propose a novel medical image encryption algorithm based on a hybrid model of deoxyribonucleic acid (DNA) masking, a Secure Hash Algorithm SHA-2 and a new hybrid chaotic map. Our study uses DNA sequences and operations and the chaotic hybrid map to strengthen the cryptosystem. The significant advantages of this approach consist in improving the information entropy which is the most important feature of randomness, resisting against various typical attacks and getting good experimental results. The theoretical analysis and experimental results show that the algorithm improves the encoding efficiency, enhances the security of the ciphertext, has a large key space and a high key sensitivity, and is able to resist against the statistical and exhaustive attacks.
On Multi-Access Edge Computing: A Survey of the Emerging 5G Network Edge Cloud Architecture and Orchestration. Multi-access edge computing (MEC) is an emerging ecosystem, which aims at converging telecommunication and IT services, providing a cloud computing platform at the edge of the radio access network. MEC offers storage and computational resources at the edge, reducing latency for mobile end users and utilizing more efficiently the mobile backhaul and core networks. This paper introduces a survey on ...
An effective implementation of the Lin–Kernighan traveling salesman heuristic This paper describes an implementation of the Lin–Kernighan heuristic, one of the most successful methods for generating optimal or near-optimal solutions for the symmetric traveling salesman problem (TSP). Computational tests show that the implementation is highly effective. It has found optimal solutions for all solved problem instances we have been able to obtain, including a 13,509-city problem (the largest non-trivial problem instance solved to optimality today).
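Lin–Kernighan generalizes the classic 2-opt and 3-opt edge-exchange moves with variable-depth search; as a far simpler relative of the implementation described above, the sketch below runs a plain 2-opt improvement pass on random points, just to show the kind of move involved.

```python
# Plain 2-opt local search on a symmetric Euclidean TSP instance: repeatedly
# reverse a tour segment whenever doing so shortens the tour.
import math, random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]   # reverse one segment
                if tour_length(candidate, pts) < tour_length(tour, pts):
                    tour, improved = candidate, True
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(12)]
print(tour_length(two_opt(list(range(12)), pts), pts))
```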
Exoskeletons for human power augmentation The first load-bearing and energetically autonomous exoskeleton, called the Berkeley Lower Extremity Exoskeleton (BLEEX) walks at the average speed of two miles per hour while carrying 75 pounds of load. The project, funded in 2000 by the Defense Advanced Research Project Agency (DARPA) tackled four fundamental technologies: the exoskeleton architectural design, a control algorithm, a body LAN to host the control algorithm, and an on-board power unit to power the actuators, sensors and the computers. This article gives an overview of the BLEEX project.
Assist-As-Needed Training Paradigms For Robotic Rehabilitation Of Spinal Cord Injuries This paper introduces a new "assist-as-needed" (AAN) training paradigm for rehabilitation of spinal cord injuries via robotic training devices. In the pilot study reported in this paper, nine female adult Swiss-Webster mice were divided into three groups, each experiencing a different robotic training control strategy: a fixed training trajectory (Fixed Group, A), an AAN training method without interlimb coordination (Band Group, B), and an AAN training method with bilateral hindlimb coordination (Window Group, C). Fourteen days after complete transection at the mid-thoracic level, the mice were robotically trained to step in the presence of an acutely administered serotonin agonist, quipazine, for a period of six weeks. The mice that received AAN training (Groups B and C) show higher levels of recovery than Group A mice, as measured by the number, consistency, and periodicity of steps realized during testing sessions. Group C displays a higher incidence of alternating stepping than Group B. These results indicate that this training approach may be more effective than fixed trajectory paradigms in promoting robust post-injury stepping behavior. Furthermore, the constraint of interlimb coordination appears to be an important contribution to successful training.
An ID-Based Linearly Homomorphic Signature Scheme and Its Application in Blockchain. Identity-based cryptosystems mean that public keys can be directly derived from user identifiers, such as telephone numbers, email addresses, and social insurance number, and so on. So they can simplify key management procedures of certificate-based public key infrastructures and can be used to realize authentication in blockchain. Linearly homomorphic signature schemes allow to perform linear computations on authenticated data. And the correctness of the computation can be publicly verified. Although a series of homomorphic signature schemes have been designed recently, there are few homomorphic signature schemes designed in identity-based cryptography. In this paper, we construct a new ID-based linear homomorphic signature scheme, which avoids the shortcomings of the use of public-key certificates. The scheme is proved secure against existential forgery on adaptively chosen message and ID attack under the random oracle model. The ID-based linearly homomorphic signature schemes can be applied in e-business and cloud computing. Finally, we show how to apply it to realize authentication in blockchain.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0.05
0
0
0
0
0
0
Dynamic Wi-Fi fingerprinting indoor positioning system In this paper, a technique is proposed to improve the accuracy of indoor positioning systems based on Wi-Fi radio-frequency signals by using dynamic access points and fingerprints (DAFs). Moreover, an indoor positioning system that relies solely on DAFs is proposed. The walking pattern of indoor users is classified as dynamic or static for indoor positioning purposes. We demonstrate that the performance of a conventional indoor positioning system that uses static fingerprints can be enhanced by considering dynamic fingerprints and access points. The accuracy of the system is evaluated using four positioning algorithms and one access point selection strategy. The system facilitates the location of people where there is no wireless local area network (WLAN) infrastructure deployed or where the WLAN infrastructure has been drastically affected, for example by natural disasters. The system can be used for search and rescue operations and for expanding the coverage of an indoor positioning system.
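For context, the core step shared by fingerprint systems like the one above is matching an observed RSSI vector against a radio map of reference points; the weighted k-nearest-neighbour sketch below shows only that generic step (the radio map and coordinates are invented, and the dynamic-fingerprint logic of the paper is not modelled).

```python
# Weighted kNN position estimate from a made-up Wi-Fi RSSI radio map:
# reference coordinates are averaged, weighted by inverse fingerprint distance.
import numpy as np

radio_map = {                       # reference point (x, y) -> RSSI per AP, in dBm
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-60, -50, -75],
    (0.0, 5.0): [-65, -72, -45],
    (5.0, 5.0): [-70, -55, -50],
}

def knn_position(observed, k=2):
    points = list(radio_map)
    dists = np.array([np.linalg.norm(np.array(observed) - np.array(radio_map[p]))
                      for p in points])
    order = np.argsort(dists)[:k]
    weights = 1.0 / (dists[order] + 1e-9)
    coords = np.array([points[i] for i in order])
    return tuple(np.average(coords, axis=0, weights=weights))

print(knn_position([-58, -52, -74]))    # lands near the (5, 0) reference point
```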
Indoor Fingerprint Positioning Based on Wi-Fi: An Overview. The widely applied location-based services require a high standard for positioning technology. Currently, outdoor positioning has been a great success; however, indoor positioning technologies are in the early stages of development. Therefore, this paper provides an overview of indoor fingerprint positioning based on Wi-Fi. First, some indoor positioning technologies, especially the Wi-Fi fingerprint indoor positioning technology, are introduced and discussed. Second, some evaluation metrics and influence factors of indoor fingerprint positioning technologies based on Wi-Fi are introduced. Third, methods and algorithms of fingerprint indoor positioning technologies are analyzed, classified, and discussed. Fourth, some widely used assistive positioning technologies are described. Finally, conclusions are drawn and future possible research interests are discussed. It is hoped that this research will serve as a stepping stone for those interested in advancing indoor positioning.
An indoor localization solution using Bluetooth RSSI and multiple sensors on a smartphone. In this paper, we propose an indoor positioning system using a Bluetooth receiver, an accelerometer, a magnetic field sensor, and a barometer on a smartphone. The Bluetooth receiver is used to estimate distances from beacons. The accelerometer and magnetic field sensor are used to trace the movement of moving people in the given space. The horizontal location of the person is determined by received signal strength indications (RSSIs) and the traced movement. The barometer is used to measure the vertical position where a person is located. By combining RSSIs, the traced movement, and the vertical position, the proposed system estimates the indoor position of moving people. In experiments, the proposed approach showed excellent performance in localization with an overall error of 4.8%.
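The Bluetooth ranging step in systems like this needs an RSSI-to-distance model; a common choice (assumed here, since the calibration used in the paper is not given) is the log-distance path-loss model RSSI(d) = RSSI(d0) - 10 n log10(d / d0) with d0 = 1 m.

```python
# Invert the log-distance path-loss model to estimate range from a beacon.
# rssi_at_1m and the path-loss exponent are assumed calibration constants.
def rssi_to_distance(rssi_dbm, rssi_at_1m=-59.0, path_loss_exponent=2.0):
    """Estimated distance in metres for a measured RSSI value in dBm."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exponent))

print(rssi_to_distance(-71.0))   # roughly 4 m with these assumed constants
```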
Indoor Positioning in Large Shopping Mall with Context based Map Matching This paper focuses on large indoor environments and proposes an accurate indoor positioning system with context-based map matching. The proposed system adopts the widely used smartphone as the positioning platform of the customer, and mainly depends on the motion sensors in a smartphone. The proposed system firstly provides the initial trajectory of the customer with Pedestrian Dead Reckoning (PDR) and recognizes various human activities which are meaningful for the localization. In this paper, we divide the activities into two types: (1) transition activities between floors, such as taking the escalator and elevator; (2) moving on the floor, such as walking outside a store, shopping in a store and turning. A hierarchical Long Short-Term Memory (LSTM) network-based activity model is developed to recognize those activities. Secondly, those location-aware activities, the PDR trajectories and a 2.5D indoor map are integrated in a Hidden Markov Model (HMM) to conduct accurate indoor positioning. Because the 2.5D map includes the position information of indoor facilities such as escalators and each store, this position information is used as assistance information for conducting context-based map matching. The proposed method has a mean positioning error of 2.21 meters and can achieve "shop level" performance.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
Probabilistic encryption A new probabilistic model of data encryption is introduced. For this model, under suitable complexity assumptions, it is proved that extracting any information about the cleartext from the cyphertext is hard on the average for an adversary with polynomially bounded computational resources. The proof holds for any message space with any probability distribution. The first implementation of this model is presented. The security of this implementation is proved under the intractability assumption of deciding Quadratic Residuosity modulo composite numbers whose factorization is unknown.
On the ratio of optimal integral and fractional covers It is shown that the ratio of optimal integral and fractional covers of a hypergraph does not exceed 1 + log d , where d is the maximum degree. This theorem may replace probabilistic methods in certain circumstances. Several applications are shown.
Optimization Of Radio And Computational Resources For Energy Efficiency In Latency-Constrained Application Offloading Providing femto access points (FAPs) with computational capabilities will allow (either total or partial) offloading of highly demanding applications from smartphones to the so-called femto-cloud. Such offloading promises to be beneficial in terms of battery savings at the mobile terminal (MT) and/or in latency reduction in the execution of applications. However, for this promise to become a reality, the energy and/or the time required for the communication process must be compensated by the energy and/or the time savings that result from the remote computation at the FAPs. For this problem, we provide in this paper a framework for the joint optimization of the radio and computational resource usage exploiting the tradeoff between energy consumption and latency. Multiple antennas are assumed to be available at the MT and the serving FAP. As a result of the optimization, the optimal communication strategy (e.g., transmission power, rate, and precoder) is obtained, as well as the optimal distribution of the computational load between the handset and the serving FAP. This paper also establishes the conditions under which total or no offloading is optimal, determines which is the minimum affordable latency in the execution of the application, and analyzes, as a particular case, the minimization of the total consumed energy without latency constraints.
Integrating structured biological data by Kernel Maximum Mean Discrepancy Motivation: Many problems in data integration in bioinformatics can be posed as one common question: Are two sets of observations generated by the same distribution? We propose a kernel-based statistical test for this problem, based on the fact that two distributions are different if and only if there exists at least one function having different expectation on the two distributions. Consequently we use the maximum discrepancy between function means as the basis of a test statistic. The Maximum Mean Discrepancy (MMD) can take advantage of the kernel trick, which allows us to apply it not only to vectors, but strings, sequences, graphs, and other common structured data types arising in molecular biology. Results: We study the practical feasibility of an MMD-based test on three central data integration tasks: Testing cross-platform comparability of microarray data, cancer diagnosis, and data-content based schema matching for two different protein function classification schemas. In all of these experiments, including high-dimensional ones, MMD is very accurate in finding samples that were generated from the same distribution, and outperforms its best competitors. Conclusions: We have defined a novel statistical test of whether two samples are from the same distribution, compatible with both multivariate and structured data, that is fast, easy to implement, and works well, as confirmed by our experiments. Availability: Contact: kb@dbs.ifi.lmu.de
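A compact way to see the test statistic: with a kernel k, the (biased) empirical MMD^2 between samples X and Y compares within-sample and cross-sample average kernel values. The Gaussian kernel and the fixed bandwidth below are assumptions for the sketch, not necessarily the choices made in the paper.

```python
# Biased empirical MMD^2 with a Gaussian RBF kernel: near zero when the two
# samples come from the same distribution, clearly positive otherwise.
import numpy as np

def gaussian_kernel(X, Y, sigma):
    d2 = np.sum(X ** 2, 1)[:, None] + np.sum(Y ** 2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    return (gaussian_kernel(X, X, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean()
            - 2.0 * gaussian_kernel(X, Y, sigma).mean())

rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(200, 3)), rng.normal(size=(200, 3)))
shifted = mmd2(rng.normal(size=(200, 3)), rng.normal(1.0, 1.0, size=(200, 3)))
print(same, shifted)   # the shifted sample yields the larger MMD^2
```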
Noninterference for a Practical DIFC-Based Operating System The Flume system is an implementation of decentralized information flow control (DIFC) at the operating system level. Prior work has shown Flume can be implemented as a practical extension to the Linux operating system, allowing real Web applications to achieve useful security guarantees. However, the question remains if the Flume system is actually secure. This paper compares Flume with other recent DIFC systems like Asbestos, arguing that the latter is inherently susceptible to certain wide-bandwidth covert channels, and proving their absence in Flume by means of a noninterference proof in the communicating sequential processes formalism.
Efficient and reliable low-power backscatter networks There is a long-standing vision of embedding backscatter nodes like RFIDs into everyday objects to build ultra-low power ubiquitous networks. A major problem that has challenged this vision is that backscatter communication is neither reliable nor efficient. Backscatter nodes cannot sense each other, and hence tend to suffer from colliding transmissions. Further, they are ineffective at adapting the bit rate to channel conditions, and thus miss opportunities to increase throughput, or transmit above capacity causing errors. This paper introduces a new approach to backscatter communication. The key idea is to treat all nodes as if they were a single virtual sender. One can then view collisions as a code across the bits transmitted by the nodes. By ensuring only a few nodes collide at any time, we make collisions act as a sparse code and decode them using a new customized compressive sensing algorithm. Further, we can make these collisions act as a rateless code to automatically adapt the bit rate to channel quality --i.e., nodes can keep colliding until the base station has collected enough collisions to decode. Results from a network of backscatter nodes communicating with a USRP backscatter base station demonstrate that the new design produces a 3.5× throughput gain, and due to its rateless code, reduces message loss rate in challenging scenarios from 50% to zero.
Internet of Things for Smart Cities The Internet of Things (IoT) shall be able to incorporate transparently and seamlessly a large number of different and heterogeneous end systems, while providing open access to selected subsets of data for the development of a plethora of digital services. Building a general architecture for the IoT is hence a very complex task, mainly because of the extremely large variety of devices, link layer technologies, and services that may be involved in such a system. In this paper, we focus specifically on urban IoT systems that, while still being quite a broad category, are characterized by their specific application domain. Urban IoTs, in fact, are designed to support the Smart City vision, which aims at exploiting the most advanced communication technologies to support added-value services for the administration of the city and for the citizens. This paper hence provides a comprehensive survey of the enabling technologies, protocols, and architecture for an urban IoT. Furthermore, the paper will present and discuss the technical solutions and best-practice guidelines adopted in the Padova Smart City project, a proof-of-concept deployment of an IoT island in the city of Padova, Italy, performed in collaboration with the city municipality.
Quaternion polar harmonic Fourier moments for color images. •Quaternion polar harmonic Fourier moments (QPHFM) are proposed.•Complex Chebyshev-Fourier moments (CHFM) are extended to quaternion QCHFM.•Comparison experiments between QPHFM and QZM, QPZM, QOFMM, QCHFM and QRHFM are conducted.•QPHFM performs superbly in image reconstruction and invariant object recognition.•The importance of the phase information of QPHFM in image reconstruction is discussed.
Social Robots for (Second) Language Learning in (Migrant) Primary School Children Especially these days, innovation and support from technology to relieve pressure in education is highly urgent. This study tested the potential advantage of a social robot over a tablet in (second) language learning on performance, engagement, and enjoyment. Shortages in primary education call for new technology solutions. Previous studies combined robots with tablets, to compensate for robot’s limitations, however, this study applied direct human–robot interaction. Primary school children (N = 63, aged 4–6) participated in a 3-wave field experiment with story-telling exercises, either with a semi-autonomous robot (without tablet, using WOz) or a tablet. Results showed increased learning gains over time when training with a social robot, compared to the tablet. Children who trained with a robot were more engaged in the story-telling task and enjoyed it more. Robot’s behavioral style (social or neutral) hardly differed overall, however, seems to vary for high versus low educational abilities. While social robots need sophistication before being implemented in schools, our study shows the potential of social robots as tutors in (second) language learning.
1.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
0
0
Asymptotically Optimal Resource Block Allocation With Limited Feedback. Consider a channel allocation problem over a frequency-selective channel. There are $K$ channels (frequency bands) and $N$ users such that $K=bN$ for some positive integer $b$. We want to allocate $b$ channels (or resource blocks) to each user. Due to the nature of the frequency-selective channel, each user considers some channels to be better than others. The optimal solution to this resource allocation problem can be computed using the Hungarian algorithm. However, this requires knowledge of the numerical value of all the channel gains, which makes this approach impractical for large networks. We suggest a suboptimal approach that only requires knowing what the $M$-best channels of each user are. We find the minimal value of $M$ such that there exists an allocation where all the $b$ channels each user gets are among his $M$-best. This leads to the feedback of significantly less than one bit per user per channel. For a large class of fading distributions, including Rayleigh, Rician, m-Nakagami, and others, this suboptimal approach leads to both an asymptotically (in $K$) optimal sum rate and an asymptotically optimal minimal rate. Our non-opportunistic approach achieves (asymptotically) full multiuser diversity as well as optimal fairness in contrast to all other limited feedback algorithms.
IoT-U: Cellular Internet-of-Things Networks Over Unlicensed Spectrum. In this paper, we consider an uplink cellular Internet-of-Things (IoT) network, where a cellular user (CU) can serve as the mobile data aggregator for a cluster of IoT devices. To be specific, the IoT devices can either transmit the sensory data to the base station (BS) directly by cellular communications, or first aggregate the data to a CU through machine-to-machine (M2M) communications before t...
Cognitive Capacity Harvesting Networks: Architectural Evolution Toward Future Cognitive Radio Networks. Cognitive radio technologies enable users to opportunistically access unused licensed spectrum and are viewed as a promising way to deal with the current spectrum crisis. Over the last 15 years, cognitive radio technologies have been extensively studied from algorithmic design to practical implementation. One pressing and fundamental problem is how to integrate cognitive radios into current wirele...
Smart Cities on Wheels: A Newly Emerging Vehicular Cognitive Capability Harvesting Network for Data Transportation. With the emergence of IoT and smart cities, wireless data traffic is exponentially increasing, motivating telecom operators to search for new solutions. In this article, we propose a solution based on the premise that vehicles are equipped with CR routers, specially designed powerful devices with agile communication interfaces, rich computing resources, and abundant storage space. To fully exploit...
Market-Based Model in CR-IoT: A Q-Probabilistic Multi-Agent Reinforcement Learning Approach The ever-increasing urban population and the corresponding material demands have brought unprecedented burdens to cities. To guarantee better QoS for citizens, smart cities leverage emerging technologies such as the Cognitive Radio Internet of Things (CR-IoT). However, resource allocation is a great challenge for CR-IoT, mainly because of the extremely numerous devices and users. Generally, the auction theory and game theory are applied to overcome the challenge. In this paper, we propose a multi-agent reinforcement learning (MARL) algorithm to learn the optimal resource allocation strategy in the oligopoly market model. Firstly, we model a multi-agent scenario with the primary users (PUs) as sellers and secondary users (SUs) as buyers. Then, we propose the Q-probabilistic multi-agent learning (QPML) and apply it to allocate resources in the market. In the multi-agent learning process, the PUs and SUs learn strategies to maximize their benefits and improve spectrum utilization. The performance of QPML is compared with Learning Automation (LA) through simulations. The experimental results show that our approach outperforms other approaches and performs well.
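Underlying value-learning schemes like the one described in this abstract is the tabular Q-learning update Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)); the single-agent toy below shows only that update, with a made-up environment and parameters rather than the paper's market model.

```python
# Minimal tabular Q-learning loop on a toy environment in which the rewarded
# action for state s is s % 3; after training, the greedy policy recovers it.
import numpy as np

n_states, n_actions = 5, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

def step(state, action):
    reward = 1.0 if action == state % n_actions else 0.0
    return int(rng.integers(n_states)), reward          # random next state

state = 0
for _ in range(20000):
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))            # explore
    else:
        action = int(np.argmax(Q[state]))                # exploit
    next_state, reward = step(state, action)
    Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
    state = next_state

print(np.argmax(Q, axis=1))    # greedy action per state, typically [0 1 2 0 1]
```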
Energy Minimization of Multi-cell Cognitive Capacity Harvesting Networks with Neighbor Resource Sharing In this paper, we investigate the energy minimization problem for a cognitive capacity harvesting network (CCHN), where secondary users (SUs) without cognitive radio (CR) capability communicate with CR routers via device-to-device (D2D) transmissions, and CR routers connect with base stations (BSs) via CR links. Different from traditional D2D networks that D2D transmissions share the resource of c...
Wireless sensor networks: a survey This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
ImageNet Classification with Deep Convolutional Neural Networks. We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
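The dropout regularizer mentioned above is often implemented today in its "inverted" form, where activations are zeroed with probability p during training and rescaled so nothing changes at test time; the sketch below shows that common variant (a simplification, not the original paper's implementation, which rescaled at test time instead).

```python
# Inverted dropout: zero each unit with probability p and scale the survivors
# by 1/(1-p) during training; at test time the layer is the identity.
import numpy as np

def dropout(activations, p=0.5, training=True, seed=0):
    if not training or p == 0.0:
        return activations
    rng = np.random.default_rng(seed)
    mask = (rng.random(activations.shape) >= p).astype(activations.dtype)
    return activations * mask / (1.0 - p)

h = np.ones((2, 8))
print(dropout(h, p=0.5))     # surviving units are scaled to 2.0, the rest are 0
```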
The Whale Optimization Algorithm. The Whale Optimization Algorithm inspired by humpback whales is proposed. The WOA algorithm is benchmarked on 29 well-known test functions. The results on the unimodal functions show the superior exploitation of WOA. The exploration ability of WOA is confirmed by the results on multimodal functions. The results on structural design problems confirm the performance of WOA in practice. This paper proposes a novel nature-inspired meta-heuristic optimization algorithm, called Whale Optimization Algorithm (WOA), which mimics the social behavior of humpback whales. The algorithm is inspired by the bubble-net hunting strategy. WOA is tested with 29 mathematical optimization problems and 6 structural design problems. Optimization results prove that the WOA algorithm is very competitive compared to the state-of-the-art meta-heuristic algorithms as well as conventional methods. The source codes of the WOA algorithm are publicly available at http://www.alimirjalili.com/WOA.html
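As commonly described, one WOA iteration moves each whale either by shrinking encirclement of the best solution found so far (or of a random whale when |A| >= 1, for exploration) or along a logarithmic "bubble-net" spiral, each with probability 0.5. The sketch below follows that usual formulation; it is an illustration rather than a copy of the released source code, and the population is random toy data.

```python
# One WOA position update: shrinking encircling / exploration vs. spiral move.
import numpy as np

def woa_update(X, best, t, max_iter, rng, b=1.0):
    a = 2.0 * (1.0 - t / max_iter)                       # decreases linearly 2 -> 0
    new = np.empty_like(X)
    for i, x in enumerate(X):
        r = rng.random(x.shape)
        A, C = 2.0 * a * r - a, 2.0 * rng.random(x.shape)
        if rng.random() < 0.5:
            # Encircle the best whale, or a random whale when |A| is large.
            target = best if np.all(np.abs(A) < 1.0) else X[rng.integers(len(X))]
            new[i] = target - A * np.abs(C * target - x)
        else:
            # Bubble-net spiral around the best whale.
            l = rng.uniform(-1.0, 1.0)
            new[i] = np.abs(best - x) * np.exp(b * l) * np.cos(2.0 * np.pi * l) + best
    return new

rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, size=(6, 2))
print(woa_update(pop, pop[0].copy(), t=1, max_iter=50, rng=rng))
```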
Collaborative privacy management The landscape of the World Wide Web with all its versatile services heavily relies on the disclosure of private user information. Unfortunately, the growing amount of personal data collected by service providers poses a significant privacy threat for Internet users. Targeting growing privacy concerns of users, privacy-enhancing technologies emerged. One goal of these technologies is the provision of tools that facilitate a more informative decision about personal data disclosures. A famous PET representative is the PRIME project that aims for a holistic privacy-enhancing identity management system. However, approaches like the PRIME privacy architecture require service providers to change their server infrastructure and add specific privacy-enhancing components. In the near future, service providers are not expected to alter internal processes. Addressing the dependency on service providers, this paper introduces a user-centric privacy architecture that enables the provider-independent protection of personal data. A central component of the proposed privacy infrastructure is an online privacy community, which facilitates the open exchange of privacy-related information about service providers. We characterize the benefits and the potentials of our proposed solution and evaluate a prototypical implementation.
Data-Driven Intelligent Transportation Systems: A Survey For the last two decades, intelligent transportation systems (ITS) have emerged as an efficient way of improving the performance of transportation systems, enhancing travel security, and providing more choices to travelers. A significant change in ITS in recent years is that much more data are collected from a variety of sources and can be processed into various forms for different stakeholders. The availability of a large amount of data can potentially lead to a revolution in ITS development, changing an ITS from a conventional technology-driven system into a more powerful multifunctional data-driven intelligent transportation system (D2ITS): a system that is vision, multisource, and learning algorithm driven to optimize its performance. Furthermore, D2ITS is trending toward becoming a privacy-aware, people-centric, more intelligent system. In this paper, we provide a survey on the development of D2ITS, discussing the functionality of its key components and some deployment issues associated with D2ITS. Future research directions for the development of D2ITS are also presented.
Online Prediction of Driver Distraction Based on Brain Activity Patterns This paper presents a new computational framework for early detection of driver distractions (map viewing) using brain activity measured by electroencephalographic (EEG) signals. Compared with most studies in the literature, which are mainly focused on the classification of distracted and nondistracted periods, this study proposes a new framework to prospectively predict the start and end of a distraction period, defined by map viewing. The proposed prediction algorithm was tested on a data set of continuous EEG signals recorded from 24 subjects. During the EEG recordings, the subjects were asked to drive from an initial position to a destination using a city map in a simulated driving environment. The overall accuracy values for the prediction of the start and the end of map viewing were 81% and 70%, respectively. The experimental results demonstrated that the proposed algorithm can predict the start and end of map viewing with relatively high accuracy and can be generalized to individual subjects. The outcome of this study has a high potential to improve the design of future intelligent navigation systems. Prediction of the start of map viewing can be used to provide route information based on a driver's needs and consequently avoid map-viewing activities. Prediction of the end of map viewing can be used to provide warnings for potential long map-viewing durations. Further development of the proposed framework and its applications in driver-distraction predictions are also discussed.
Adaptive Fuzzy Control With Prescribed Performance for Block-Triangular-Structured Nonlinear Systems. In this paper, an adaptive fuzzy control method with prescribed performance is proposed for multi-input and multioutput block-triangular-structured nonlinear systems with immeasurable states. Fuzzy logic systems are adopted to identify the unknown nonlinear system functions. Adaptive fuzzy state observers are designed to solve the problem of unmeasured states, and a new observer-based output-feedb...
Learning Feature Recovery Transformer for Occluded Person Re-Identification One major issue that challenges person re-identification (Re-ID) is the ubiquitous occlusion over the captured persons. There are two main challenges for the occluded person Re-ID problem, i.e., the interference of noise during feature matching and the loss of pedestrian information brought by the occlusions. In this paper, we propose a new approach called Feature Recovery Transformer (FRT) to address the two challenges simultaneously, which mainly consists of visibility graph matching and feature recovery transformer. To reduce the interference of the noise during feature matching, we mainly focus on visible regions that appear in both images and develop a visibility graph to calculate the similarity. In terms of the second challenge, based on the developed graph similarity, for each query image, we propose a recovery transformer that exploits the feature sets of its k-nearest neighbors in the gallery to recover the complete features. Extensive experiments across different person Re-ID datasets, including occluded, partial and holistic datasets, demonstrate the effectiveness of FRT. Specifically, FRT significantly outperforms state-of-the-art results by at least 6.2% Rank-1 accuracy and 7.2% mAP scores on the challenging Occluded-Duke dataset.
1.2
0.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
Next place prediction using mobility Markov chains In this paper, we address the issue of predicting the next location of an individual based on the observations of his mobility behavior over some period of time and the recent locations that he has visited. This work has several potential applications such as the evaluation of geo-privacy mechanisms, the development of location-based services anticipating the next movement of a user, and the design of location-aware proactive resource migration. In a nutshell, we extend a mobility model called Mobility Markov Chain (MMC) in order to incorporate the n previously visited locations, and we develop a novel algorithm for next location prediction based on this mobility model, which we call n-MMC. The evaluation of the efficiency of our algorithm on three different datasets demonstrates an accuracy for the prediction of the next location in the range of 70% to 95% as soon as n = 2.
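As a rough, minimal sketch of the order-n Markov idea described above (not the authors' implementation), the snippet below keeps a counter of which location follows each tuple of the n most recently visited locations and predicts the most frequent successor; the class name, the fit/predict methods, and the toy location labels are all illustrative assumptions.

```python
from collections import defaultdict, Counter

class NMobilityMarkovChain:
    """Order-n mobility Markov chain: the state is the tuple of the n most
    recently visited locations; prediction returns the most frequent
    successor of that state in the training traces."""

    def __init__(self, n=2):
        self.n = n
        self.transitions = defaultdict(Counter)

    def fit(self, traces):
        # each trace is a chronological list of location identifiers
        for trace in traces:
            for i in range(len(trace) - self.n):
                state = tuple(trace[i:i + self.n])
                self.transitions[state][trace[i + self.n]] += 1

    def predict(self, recent):
        state = tuple(recent[-self.n:])
        counts = self.transitions.get(state)
        if not counts:
            return None          # unseen context: no prediction possible
        return counts.most_common(1)[0][0]

traces = [["home", "work", "gym", "home"],
          ["home", "work", "restaurant", "home"],
          ["home", "work", "gym", "home"]]
mmc = NMobilityMarkovChain(n=2)
mmc.fit(traces)
print(mmc.predict(["home", "work"]))   # -> 'gym' (observed twice vs. once)
```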
Location awareness through trajectory prediction Location-aware computing is a type of ubiquitous computing that uses user’s location information as an essential parameter for providing services and application-related optimization. Location management plays an important role in location-aware computing because the provision of services requires convenient access to dynamic location and location-dependent information. Many existing location management strategies are passive since they rely on system capability to periodically record current location information. In contrast, active strategies predict user movement through trajectories and locations. Trajectory prediction provides richer location and context information and facilitates the means for adapting to future locations. In this paper, we present two models for trajectory prediction, namely probability-based model and learning-based model. We analyze these two models and conduct experiments to test their performances in location-aware systems.
Solving the data sparsity problem in destination prediction Destination prediction is an essential task for many emerging location-based applications such as recommending sightseeing places and targeted advertising according to destinations. A common approach to destination prediction is to derive the probability of a location being the destination based on historical trajectories. However, almost all the existing techniques use various kinds of extra information such as road network, proprietary travel planner, statistics requested from government, and personal driving habits. Such extra information, in most circumstances, is unavailable or very costly to obtain. Therefore, we approach the task of destination prediction by using only a historical trajectory dataset. However, this approach encounters the "data sparsity problem", i.e., the available historical trajectories are far from enough to cover all possible query trajectories, which considerably limits the number of query trajectories that can obtain predicted destinations. We propose a novel method named Sub-Trajectory Synthesis (SubSyn) to address the data sparsity problem. SubSyn first decomposes historical trajectories into sub-trajectories comprising two adjacent locations, and then connects the sub-trajectories into "synthesised" trajectories. This process effectively expands the historical trajectory dataset to contain many more trajectories. Experiments based on real datasets show that SubSyn can predict destinations for up to ten times more query trajectories than a baseline prediction algorithm. Furthermore, the running time of the SubSyn-training algorithm is almost negligible for a large set of 1.9 million trajectories, and the SubSyn-prediction algorithm consistently runs over two orders of magnitude faster than the baseline prediction algorithm.
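To make the decompose-and-connect step concrete, here is a minimal sketch, assuming locations are simple grid-cell labels: historical trajectories are broken into two-location sub-trajectories and reconnected (here via breadth-first search) into a synthesised trajectory that was never observed as a whole. The function names and the toy history are illustrative; the actual SubSyn method additionally reasons probabilistically over these connections to rank candidate destinations.

```python
from collections import defaultdict, deque

def build_subtrajectory_graph(trajectories):
    """Decompose historical trajectories into sub-trajectories of two
    adjacent locations and store them as directed edges."""
    graph = defaultdict(set)
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            graph[a].add(b)
    return graph

def synthesise(graph, start, end):
    """Connect sub-trajectories into a synthesised trajectory from start to
    end (breadth-first search over the sub-trajectory graph)."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == end:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Toy history: no single trajectory goes from A to D, but one can be synthesised.
history = [["A", "B", "C"], ["C", "D"], ["B", "E"]]
graph = build_subtrajectory_graph(history)
print(synthesise(graph, "A", "D"))   # -> ['A', 'B', 'C', 'D']
```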
Time-Location-Relationship Combined Service Recommendation Based on Taxi Trajectory Data. Recently, urban traffic management has encountered a paradoxical situation: taxis frequently cruise empty while passengers find it difficult to hail one. In this paper, through analyzing the quantitative relationship between passengers' getting on and off taxis, we propose a time-location-relationship (TLR) combined taxi service recommendation model to improve taxi dri...
Geography-Aware Sequential Location Recommendation Sequential location recommendation plays an important role in many applications such as mobility prediction, route planning and location-based advertisements. In spite of evolving from tensor factorization to RNN-based neural networks, existing methods have not made effective use of geographical information and suffer from the sparsity issue. To this end, we propose a Geography-aware sequential recommender based on the Self-Attention Network (GeoSAN for short) for location recommendation. On the one hand, we propose a new loss function based on importance sampling for optimization, to address the sparsity issue by emphasizing the use of informative negative samples. On the other hand, to make better use of geographical information, GeoSAN represents the hierarchical gridding of each GPS point with a self-attention based geography encoder. Moreover, we put forward geography-aware negative samplers to promote the informativeness of negative samples. We evaluate the proposed algorithm with three real-world LBSN datasets, and show that GeoSAN outperforms the state-of-the-art sequential location recommenders by 34.9%. The experimental results further verify the significant effectiveness of the new loss function, geography encoder, and geography-aware negative samplers.
Deep Learning Methods for Vessel Trajectory Prediction Based on Recurrent Neural Networks Data-driven methods open up unprecedented possibilities for maritime surveillance using automatic identification system (AIS) data. In this work, we explore deep learning strategies using historical AIS observations to address the problem of predicting future vessel trajectories with a prediction horizon of several hours. We propose novel sequence-to-sequence vessel trajectory prediction models ba...
Testing Scenario Library Generation for Connected and Automated Vehicles, Part I: Methodology Testing and evaluation is a critical step in the development and deployment of connected and automated vehicles (CAVs), and yet there is no systematic framework to generate a testing scenario library. This study aims to provide a general framework for the testing scenario library generation (TSLG) problem with different operational design domains (ODDs), CAV models, and performance metrics. Given an ODD, the testing scenario library is defined as a critical set of scenarios that can be used for CAV testing. Each testing scenario is evaluated by a newly proposed measure, scenario criticality, which can be computed as a combination of maneuver challenge and exposure frequency. To search for critical scenarios, an auxiliary objective function is designed, and a multi-start optimization method along with seed-filling is applied. Theoretical analysis suggests that the proposed framework can obtain accurate evaluation results with a much smaller number of tests compared with the on-road test method. In Part II of the study, three case studies are investigated to demonstrate the proposed method. A reinforcement learning based technique is applied to enhance the searching method under high-dimensional scenarios.
Applied research of data sensing and service to ubiquitous intelligent transportation system High-efficiency transportation systems in urban environments are not only solutions for the growing public travel demands, but are also the premise for enlarging transportation capacity and narrowing the gap between urban and rural areas. Such transportation systems should have characteristics such as mobility, convenience and being accident-free. Ubiquitous intelligent transportation systems (U-ITS) are the next generation of intelligent transportation systems (ITS). The key issue of U-ITS is providing better and more efficient services through vehicle-to-vehicle (V2V) or vehicle-to-infrastructure (V2I) interconnection. The emergence of cyber physical systems (CPS), which focus on information awareness technologies, provides technical assurance for the rapid development of U-ITS. This paper introduces the ongoing Beijing U-ITS project, which utilizes mobile sensors. Realization of universal interconnection between real-time information systems and large-scale detectors allows the system to maximize equipment efficiency and improve transportation efficiency through information services.
Wireless sensor network survey A wireless sensor network (WSN) has important applications such as remote environmental monitoring and target tracking. This has been enabled by the availability, particularly in recent years, of sensors that are smaller, cheaper, and intelligent. These sensors are equipped with wireless interfaces with which they can communicate with one another to form a network. The design of a WSN depends significantly on the application, and it must consider factors such as the environment, the application's design objectives, cost, hardware, and system constraints. The goal of our survey is to present a comprehensive review of the recent literature since the publication of [I.F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, A survey on sensor networks, IEEE Communications Magazine, 2002]. Following a top-down approach, we give an overview of several new applications and then review the literature on various aspects of WSNs. We classify the problems into three different categories: (1) internal platform and underlying operating system, (2) communication protocol stack, and (3) network services, provisioning, and deployment. We review the major development in these three categories and outline new challenges.
A General Equilibrium Model for Industries with Price and Service Competition This paper develops a stochastic general equilibrium inventory model for an oligopoly, in which all inventory constraint parameters are endogenously determined. We propose several systems of demand processes whose distributions are functions of all retailers' prices and all retailers' service levels. We proceed with the investigation of the equilibrium behavior of infinite-horizon models for industries facing this type of generalized competition, under demand uncertainty. We systematically consider the following three competition scenarios. (1) Price competition only: Here, we assume that the firms' service levels are exogenously chosen, but characterize how the price and inventory strategy equilibrium vary with the chosen service levels. (2) Simultaneous price and service-level competition: Here, each of the firms simultaneously chooses a service level and a combined price and inventory strategy. (3) Two-stage competition: The firms make their competitive choices sequentially. In a first stage, all firms simultaneously choose a service level; in a second stage, the firms simultaneously choose a combined pricing and inventory strategy with full knowledge of the service levels selected by all competitors. We show that in all of the above settings a Nash equilibrium of infinite-horizon stationary strategies exists and that it is of a simple structure, provided a Nash equilibrium exists in a so-called reduced game. We pay particular attention to the question of whether a firm can choose its service level on the basis of its own (input) characteristics (i.e., its cost parameters and demand function) only. We also investigate under which of the demand models a firm, under simultaneous competition, responds to a change in the exogenously specified characteristics of the various competitors by either: (i) adjusting its service level and price in the same direction, thereby compensating for price increases (decreases) by offering improved (inferior) service, or (ii) adjusting them in opposite directions, thereby simultaneously offering better or worse prices and service.
Mobile cloud computing: A survey Despite increasing usage of mobile computing, exploiting its full potential is difficult due to its inherent problems such as resource scarcity, frequent disconnections, and mobility. Mobile cloud computing can address these problems by executing mobile applications on resource providers external to the mobile device. In this paper, we provide an extensive survey of mobile cloud computing research, while highlighting the specific concerns in mobile cloud computing. We present a taxonomy based on the key issues in this area, and discuss the different approaches taken to tackle these issues. We conclude the paper with a critical analysis of challenges that have not yet been fully met, and highlight directions for future work.
Charge selection algorithms for maximizing sensor network life with UAV-based limited wireless recharging Monitoring bridges with wireless sensor networks aids in detecting failures early, but faces power challenges in ensuring reasonable network lifetimes. Recharging select nodes with Unmanned Aerial Vehicles (UAVs) provides a solution, although current systems can recharge only a single node. However, questions arise on the effectiveness of a limited recharging system, the appropriate node to recharge, and the best sink selection algorithm for improving network lifetime given a limited recharging system. This paper simulates such a network in order to answer those questions. It explores five different sink positioning algorithms to find which provides the longest network lifetime with the added capability of limited recharging. For a range of network sizes, our results show that network lifetime improves by over 350% when recharging a single node in the network, that the best node to recharge is the one with the lowest power level, and that either the Greedy Heuristic or LP sink selection algorithms perform equally well.
Safe mutations for deep and recurrent neural networks through output gradients While neuroevolution (evolving neural networks) has been successful across a variety of domains from reinforcement learning, to artificial life, to evolutionary robotics, it is rarely applied to large, deep neural networks. A central reason is that while random mutation generally works in low dimensions, a random perturbation of thousands or millions of weights will likely break existing functionality. This paper proposes a solution: a family of safe mutation (SM) operators that facilitate exploration without dramatically altering network behavior or requiring additional interaction with the environment. The most effective SM variant scales the degree of mutation of each individual weight according to the sensitivity of the network's outputs to that weight, which requires computing the gradient of outputs with respect to the weights (instead of the gradient of error, as in conventional deep learning). This safe mutation through gradients (SM-G) operator dramatically increases the ability of a simple genetic algorithm-based neuroevolution method to find solutions in high-dimensional domains that require deep and/or recurrent neural networks, including domains that require processing raw pixels. By improving our ability to evolve deep neural networks, this new safer approach to mutation expands the scope of domains amenable to neuroevolution.
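Because the safe-mutation idea above is algorithmic, a heavily simplified sketch may help: it approximates the per-weight sensitivity of a tiny two-layer network's outputs with finite differences (the paper uses exact output gradients) and scales each weight's random perturbation inversely to that sensitivity. The network architecture, function names, and constants are illustrative assumptions, not the authors' code.

```python
import numpy as np

def forward(weights, X):
    W1, W2 = weights
    return np.tanh(X @ W1) @ W2                # network outputs, shape (batch, n_out)

def output_sensitivity(weights, X, eps=1e-4):
    """Finite-difference estimate of how strongly each weight moves the outputs
    over a batch of observed inputs (a stand-in for the output gradient)."""
    base = forward(weights, X)
    sens = [np.zeros_like(W) for W in weights]
    for k, W in enumerate(weights):
        for idx in np.ndindex(W.shape):
            W[idx] += eps
            sens[k][idx] = np.abs(forward(weights, X) - base).sum() / eps
            W[idx] -= eps
    return sens

def safe_mutation(weights, X, sigma=0.1, seed=0):
    """Scale every weight's random perturbation inversely to its output
    sensitivity, so sensitive weights move little and behaviour is preserved."""
    rng = np.random.default_rng(seed)
    sens = output_sensitivity(weights, X)
    return [W + sigma * rng.standard_normal(W.shape) / (s + 1e-8)
            for W, s in zip(weights, sens)]

rng = np.random.default_rng(1)
weights = [0.5 * rng.standard_normal((4, 8)), 0.5 * rng.standard_normal((8, 2))]
X = rng.standard_normal((16, 4))               # batch of observed inputs
mutated = safe_mutation(weights, X)
print(np.abs(forward(mutated, X) - forward(weights, X)).mean())
```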
A Hierarchical Architecture Using Biased Min-Consensus for USV Path Planning This paper proposes a hierarchical architecture using the biased min-consensus (BMC) method to solve the path planning problem of an unmanned surface vessel (USV). We take the fixed-point monitoring mission as an example, where a series of intermediate monitoring points should be visited once by the USV. The framework incorporates a low-level layer that plans the standard path between any two intermediate points, and a high-level layer that determines their visiting sequence. First, the optimal standard path in terms of voyage time and risk measure is planned by the BMC protocol, given that the corresponding graph is constructed with node states and edge weights. The USV will avoid obstacles or keep a certain distance safely, and arrive at the target point quickly. It is proven theoretically that the state of the graph will converge to be stable after finitely many iterations, i.e., the optimal solution can be found by BMC with low computational complexity. Second, by incorporating the constraint of intermediate points, their visiting sequence is optimized by BMC again with the reconstruction of a new virtual graph based on the former planned results. Extensive simulation results in various scenarios also validate the feasibility and effectiveness of our method for autonomous navigation.
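To make the low-level step concrete, here is a minimal sketch of a biased min-consensus iteration on a small weighted graph: the target node is biased to state 0, and every other node repeatedly takes the minimum of (neighbour state + edge weight), converging to shortest-path distances. The toy graph, function name, and sweep count are illustrative assumptions rather than the paper's USV setup.

```python
import numpy as np

def bmc_shortest_path(adj, target, iters=None):
    """Biased min-consensus sweep on a weighted graph: the target node keeps
    state 0, every other node repeatedly takes min(neighbour state + edge
    weight).  With non-negative weights the states converge to shortest-path
    distances to the target within n - 1 sweeps (Bellman-Ford-like)."""
    n = len(adj)
    x = np.full(n, np.inf)
    x[target] = 0.0
    for _ in range(iters or n - 1):
        new_x = x.copy()
        for i in range(n):
            if i != target:
                new_x[i] = min(x[j] + adj[i][j] for j in range(n))
        x = new_x
    return x

INF = float("inf")
adj = [[INF, 1.0, 4.0, INF],
       [1.0, INF, 2.0, 6.0],
       [4.0, 2.0, INF, 3.0],
       [INF, 6.0, 3.0, INF]]
print(bmc_shortest_path(adj, target=0))   # -> [0. 1. 3. 6.]
```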
1.075556
0.066667
0.066667
0.066667
0.066667
0.033333
0.008889
0.001111
0
0
0
0
0
0
Fixed-Structure LPV Discrete-Time Controller Design With Induced l2-Norm and H2 Performance A new method for the design of fixed-structure dynamic output-feedback linear parameter-varying (LPV) controllers for discrete-time LPV systems with bounded scheduling parameter variations is presented. Sufficient conditions for the stability, H2 and induced l2-norm performance of a given LPV system are represented through a set of linear matrix inequalities (LMIs). These LMIs are used in an iterative algorithm with monotonic convergence for LPV controller design. Extension to the case of uncertain scheduling parameter values is considered as well. Controller parameters appear directly as decision variables in the optimisation program, which enables preserving a desired controller structure in addition to the low order. Efficiency of the proposed method is illustrated on a simulation example, with an iterative convex optimisation scheme used for the improvement of the control system performance.
Robust Fault Detection With Missing Measurements This paper investigates the problem of robust fault detection for uncertain systems with missing measurements. The parameter uncertainty is assumed to be of polytopic type, and the measurement missing phenomenon, which appears typically in a network environment, is modelled by a stochastic variable satisfying the Bernoulli random binary distribution. The focus is on the design of a robust fault detection filter, or a residual generation system, which is stochastically stable and satisfies a prescribed disturbance attenuation level. This problem is solved in the parameter-dependent framework, which is much less conservative than the quadratic approach. Both full-order and reduced-order designs are considered, and formulated via linear matrix inequality (LMI) based convex optimization problems, which can be efficiently solved via standard numerical software. A continuous-stirred tank reactor (CSTR) system is utilized to illustrate the design procedures.
Sliding mode control for uncertain discrete-time systems with Markovian jumping parameters and mixed delays This paper is concerned with the robust sliding mode control (SMC) problem for a class of uncertain discrete-time Markovian jump systems with mixed delays. The mixed delays consist of both the discrete time-varying delays and the infinite distributed delays. The purpose of the addressed problem is to design a sliding mode controller such that, in the simultaneous presence of parameter uncertainties, Markovian jumping parameters and mixed time-delays, the state trajectories are driven onto the pre-defined sliding surface and the resulting sliding mode dynamics is stochastically stable in the mean-square sense. A discrete-time sliding surface is firstly constructed and an SMC law is synthesized to ensure the reaching condition. Moreover, by constructing a new Lyapunov–Krasovskii functional and employing the delay-fractioning approach, a sufficient condition is established to guarantee the stochastic stability of the sliding mode dynamics. Such a condition is characterized in terms of a set of matrix inequalities that can be easily solved by using the semi-definite programming method. A simulation example is given to illustrate the effectiveness and feasibility of the proposed design scheme.
Automated Multiple Robust Track-Following Control System Design in Hard Disk Drives This brief proposes a new design procedure for track-following control systems in hard disk drives. The procedure is automated, in the sense that, for given experimental frequency response data of the suspension arm dynamics and a model structure, it automatically derives a transfer function set with uncorrelated parametric uncertainties. Subsequently, for the transfer function set, a given controller structure, and closed-loop performance specifications in the frequency domain, it automatically designs a partition of the uncertainties and corresponding multiple robust controllers. For the transfer function set derivation, nonlinear principal component analysis is utilized to determine correlations among coefficient parameter variations. For multiple robust controller design, a nonsmooth optimization approach is taken to deal with complex multiobjective control problems, as well as to reduce the computational cost, which is often an issue in multiple robust controller design. Simulations and experiments on actual hard disk drives demonstrate the usefulness and efficiency of the proposed procedure.
A parameter set division and switching gain-scheduling controllers design method for time-varying plants. This paper presents a new technique to design switching gain-scheduling controllers for plants with measurable time-varying parameters. By dividing the parameter set into a sufficient number of subsets, and by designing a robust controller to each subset, the designed switching gain-scheduling controllers achieve a desired L2-gain performance for each subset, while ensuring stability whenever a controller switching occurs due to the crossing of the time-varying parameters between any two adjacent subsets. Based on integral quadratic constraints theory and Lyapunov stability theory, a switching gain-scheduling controllers design problem amounts to solving optimization problems. Each optimization problem is to be solved by a combination of the bisection search and the numerical nonsmooth optimization method. The main advantage of the proposed technique is that the division of the parameter region is determined automatically, without any prespecified parameter set division which is required in most of previously developed switching gain-scheduling controllers design methods. A numerical example illustrates the validity of the proposed technique.
Completely derandomized self-adaptation in evolution strategies. This paper puts forward two useful methods for self-adaptation of the mutation distribution - the concepts of derandomization and cumulation. Principal shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary, normal mutation distributions is equivalent to applying a general, linear problem encoding. The underlying objective of mutative strategy parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigorously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation - utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions, a speed-up factor of several orders of magnitude is usually observed. On moderately mis-scaled functions a speed-up factor of three to ten can be expected.
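As a toy sketch of the two ingredients named above, cumulation and covariance adaptation, under the assumption that a heavily simplified variant is acceptable: the code below implements only the evolution path and the rank-one covariance update (a full CMA-ES additionally adapts the step size and uses a rank-mu update), with parameter choices that are illustrative defaults rather than the paper's.

```python
import numpy as np

def rank_one_es(f, dim=5, sigma=0.3, lam=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    mu = lam // 2
    weights = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    weights /= weights.sum()
    mu_eff = 1.0 / np.sum(weights ** 2)
    c_c = 4.0 / (dim + 4.0)                 # cumulation rate for the evolution path
    c_1 = 2.0 / ((dim + 1.3) ** 2)          # learning rate for the rank-one update
    mean, C, p_c = rng.standard_normal(dim), np.eye(dim), np.zeros(dim)
    for _ in range(iters):
        A = np.linalg.cholesky(C)
        xs = mean + sigma * rng.standard_normal((lam, dim)) @ A.T
        order = np.argsort([f(x) for x in xs])
        new_mean = weights @ xs[order[:mu]]
        # cumulation: the evolution path accumulates consecutive mean shifts
        p_c = (1 - c_c) * p_c + np.sqrt(c_c * (2 - c_c) * mu_eff) * (new_mean - mean) / sigma
        # rank-one covariance update along the evolution path
        C = (1 - c_1) * C + c_1 * np.outer(p_c, p_c)
        mean = new_mean
    return mean, f(mean)

print(rank_one_es(lambda x: float(np.sum(x ** 2))))
```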
Hiding Traces of Resampling in Digital Images Resampling detection has become a standard tool for forensic analyses of digital images. This paper presents new variants of image transformation operations which are undetectable by resampling detectors based on periodic variations in the residual signal of local linear predictors in the spatial domain. The effectiveness of the proposed method is supported with evidence from experiments on a large image database for various parameter settings. We benchmark detectability as well as the resulting image quality against conventional linear and bicubic interpolation and interpolation with a sinc kernel. These early findings on "counter-forensic" techniques put into question the reliability of known forensic tools against smart counterfeiters in general, and might serve as benchmarks and motivation for the development of much improved forensic techniques.
Fog computing and its role in the internet of things Fog Computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) Low latency and location awareness; b) Wide-spread geographical distribution; c) Mobility; d) Very large number of nodes, e) Predominant role of wireless access, f) Strong presence of streaming and real time applications, g) Heterogeneity. In this paper we argue that the above characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely, Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs).
Efficient Signature Generation by Smart Cards We present a new public-key signature scheme and a corresponding authentication scheme that are based on discrete logarithms in a subgroup of units in Z_p where p is a sufficiently large prime, e.g., p = 2^512. A key idea is to use for the base of the discrete logarithm an integer a in Z_p such that the order of a is a sufficiently large prime q, e.g., q = 2^140. In this way we improve the ElGamal signature scheme in the speed of the procedures for the generation and the verification of signatures and also in the bit length of signatures. We present an efficient algorithm that preprocesses the exponentiation of a random residue modulo p.
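A toy, insecure sketch of this kind of subgroup signature (the Schnorr scheme) may make the structure concrete: sign with r = a^k, e = H(M, r), y = k + x*e mod q, and verify by recomputing r from a^y * v^e. The tiny parameters, helper names, and use of SHA-256 are illustrative assumptions only (real parameters are hundreds of bits long); it needs Python 3.8+ for pow with a negative exponent.

```python
import hashlib
import secrets

# Toy parameters, far too small to be secure: p - 1 = 2 * 5 * 11 * 443, q = 443.
p, q = 48731, 443

def element_of_order_q(p, q):
    for h in range(2, p):
        a = pow(h, (p - 1) // q, p)
        if a != 1:
            return a

a = element_of_order_q(p, q)

def H(msg: bytes, r: int) -> int:
    return int.from_bytes(hashlib.sha256(msg + str(r).encode()).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1        # private key
    v = pow(a, -x, p)                       # public key v = a^(-x) mod p (Python 3.8+)
    return x, v

def sign(msg: bytes, x: int):
    k = secrets.randbelow(q - 1) + 1        # fresh per-signature secret
    r = pow(a, k, p)
    e = H(msg, r)
    y = (k + x * e) % q
    return e, y

def verify(msg: bytes, sig, v: int) -> bool:
    e, y = sig
    r = (pow(a, y, p) * pow(v, e, p)) % p   # a^y * v^e = a^(k + x*e) * a^(-x*e) = a^k
    return H(msg, r) == e

x, v = keygen()
print(verify(b"hello", sign(b"hello", x), v))   # -> True
```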
Stabilizing a linear system by switching control with dwell time The use of networks in control systems to connect controllers and sensors/actuators has become common practice in many applications. This new technology has also posed a theoretical control problem of how to use the limited data rate of the network effectively. We consider a system where its sensor and actuator are connected by a finite data rate channel. A design method to stabilize a continuous-time, linear plant using a switching controller is proposed. In particular, to prevent the actuator from fast switching, or chattering, which can not only increase the necessary data rate but also damage the system, we employ a dwell-time switching scheme. It is shown that a systematic partition of the state-space enables us to reduce the complexity of the design problem
Effects of robotic knee exoskeleton on human energy expenditure. A number of studies discuss the design and control of various exoskeleton mechanisms, yet relatively few address the effect on the energy expenditure of the user. In this paper, we discuss the effect of a performance augmenting exoskeleton on the metabolic cost of an able-bodied user/pilot during periodic squatting. We investigated whether an exoskeleton device will significantly reduce the metabolic cost and what is the influence of the chosen device control strategy. By measuring oxygen consumption, minute ventilation, heart rate, blood oxygenation, and muscle EMG during 5-min squatting series, at one squat every 2 s, we show the effects of using a prototype robotic knee exoskeleton under three different noninvasive control approaches: gravity compensation approach, position-based approach, and a novel oscillator-based approach. The latter proposes a novel control that ensures synchronization of the device and the user. Statistically significant decrease in physiological responses can be observed when using the robotic knee exoskeleton under gravity compensation and oscillator-based control. On the other hand, the effects of position-based control were not significant in all parameters although all approaches significantly reduced the energy expenditure during squatting.
Internet of Things for Smart Cities The Internet of Things (IoT) shall be able to incorporate transparently and seamlessly a large number of different and heterogeneous end systems, while providing open access to selected subsets of data for the development of a plethora of digital services. Building a general architecture for the IoT is hence a very complex task, mainly because of the extremely large variety of devices, link layer technologies, and services that may be involved in such a system. In this paper, we focus specifically on urban IoT systems, which, while still being quite a broad category, are characterized by their specific application domain. Urban IoTs, in fact, are designed to support the Smart City vision, which aims at exploiting the most advanced communication technologies to support added-value services for the administration of the city and for the citizens. This paper hence provides a comprehensive survey of the enabling technologies, protocols, and architecture for an urban IoT. Furthermore, the paper will present and discuss the technical solutions and best-practice guidelines adopted in the Padova Smart City project, a proof-of-concept deployment of an IoT island in the city of Padova, Italy, performed in collaboration with the city municipality.
Scalable and Privacy-Preserving Data Sharing Based on Blockchain. With the development of network technology and cloud computing, data sharing is becoming increasingly popular, and many scholars have conducted in-depth research to promote its development. As the scale of data sharing expands, its privacy protection has become a hot issue in research. Moreover, in data sharing, the data is usually maintained by multiple parties, which brings new challenges to protecting the privacy of these multi-party data. In this paper, we propose a trusted data sharing scheme using blockchain. We use blockchain to prevent the shared data from being tampered with, and use the Paillier cryptosystem to realize the confidentiality of the shared data. In the proposed scheme, the shared data can be traded, and the transaction information is protected by using the (p, t)-threshold Paillier cryptosystem. We conduct experiments in cloud storage scenarios and the experimental results demonstrate the efficiency and effectiveness of the proposed scheme.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
0
Generalized Lyapunov approach for functional differential inclusions. This paper is concerned with the generalized Lyapunov approach for functional differential inclusions (FDI). At first, we prove a class of generalized Halanay’s inequalities. Then, by applying the generalization of Halanay’s inequalities, we investigate the uniformly ultimate boundedness and uniformly asymptotic stability of FDI.
Adaptive Backstepping Control of Nonlinear Uncertain Systems with Quantized States This paper investigates the stabilization problem for uncertain nonlinear systems with quantized states. All states in the system are quantized by a static bounded quantizer, including uniform quantizer, hysteresis-uniform quantizer, and logarithmic-uniform quantizer as examples. An adaptive backstepping-based control algorithm, which can handle discontinuity, resulted from the state quantization ...
Adaptive Neural Quantized Control for a Class of MIMO Switched Nonlinear Systems With Asymmetric Actuator Dead-Zone. This paper concentrates on the adaptive state-feedback quantized control problem for a class of multiple-input-multiple-output (MIMO) switched nonlinear systems with unknown asymmetric actuator dead-zone. In this study, we employ different quantizers for different subsystem inputs. The main challenge of this study is to deal with the coupling between the quantizers and the dead-zone nonlinearities...
Neural-Network Approximation-Based Adaptive Periodic Event-Triggered Output-Feedback Control of Switched Nonlinear Systems This study considers an adaptive neural-network (NN) periodic event-triggered control (PETC) problem for switched nonlinear systems (SNSs). In the system, only the system output is available at sampling instants. A novel adaptive law and a state observer are constructed by using only the sampled system output. A new output-feedback adaptive NN PETC strategy is developed to reduce the usage of communication resources; it includes a controller that only uses event-sampling information and an event-triggering mechanism (ETM) that is only intermittently monitored at sampling instants. The proposed adaptive NN PETC strategy does not need restrictions on nonlinear functions reported in some previous studies. It is proven that all states of the closed-loop system (CLS) are semiglobally uniformly ultimately bounded (SGUUB) under arbitrary switchings by choosing an allowable sampling period. Finally, the proposed scheme is applied to a continuous stirred tank reactor (CSTR) system and a numerical example to verify its effectiveness.
Adaptive Asymptotic Tracking With Global Performance for Nonlinear Systems With Unknown Control Directions This article presents a global adaptive asymptotic tracking control method, capable of guaranteeing prescribed transient behavior for uncertain strict-feedback nonlinear systems with arbitrary relative degree and unknown control directions. Unlike most existing funnel controls that are built upon time-varying feedback gains, the proposed method is derived from a tracking error-dependent normalized...
Event-Triggered Fuzzy Bipartite Tracking Control for Network Systems Based on Distributed Reduced-Order Observers This article studies the distributed observer-based event-triggered bipartite tracking control problem for stochastic nonlinear multiagent systems with input saturation. First, different from conventional observers, we construct a novel distributed reduced-order observer to estimate unknown states for the stochastic nonlinear systems. Then, an event-triggered mechanism with relative threshold is i...
Hamming Embedding and Weak Geometric Consistency for Large Scale Image Search This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows us to further improve the accuracy.
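A minimal sketch of the Hamming-embedding part of this pipeline, under the assumption that descriptors have already been quantized to visual words: project each descriptor with a fixed random matrix, binarize against per-word medians learned from training data, and accept a match only if the two descriptors share a word and their signatures are within a Hamming threshold. The function names, the 32-bit signature length, and the threshold are illustrative, and the weak geometric consistency step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_hamming_embedding(descriptors, words, n_bits=32):
    """Fix a random projection and learn per-visual-word median thresholds."""
    P = rng.standard_normal((n_bits, descriptors.shape[1]))
    thresholds = {w: np.median(descriptors[words == w] @ P.T, axis=0)
                  for w in np.unique(words)}
    return P, thresholds

def signature(desc, word, P, thresholds):
    return (P @ desc > thresholds[word]).astype(np.uint8)

def he_match(desc_q, word_q, desc_db, word_db, P, thresholds, ht=10):
    """Match only if both descriptors share a visual word and their binary
    signatures are within Hamming distance ht."""
    if word_q != word_db:
        return False
    d = np.sum(signature(desc_q, word_q, P, thresholds)
               != signature(desc_db, word_db, P, thresholds))
    return int(d) <= ht

# Toy usage: 200 random 128-D descriptors assigned to 5 visual words.
descs = rng.standard_normal((200, 128))
words = rng.integers(0, 5, size=200)
P, thr = train_hamming_embedding(descs, words)
print(he_match(descs[0], words[0], descs[1], words[1], P, thr))
```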
Microsoft COCO: Common Objects in Context We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4-year-old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
The Whale Optimization Algorithm. The Whale Optimization Algorithm, inspired by humpback whales, is proposed. The WOA algorithm is benchmarked on 29 well-known test functions. The results on the unimodal functions show the superior exploitation of WOA. The exploration ability of WOA is confirmed by the results on multimodal functions. The results on structural design problems confirm the performance of WOA in practice. This paper proposes a novel nature-inspired meta-heuristic optimization algorithm, called Whale Optimization Algorithm (WOA), which mimics the social behavior of humpback whales. The algorithm is inspired by the bubble-net hunting strategy. WOA is tested with 29 mathematical optimization problems and 6 structural design problems. Optimization results prove that the WOA algorithm is very competitive compared to the state-of-the-art meta-heuristic algorithms as well as conventional methods. The source codes of the WOA algorithm are publicly available at http://www.alimirjalili.com/WOA.html
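Since the abstract names the update mechanics only at a high level, here is a hedged sketch of the commonly described WOA update rules (shrinking encirclement of the best solution, random search when |A| >= 1, and the logarithmic spiral), applied to the sphere function. The vectorized per-dimension handling of A and the default parameters are implementation assumptions, not taken from the paper's reference code.

```python
import numpy as np

def woa(f, dim=10, n_agents=30, iters=500, lb=-10.0, ub=10.0, b=1.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_agents, dim))
    fit = np.array([f(x) for x in X])
    best, best_fit = X[np.argmin(fit)].copy(), fit.min()
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters                      # decreases linearly from 2 to 0
        for i in range(n_agents):
            A = 2.0 * a * rng.random(dim) - a
            C = 2.0 * rng.random(dim)
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1.0):
                    D = np.abs(C * best - X[i])        # shrinking encirclement of the best
                    X[i] = best - A * D
                else:
                    X_rand = X[rng.integers(n_agents)] # exploration around a random agent
                    D = np.abs(C * X_rand - X[i])
                    X[i] = X_rand - A * D
            else:
                l = rng.uniform(-1.0, 1.0, size=dim)   # bubble-net logarithmic spiral
                X[i] = np.abs(best - X[i]) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
        fit = np.array([f(x) for x in X])
        if fit.min() < best_fit:
            best_fit, best = fit.min(), X[np.argmin(fit)].copy()
    return best, best_fit

print(woa(lambda x: float(np.sum(x ** 2)), dim=5, iters=200)[1])
```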
Collaborative privacy management The landscape of the World Wide Web with all its versatile services heavily relies on the disclosure of private user information. Unfortunately, the growing amount of personal data collected by service providers poses a significant privacy threat for Internet users. Targeting growing privacy concerns of users, privacy-enhancing technologies emerged. One goal of these technologies is the provision of tools that facilitate a more informative decision about personal data disclosures. A famous PET representative is the PRIME project that aims for a holistic privacy-enhancing identity management system. However, approaches like the PRIME privacy architecture require service providers to change their server infrastructure and add specific privacy-enhancing components. In the near future, service providers are not expected to alter internal processes. Addressing the dependency on service providers, this paper introduces a user-centric privacy architecture that enables the provider-independent protection of personal data. A central component of the proposed privacy infrastructure is an online privacy community, which facilitates the open exchange of privacy-related information about service providers. We characterize the benefits and the potentials of our proposed solution and evaluate a prototypical implementation.
On controller initialization in multivariable switching systems We consider a class of switched systems which consists of a linear MIMO and possibly unstable process in feedback interconnection with a multicontroller whose dynamics switch. It is shown how one can achieve significantly better transient performance by selecting the initial condition for every controller when it is inserted into the feedback loop. This initialization is obtained by performing the minimization of a quadratic cost function of the tracking error, controlled output, and control signal. We guarantee input-to-state stability of the closed-loop system when the average number of switches per unit of time is smaller than a specific value. If this is not the case then stability can still be achieved by adding a mild constraint to the optimization. We illustrate the use of our results in the control of a flexible beam actuated in torque. This system is unstable with two poles at the origin and contains several lightly damped modes, which can be easily excited by controller switching.
Completely Pinpointing the Missing RFID Tags in a Time-Efficient Way Radio Frequency Identification (RFID) technology has been widely used in inventory management in many scenarios, e.g., warehouses, retail stores, hospitals, etc. This paper investigates a challenging problem of complete identification of missing tags in large-scale RFID systems. Although this problem has attracted extensive attention from academy and industry, the existing work can hardly satisfy the stringent real-time requirements. In this paper, a Slot Filter-based Missing Tag Identification (SFMTI) protocol is proposed to reconcile some expected collision slots into singleton slots and filter out the expected empty slots as well as the unreconcilable collision slots, thereby achieving the improved time-efficiency. The theoretical analysis is conducted to minimize the execution time of the proposed SFMTI. We then propose a cost-effective method to extend SFMTI to the multi-reader scenarios. The extensive simulation experiments and performance results demonstrate that the proposed SFMTI protocol outperforms the most promising Iterative ID-free Protocol (IIP) by reducing nearly 45% of the required execution time, and is just within a factor of 1.18 from the lower bound of the minimum execution time.
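As a rough single-round illustration of the underlying missing-tag principle (not the full SFMTI protocol, which reconciles collision slots and filters empty slots over multiple frames): every known tag ID is hashed to a frame slot, and an expected-singleton slot that stays silent exposes its missing tag. The hash choice, frame size, and tag IDs are all illustrative assumptions.

```python
import hashlib

def slot_of(tag_id: str, frame_size: int, seed: int) -> int:
    digest = hashlib.sha256(f"{seed}:{tag_id}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % frame_size

def expected_singletons(known_tags, frame_size, seed):
    """Slots in which exactly one known tag is expected to reply."""
    slots = {}
    for tag in known_tags:
        slots.setdefault(slot_of(tag, frame_size, seed), []).append(tag)
    return {s: tags[0] for s, tags in slots.items() if len(tags) == 1}

def missing_in_singletons(known_tags, present_tags, frame_size=32, seed=1):
    singles = expected_singletons(known_tags, frame_size, seed)
    replied = {slot_of(tag, frame_size, seed) for tag in present_tags}
    # an expected-singleton slot that stays silent exposes its missing tag
    return [tag for slot, tag in singles.items() if slot not in replied]

known = [f"TAG{i:03d}" for i in range(20)]
present = [t for t in known if t not in {"TAG003", "TAG017"}]
# Only missing tags that happen to land in singleton slots are caught this round.
print(missing_in_singletons(known, present))
```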
A robust medical image watermarking against salt and pepper noise for brain MRI images. The ever-growing numbers of medical digital images and the need to share them among specialists and hospitals for better and more accurate diagnosis require that patients' privacy be protected. During the transmission of medical images between hospitals or specialists through the network, the main priority is to protect a patient's documents against any act of tampering by unauthorised individuals. Because of this, there is a need for a medical image authentication scheme to enable proper diagnosis of the patient. In addition, medical images are also susceptible to salt and pepper impulse noise during transmission over communication channels. This noise may also be intentionally used by invaders to corrupt the embedded watermarks inside the medical images. A common drawback of existing watermarking methods is their weakness against salt and pepper noise. The research carried out in this work addresses the issue of designing a new watermarking method that can withstand high density of salt and pepper noise for brain MRI images. For this purpose, a combination of a spatial domain watermarking method, channel coding and noise filtering schemes is used. The region of non-interest (RONI) of MRI images from five different databases is used as the embedding area and the electronic patient record (EPR) is considered as the embedded data. The quality of the watermarked image is evaluated using Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), and the accuracy of the extracted watermark is assessed in terms of Bit Error Rate (BER).
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
1.2
0.2
0.2
0.2
0.2
0.028571
0
0
0
0
0
0
0
0
Event-Triggered Fuzzy Adaptive Leader-Following Tracking Control of Nonaffine Multiagent Systems With Finite-Time Output Constraint and Input Saturation This article considers the problem of distributed adaptive fuzzy event-based finite-time prescribed performance leader-following tracking control for heterogeneous nonlinear multiagent systems (NMASs) over a directed topology. Each agent is considered in a nonaffine nonstrict-feedback form under input saturation and output constraint which contains unknown dynamics and external disturbances. Fuzzy...
Fuzzy Adaptive Tracking Control of Wheeled Mobile Robots With State-Dependent Kinematic and Dynamic Disturbances Unlike most works based on pure nonholonomic constraint, this paper proposes a fuzzy adaptive tracking control method for wheeled mobile robots, where unknown slippage occurs and violates the nonholonomic constraint in the form of state-dependent kinematic and dynamic disturbances. These disturbances degrade tracking performance significantly and, therefore, should be compensated. To this end, the kinematics with state-dependent disturbances are rigorously derived based on the general form of slippage in the mobile robots, and fuzzy adaptive observers together with parameter adaptation laws are designed to estimate the state-dependent disturbances in both kinematics and dynamics. Because of the modular structure of the proposed method, it can be easily combined with the previous controllers based on the model with the pure nonholonomic constraint, such that the combination of the fuzzy adaptive observers with the previously proposed backstepping-like feedback linearization controller can guarantee the trajectory tracking errors to be globally ultimately bounded, even when the nonholonomic constraint is violated, and their ultimate bounds can be adjusted appropriately for various types of trajectories in the presence of large initial tracking errors and disturbances. Both the stability analysis and simulation results are provided to validate the proposed controller.
Leader-following consensus in second-order multi-agent systems with input time delay: An event-triggered sampling approach. This paper analytically investigates an event-triggered leader-following consensus in second-order multi-agent systems with time delay in the control input. Each agent's update of the control input is driven by a properly defined event, which depends on the measurement error, the states of its neighboring agents at their individual time instants, and an exponential decay function. Necessary and sufficient conditions are presented to ensure a leader-following consensus. Moreover, the control is updated only when the event-triggered condition is satisfied, which significantly decreases the amount of communication among nodes, effectively avoids continuous communication among agents over the information channel, and excludes Zeno behavior of the triggering time sequences. A numerical simulation example is given to illustrate the theoretical results.
Adaptive neural control for a class of stochastic nonlinear systems by backstepping approach. This paper addresses adaptive neural control for a class of stochastic nonlinear systems which are not in strict-feedback form. Based on the structural characteristics of radial basis function (RBF) neural networks (NNs), a backstepping design approach is extended from stochastic strict-feedback systems to a class of more general stochastic nonlinear systems. In the control design procedure, RBF NNs are used to approximate unknown nonlinear functions and the backstepping technique is utilized to construct the desired controller. The proposed adaptive neural controller guarantees that all the closed-loop signals are bounded and the tracking error converges to a sufficiently small neighborhood of the origin. Two simulation examples are used to illustrate the effectiveness of the proposed approach.
Prescribed Performance Adaptive Fuzzy Containment Control for Nonlinear Multiagent Systems Using Disturbance Observer This article focuses on the containment control problem for nonlinear multiagent systems (MASs) with unknown disturbance and prescribed performance in the presence of dead-zone output. The fuzzy-logic systems (FLSs) are used to approximate the unknown nonlinear function, and a nonlinear disturbance observer is used to estimate unknown external disturbances. Meanwhile, a new distributed containment control scheme is developed by utilizing the adaptive compensation technique without assumption of the boundary value of unknown disturbance. Furthermore, a Nussbaum function is utilized to cope with the unknown control coefficient, which is caused by the nonlinearity in the output mechanism. Moreover, a second-order tracking differentiator (TD) is introduced to avoid the repeated differentiation of the virtual controller. The outputs of the followers converge to the convex hull spanned by the multiple dynamic leaders. It is shown that all the signals are semiglobally uniformly ultimately bounded (SGUUB), and the local neighborhood containment errors can converge into the prescribed boundary. Finally, the effectiveness of the approach proposed in this article is illustrated by simulation results.
Adaptive Event-Triggered Control of Uncertain Nonlinear Systems Using Intermittent Output Only Although rich collection of research results on event-triggered control exist, no effort has ever been made in integrating state/output triggering and controller triggering simultaneously with backstepping control design. The primary objective of this article is, by using intermittent output signal only, to build a backstepping adaptive event-triggered feedback control for a class of uncertain nonlinear systems. To do so, we need to tackle three technical obstacles. First, the nature of the event triggering makes the transmitted output signal discontinuous, rendering the regular recursive backstepping design method inapplicable as the repetitive differentiation of virtual control signals is literally undefined. Second, the effects arisen from the event-triggering action must be properly accommodated, but the current compensating method only works for systems in normal form, thus a new method needs to be developed in order to handle nonnormal form systems. Third, as only intermittent output signal is available, and at the same time, the impacts of certain terms containing unknown parameters (arising from event triggering) need to be compensated, it is rather challenging to design a suitable state observer. To circumvent these difficulties, we employ the dynamic filtering technique to avoid the differentiation of virtual control signals in the backstepping design, construct a new compensation scheme to deal with the effects of output triggering, and build a new form of state observer to facilitate the development of output feedback control. It is shown that, with the derived adaptive backstepping output-triggered control, all the closed-loop signals are ensured bounded and the transient system performance in the mean square error sense can be adjusted by appropriately adjusting design parameters. The benefits and effectiveness of the proposed scheme are also validated by numerical simulation.
Model-Based Adaptive Event-Triggered Control of Strict-Feedback Nonlinear Systems This paper is concerned with the adaptive event-triggered control problem of nonlinear continuous-time systems in strict-feedback form. By using the event-sampled neural network (NN) to approximate the unknown nonlinear function, an adaptive model and an associated event-triggered controller are designed by exploiting the backstepping method. In the proposed method, the feedback signals and the NN...
Massive MIMO for next generation wireless systems Multi-user MIMO offers big advantages over conventional point-to-point MIMO: it works with cheap single-antenna terminals, a rich scattering environment is not required, and resource allocation is simplified because every active terminal utilizes all of the time-frequency bins. However, multi-user MIMO, as originally envisioned, with roughly equal numbers of service antennas and terminals and frequency-division duplex operation, is not a scalable technology. Massive MIMO (also known as large-scale antenna systems, very large MIMO, hyper MIMO, full-dimension MIMO, and ARGOS) makes a clean break with current practice through the use of a large excess of service antennas over active terminals and time-division duplex operation. Extra antennas help by focusing energy into ever smaller regions of space to bring huge improvements in throughput and radiated energy efficiency. Other benefits of massive MIMO include extensive use of inexpensive low-power components, reduced latency, simplification of the MAC layer, and robustness against intentional jamming. The anticipated throughput depends on the propagation environment providing asymptotically orthogonal channels to the terminals, but so far experiments have not disclosed any limitations in this regard. While massive MIMO renders many traditional research problems irrelevant, it uncovers entirely new problems that urgently need attention: the challenge of making many low-cost low-precision components that work effectively together, acquisition and synchronization for newly joined terminals, the exploitation of extra degrees of freedom provided by the excess of service antennas, reducing internal power consumption to achieve total energy efficiency reductions, and finding new deployment scenarios. This article presents an overview of the massive MIMO concept and contemporary research on the topic.
Adaptive Federated Learning in Resource Constrained Edge Computing Systems Emerging technologies and applications including Internet of Things, social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data, to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a cen...
A new optimization method: big bang-big crunch Nature is the principal source for proposing new optimization methods such as genetic algorithms (GA) and simulated annealing (SA) methods. All traditional evolutionary algorithms are heuristic population-based search procedures that incorporate random variation and selection. The main contribution of this study is that it proposes a novel optimization method that relies on one of the theories of the evolution of the universe; namely, the Big Bang and Big Crunch Theory. In the Big Bang phase, energy dissipation produces disorder and randomness is the main feature of this phase; whereas, in the Big Crunch phase, randomly distributed particles are drawn into an order. Inspired by this theory, an optimization algorithm is constructed, which will be called the Big Bang-Big Crunch (BB-BC) method that generates random points in the Big Bang phase and shrinks those points to a single representative point via a center of mass or minimal cost approach in the Big Crunch phase. It is shown that the performance of the new (BB-BC) method demonstrates superiority over an improved and enhanced genetic search algorithm also developed by the authors of this study, and outperforms the classical genetic algorithm (GA) for many benchmark test functions.
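A compact, hedged sketch of the loop described above: each Big Crunch collapses the population to a centre of mass weighted by the reciprocal of the cost, and each Big Bang scatters new candidates around that centre with a spread that shrinks over the iterations. The shrink schedule, bounds, and population size are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def big_bang_big_crunch(f, dim=2, pop=50, iters=100, lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(pop, dim))          # initial Big Bang
    centre = X[0]
    for k in range(1, iters + 1):
        cost = np.array([f(x) for x in X])
        # Big Crunch: collapse to a centre of mass weighted by 1 / cost
        w = 1.0 / (cost + 1e-12)
        centre = (w[:, None] * X).sum(axis=0) / w.sum()
        # Big Bang: scatter new candidates around the centre, shrinking with k
        X = np.clip(centre + (ub - lb) * rng.standard_normal((pop, dim)) / k, lb, ub)
    return centre, f(centre)

print(big_bang_big_crunch(lambda x: float(np.sum(x ** 2))))
```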
Secure and privacy preserving keyword searching for cloud storage services Cloud storage services enable users to remotely access data in a cloud anytime and anywhere, using any device, in a pay-as-you-go manner. Moving data into a cloud offers great convenience to users since they do not have to care about the large capital investment in both the deployment and management of the hardware infrastructures. However, allowing a cloud service provider (CSP), whose purpose is mainly for making a profit, to take the custody of sensitive data, raises underlying security and privacy issues. To keep user data confidential against an untrusted CSP, a natural way is to apply cryptographic approaches, by disclosing the data decryption key only to authorized users. However, when a user wants to retrieve files containing certain keywords using a thin client, the adopted encryption system should not only support keyword searching over encrypted data, but also provide high performance. In this paper, we investigate the characteristics of cloud storage services and propose a secure and privacy preserving keyword searching (SPKS) scheme, which allows the CSP to participate in the decipherment, and to return only files containing certain keywords specified by the users, so as to reduce both the computational and communication overhead in decryption for users, on the condition of preserving user data privacy and user querying privacy. Performance analysis shows that the SPKS scheme is applicable to a cloud environment.
A review on interval type-2 fuzzy logic applications in intelligent control. A review of the applications of interval type-2 fuzzy logic in intelligent control has been considered in this paper. The fundamental focus of the paper is based on the basic reasons for using type-2 fuzzy controllers for different areas of application. Recently, bio-inspired methods have emerged as powerful optimization algorithms for solving complex problems. In the case of designing type-2 fuzzy controllers for particular applications, the use of bio-inspired optimization methods has helped in the complex task of finding the appropriate parameter values and structure of the fuzzy systems. In this review, we consider the application of genetic algorithms, particle swarm optimization and ant colony optimization as three different paradigms that help in the design of optimal type-2 fuzzy controllers. We also mention alternative approaches to designing type-2 fuzzy controllers without optimization techniques.
Design of robust fuzzy fault detection filter for polynomial fuzzy systems with new finite frequency specifications This paper investigates the problem of fault detection filter design for discrete-time polynomial fuzzy systems with faults and unknown disturbances. The frequency ranges of the faults and the disturbances are assumed to be known beforehand and to reside in low, middle or high frequency ranges. Thus, the proposed filter is designed in the finite frequency range to overcome the conservatism generated by those designed in the full frequency domain. Being of polynomial fuzzy structure, the proposed filter combines the H−/H∞ performances in order to ensure the best robustness to the disturbance and the best sensitivity to the fault. Design conditions are derived in Sum Of Squares formulations that can be easily solved via available software tools. Two illustrative examples are introduced to demonstrate the effectiveness of the proposed method and a comparative study with LMI method is also provided.
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies have focused on the structure design and assistance force optimization of soft LLEs, little work has been conducted on hardware circuit design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and introduce its hardware circuit design. A soft LLE for hip flexion assistance and a scalable hardware circuit system were proposed. To assess the efficacy of the soft LLE, experimental tests evaluating sensor data acquisition, force tracking performance, lower limb muscle activity, and metabolic cost were conducted. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.05, 0, 0, 0, 0, 0, 0, 0
Resilient Predictive Control for Cyber–Physical Systems Under Denial-of-Service Attacks In this brief, we investigate the Cyber-Physical Systems (CPSs) with measurement channels under Denial-of-Service (DoS) attacks. An equivalent predictive control system model and attack model are constructed firstly. To compensate for the data loss due to the attacks, the predictive controller is designed based on linear quadratic form. Furthermore, through the design of an observer-based output p...
Input-to-State Stabilizing Control under Denial-of-Service The issue of cyber-security has become ever more prevalent in the analysis and design of networked systems. In this paper, we analyze networked control systems in the presence of Denial-of-Service (DoS) attacks, namely attacks that prevent transmissions over the network. We characterize frequency and duration of the DoS attacks under which input-to-state stability (ISS) of the closed-loop system can be preserved. To achieve ISS, a suitable scheduling of the transmission times is determined. It is shown that the considered framework is flexible enough so as to allow the designer to choose from several implementation options that can be used for trading-off performance vs. communication resources. Examples are given to substantiate the analysis.
Survey on Recent Advances in Networked Control Systems. Networked control systems (NCSs) are systems whose control loops are closed through communication networks such that both control signals and feedback signals can be exchanged among system components (sensors, controllers, actuators, and so on). NCSs have a broad range of applications in areas such as industrial control and signal processing. This survey provides an overview on the theoretical dev...
Neural network-based event-triggered MFAC for nonlinear discrete-time processes. This paper is concerned with the event-triggered data-driven control problem for nonlinear discrete-time systems. An event-based data-driven model-free adaptive controller design algorithm together with constructing an adaptive event-trigger condition is developed. Different from the existing data-driven model-free adaptive control approach, an aperiodic neural network weight update law is introduced to estimate the controller parameters, and the event-trigger mechanism is activated only if the event-trigger error exceeds the threshold. Furthermore, by combining the equivalent-dynamic-linearization technique with the Lyapunov method, it is proved that both the closed-loop control system and the weight estimation error are ultimately bounded. Finally, two simulation examples are provided to demonstrate the effectiveness of the derived method.
Observer-based Fuzzy Adaptive Inverse Optimal Output Feedback Control for Uncertain Nonlinear Systems In this article, an observer-based fuzzy adaptive inverse optimal output feedback control problem is studied for a class of nonlinear systems in strict-feedback form. The considered nonlinear systems contain unknown nonlinear dynamics and their states are not measured directly. Fuzzy logic systems are applied to identify the unknown nonlinear dynamics and an auxiliary nonlinear system is construct...
Observer-Based Event-Triggered Containment Control for MASs Under DoS Attacks This article studies the observer-based event-triggered containment control problem for linear multiagent systems (MASs) under denial-of-service (DoS) attacks. In order to deal with situations where MASs states are unmeasurable, an improved separation method-based observer design method with less conservativeness is proposed to estimate MASs states. To save communication resources and achieve the containment control objective, a novel observer-based event-triggered containment controller design method based on observer states is proposed for MASs under the influence of DoS attacks, which can make the MASs resilient to DoS attacks. In addition, the Zeno behavior can be eliminated effectively by introducing a positive constant into the designed event-triggered mechanism. Finally, a practical example is presented to illustrate the effectiveness of the designed observer and the event-triggered containment controller.
Event-Triggered Adaptive Control for a Class of Uncertain Nonlinear Systems. In this technical note, the problem of event-trigger based adaptive control for a class of uncertain nonlinear systems is considered. The nonlinearities of the system are not required to be globally Lipschitz. Since the system contains unknown parameters, it is a difficult task to check the assumption of the input-to-state stability (ISS) with respect to the measurement errors, which is required in most existing literature. To solve this problem, we design both the adaptive controller and the triggering event at the same time such that the ISS assumption is no longer needed. In addition to presenting new design methodologies based on the fixed threshold strategy and relative threshold strategy, we also propose a new strategy named the switching threshold strategy. It is shown that the proposed control schemes guarantee that all the closed-loop signals are globally bounded and the tracking/stabilization error exponentially converges towards a compact set which is adjustable.
Wireless sensor network survey A wireless sensor network (WSN) has important applications such as remote environmental monitoring and target tracking. This has been enabled by the availability, particularly in recent years, of sensors that are smaller, cheaper, and intelligent. These sensors are equipped with wireless interfaces with which they can communicate with one another to form a network. The design of a WSN depends significantly on the application, and it must consider factors such as the environment, the application's design objectives, cost, hardware, and system constraints. The goal of our survey is to present a comprehensive review of the recent literature since the publication of [I.F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, A survey on sensor networks, IEEE Communications Magazine, 2002]. Following a top-down approach, we give an overview of several new applications and then review the literature on various aspects of WSNs. We classify the problems into three different categories: (1) internal platform and underlying operating system, (2) communication protocol stack, and (3) network services, provisioning, and deployment. We review the major development in these three categories and outline new challenges.
Mobile Edge Computing: A Survey. Mobile edge computing (MEC) is an emergent architecture where cloud computing services are extended to the edge of networks leveraging mobile base stations. As a promising edge technology, it can be applied to mobile, wireless, and wireline scenarios, using software and hardware platforms, located at the network edge in the vicinity of end-users. MEC provides seamless integration of multiple appli...
Computer intrusion detection through EWMA for autocorrelated and uncorrelated data Reliability and quality of service from information systems has been threatened by cyber intrusions. To protect information systems from intrusions and thus assure reliability and quality of service, it is highly desirable to develop techniques that detect intrusions. Many intrusions manifest in anomalous changes in intensity of events occurring in information systems. In this study, we apply, tes...
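The abstract's idea of flagging anomalous changes in event intensity maps naturally onto an EWMA control chart. A minimal sketch follows, assuming uncorrelated observations; the smoothing constant, control-limit width, and the Poisson toy data are illustrative choices, not values from the study.

```python
import numpy as np

def ewma_alarms(x, lam=0.2, L=3.0):
    """Flag points where the EWMA statistic leaves the +/- L sigma control limits.

    x   : 1-D array of event-intensity observations
    lam : EWMA smoothing constant (illustrative choice)
    L   : control-limit width in standard deviations
    """
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)        # in practice, estimate from clean training data
    z = mu
    alarms = []
    for t, xt in enumerate(x):
        z = lam * xt + (1 - lam) * z
        # variance of the EWMA statistic at time t
        var_z = sigma ** 2 * (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * (t + 1)))
        if abs(z - mu) > L * np.sqrt(var_z):
            alarms.append(t)
    return alarms

rng = np.random.default_rng(2)
normal = rng.poisson(10, 200)                  # baseline event counts
attack = rng.poisson(18, 20)                   # intensity shift mimicking an intrusion
print(ewma_alarms(np.concatenate([normal, attack])))
```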
The industrial indoor channel: large-scale and temporal fading at 900, 2400, and 5200 MHz In this paper, large-scale fading and temporal fading characteristics of the industrial radio channel at 900, 2400, and 5200 MHz are determined. In contrast to measurements performed in houses and in office buildings, few attempts have been made until now to model propagation in industrial environments. In this paper, the industrial environment is categorized into different topographies. Industrial topographies are defined separately for large-scale and temporal fading, and their definition is based upon the specific physical characteristics of the local surroundings affecting both types of fading. Large-scale fading is well expressed by a one-slope path-loss model and excellent agreement with a lognormal distribution is obtained. Temporal fading is found to be Ricean and Ricean K-factors have been determined. Ricean K-factors are found to follow a lognormal distribution.
Placing Virtual Machines to Optimize Cloud Gaming Experience Optimizing cloud gaming experience is no easy task due to the complex tradeoff between gamer quality of experience (QoE) and provider net profit. We tackle the challenge and study an optimization problem to maximize the cloud gaming provider's total profit while achieving just-good-enough QoE. We conduct measurement studies to derive the QoE and performance models. We formulate and optimally solve the problem. The optimization problem has exponential running time, and we develop an efficient heuristic algorithm. We also present an alternative formulation and algorithms for closed cloud gaming services with dedicated infrastructures, where the profit is not a concern and overall gaming QoE needs to be maximized. We present a prototype system and testbed using off-the-shelf virtualization software, to demonstrate the practicality and efficiency of our algorithms. Our experience in realizing the testbed sheds some light on how cloud gaming providers may build up their own profitable services. Lastly, we conduct extensive trace-driven simulations to evaluate our proposed algorithms. The simulation results show that the proposed heuristic algorithms: (i) produce close-to-optimal solutions, (ii) scale to large cloud gaming services with 20,000 servers and 40,000 gamers, and (iii) outperform the state-of-the-art placement heuristic, e.g., by up to 3.5 times in terms of net profits.
Flymap: Interacting With Maps Projected From A Drone Interactive maps have become ubiquitous in our daily lives, helping us reach destinations and discovering our surroundings. Yet, designing map interactions is not straightforward and depends on the device being used. As mobile devices evolve and become independent from users, such as with robots and drones, how will we interact with the maps they provide? We propose FlyMap as a novel user experience for drone-based interactive maps. We designed and developed three interaction techniques for FlyMap's usage scenarios. In a comprehensive indoor study (N = 16), we show the strengths and weaknesses of two techniques on users' cognition, task load, and satisfaction. FlyMap was then pilot tested with the third technique outdoors in real world conditions with four groups of participants (N = 13). We show that FlyMap's interactivity is exciting to users and opens the space for more direct interactions with drones.
A Hierarchical Architecture Using Biased Min-Consensus for USV Path Planning This paper proposes a hierarchical architecture using the biased min-consensus (BMC) method to solve the path planning problem of an unmanned surface vessel (USV). We take the fixed-point monitoring mission as an example, where a series of intermediate monitoring points should be visited once by the USV. The whole framework incorporates a low-level layer planning the standard path between any two intermediate points, and a high-level layer determining their visiting sequence. First, the optimal standard path in terms of voyage time and risk measure is planned by the BMC protocol, given that the corresponding graph is constructed with node states and edge weights. The USV will avoid obstacles or keep a certain safe distance, and arrive at the target point quickly. It is proven theoretically that the state of the graph will converge to be stable after finite iterations, i.e., the optimal solution can be found by BMC with low calculation complexity. Second, by incorporating the constraint of intermediate points, their visiting sequence is optimized by BMC again with the reconstruction of a new virtual graph based on the previously planned results. The extensive simulation results in various scenarios also validate the feasibility and effectiveness of our method for autonomous navigation.
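The exact BMC protocol is not spelled out in the abstract, so the following is only a hedged sketch of the core min-consensus idea it alludes to: every non-target node repeatedly replaces its state with the minimum of a neighbor's state plus the connecting edge weight, while the target node is pinned at zero, and the iteration converges to shortest-path costs after finitely many sweeps. The graph, weights, and node names below are made up for illustration.

```python
import math

def min_consensus_path_costs(edges, target):
    """Iterate x_i <- min_j (x_j + w_ij) until convergence; the target state stays 0.

    edges  : dict mapping node -> list of (neighbor, weight) pairs
    target : goal node
    Returns the converged cost-to-go for every node (the shortest-path distance).
    """
    nodes = set(edges)
    x = {n: (0.0 if n == target else math.inf) for n in nodes}
    changed = True
    while changed:                      # converges after finitely many sweeps
        changed = False
        for i in nodes:
            if i == target:
                continue
            best = min((x[j] + w for j, w in edges[i]), default=math.inf)
            if best < x[i]:
                x[i], changed = best, True
    return x

# Tiny example: obstacle risk could be encoded by inflating edge weights.
graph = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("A", 1.0), ("C", 1.5), ("D", 5.0)],
    "C": [("A", 4.0), ("B", 1.5), ("D", 1.0)],
    "D": [("B", 5.0), ("C", 1.0)],
}
print(min_consensus_path_costs(graph, target="D"))
```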
Scores: 1.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.005, 0, 0, 0, 0, 0, 0, 0
A Survey of Intelligent Network Slicing Management for Industrial IoT: Integrated Approaches for Smart Transportation, Smart Energy, and Smart Factory Network slicing has been widely agreed as a promising technique to accommodate diverse services for the Industrial Internet of Things (IIoT). Smart transportation, smart energy, and smart factory/manufacturing are the three key services to form the backbone of IIoT. Network slicing management is of paramount importance in the face of IIoT services with diversified requirements. It is important to have a comprehensive survey on intelligent network slicing management to provide guidance for future research in this field. In this paper, we provide a thorough investigation and analysis of network slicing management in its general use cases as well as specific IIoT services including smart transportation, smart energy and smart factory, and highlight the advantages and drawbacks across many existing works/surveys and this current survey in terms of a set of important criteria. In addition, we present an architecture for intelligent network slicing management for IIoT focusing on the above three IIoT services. For each service, we provide a detailed analysis of the application requirements and network slicing architecture, as well as the associated enabling technologies. Further, we present a deep understanding of network slicing orchestration and management for each service, in terms of orchestration architecture, AI-assisted management and operation, edge computing empowered network slicing, reliability, and security. For the presented architecture for intelligent network slicing management and its application in each IIoT service, we identify the corresponding key challenges and open issues that can guide future research. To facilitate the understanding of the implementation, we provide a case study of the intelligent network slicing management for integrated smart transportation, smart energy, and smart factory. Some lessons learnt include: 1) For smart transportation, it is necessary to explicitly identify service function chains (SFCs) for specific applications along with the orchestration of underlying VNFs/PNFs for supporting such SFCs; 2) For smart energy, it is crucial to guarantee both ultra-low latency and extremely high reliability; 3) For smart factory, resource management across heterogeneous network domains is of paramount importance. We hope that this survey is useful for both researchers and engineers on the innovation and deployment of intelligent network slicing management for IIoT.
A MTC traffic generation and QCI priority-first scheduling algorithm over LTE As Machine-to-Machine (M2M) communication continues to grow rapidly, a full study on overload control approaches to manage the data and signaling of H2H traffic from massive MTC devices is required. In this paper, a new M2M resource-scheduling algorithm for Long Term Evolution (LTE) is proposed. It provides Quality of Service (QoS) guarantees to Guaranteed Bit Rate (GBR) services; we set priorities for the critical M2M services to guarantee the transport of GBR services, which have high QoS needs. Additionally, we simulate and compare different methods and offer further observations on the solution design.
On Service Resilience in Cloud-Native 5G Mobile Systems. To cope with the tremendous growth in mobile data traffic on one hand, and the modest average revenue per user on the other hand, mobile operators have been exploring network virtualization and cloud computing technologies to build cost-efficient and elastic mobile networks and to have them offered as a cloud service. In such cloud-based mobile networks, ensuring service resilience is an important challenge to tackle. Indeed, high availability and service reliability are important requirements of carrier grade, but not necessarily intrinsic features of cloud computing. Building a system that requires the five nines reliability on a platform that may not always grant it is, therefore, a hurdle. Effectively, in carrier cloud, service resilience can be heavily impacted by a failure of any network function (NF) running on a virtual machine (VM). In this paper, we introduce a framework, along with efficient and proactive restoration mechanisms, to ensure service resilience in carrier cloud. As restoration of a NF failure impacts a potential number of users, adequate network overload control mechanisms are also proposed. A mathematical model is developed to evaluate the performance of the proposed mechanisms. The obtained results are encouraging and demonstrate that the proposed mechanisms efficiently achieve their design goals.
Methodology for the Design and Evaluation of Self-Healing LTE Networks. Self-healing networks aim to detect cells with service degradation, identify the fault cause of their problem, and execute compensation and repair actions. The development of this type of automatic system presents several challenges to be confronted. The first challenge is the scarce number of historically reported faults, which greatly complicates the evaluation of novel self-healing techniques. ...
Intelligence and Learning in O-RAN for Data-Driven NextG Cellular Networks Next generation (NextG) cellular networks will be natively cloud-based and built on programmable, virtualized, and disaggregated architectures. The separation of control functions from the hardware fabric and the introduction of standardized control interfaces will enable the definition of custom closed-control loops, which will ultimately enable embedded intelligence and real-time analytics, thus...
Network Slicing and Softwarization: A Survey on Principles, Enabling Technologies, and Solutions. Network slicing has been identified as the backbone of the rapidly evolving 5G technology. However, as its consolidation and standardization progress, there are no literatures that comprehensively discuss its key principles, enablers, and research challenges. This paper elaborates network slicing from an end-to-end perspective detailing its historical heritage, principal concepts, enabling technol...
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Adam: A Method for Stochastic Optimization. We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.
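As a quick reference for the update rule summarized above, here is a minimal sketch of the Adam iteration with the commonly cited default hyper-parameters, applied to a toy quadratic objective; it is a didactic loop, not a drop-in replacement for a library optimizer.

```python
import numpy as np

def adam_minimize(grad, x0, steps=500, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """Minimal Adam loop: bias-corrected first/second moment estimates drive the step."""
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g           # first moment (mean of gradients)
        v = beta2 * v + (1 - beta2) * g * g       # second moment (uncentered variance)
        m_hat = m / (1 - beta1 ** t)              # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Toy objective f(x) = ||x - 3||^2 with gradient 2(x - 3); converges to [3, 3, 3].
print(adam_minimize(lambda x: 2 * (x - 3.0), x0=np.zeros(3), steps=2000, lr=0.05))
```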
Untangling Blockchain: A Data Processing View of Blockchain Systems. Blockchain technologies are gaining massive momentum in the last few years. Blockchains are distributed ledgers that enable parties who do not fully trust each other to maintain a set of global states. The parties agree on the existence, values, and histories of the states. As the technology landscape is expanding rapidly, it is both important and challenging to have a firm grasp of what the core ...
Multivariate Short-Term Traffic Flow Forecasting Using Time-Series Analysis Existing time-series models that are used for short-term traffic condition forecasting are mostly univariate in nature. Generally, the extension of existing univariate time-series models to a multivariate regime involves huge computational complexities. A different class of time-series models called structural time-series model (STM) (in its multivariate form) has been introduced in this paper to develop a parsimonious and computationally simple multivariate short-term traffic condition forecasting algorithm. The different components of a time-series data set such as trend, seasonal, cyclical, and calendar variations can separately be modeled in STM methodology. A case study at the Dublin, Ireland, city center with serious traffic congestion is performed to illustrate the forecasting strategy. The results indicate that the proposed forecasting algorithm is an effective approach in predicting real-time traffic flow at multiple junctions within an urban transport network.
Dynamic transfer among alternative controllers and its relation to antiwindup controller design Advanced control strategies and modern consulting provide new challenges for the classical problem of bumpless transfer. It can, for example, be necessary to transfer between an only approximately known existing analog controller and a new digital or adaptive controller without accessing any states. Transfer ought to be bidirectional and not presuppose steady state, so that an immediate back-transfer is possible if the new controller should drive the plant unstable. We present a scheme that meets these requirements. By casting the problem of bidirectional transfer into an associated tracking control problem, systematic analysis and design procedures from control theory can be applied. The associated control problem also has a correspondence to the design of antiwindup controllers. The paper includes laboratory and industrial applications.
Adaptive dynamic programming and optimal control of nonlinear nonaffine systems. In this paper, a novel optimal control design scheme is proposed for continuous-time nonaffine nonlinear dynamic systems with unknown dynamics by adaptive dynamic programming (ADP). The proposed methodology iteratively updates the control policy online by using the state and input information without identifying the system dynamics. An ADP algorithm is developed, and can be applied to a general class of nonlinear control design problems. The convergence analysis for the designed control scheme is presented, along with rigorous stability analysis for the closed-loop system. The effectiveness of this new algorithm is illustrated by two simulation examples.
Adaptive fuzzy tracking control for switched uncertain strict-feedback nonlinear systems. • Adaptive tracking control for switched strict-feedback nonlinear systems is proposed. • The generalized fuzzy hyperbolic model is used to approximate nonlinear functions. • The designed controller has fewer design parameters compared with existing methods.
Learning Feature Recovery Transformer for Occluded Person Re-Identification One major issue that challenges person re-identification (Re-ID) is the ubiquitous occlusion over the captured persons. There are two main challenges for the occluded person Re-ID problem, i.e., the interference of noise during feature matching and the loss of pedestrian information brought by the occlusions. In this paper, we propose a new approach called Feature Recovery Transformer (FRT) to address the two challenges simultaneously, which mainly consists of visibility graph matching and feature recovery transformer. To reduce the interference of the noise during feature matching, we mainly focus on visible regions that appear in both images and develop a visibility graph to calculate the similarity. In terms of the second challenge, based on the developed graph similarity, for each query image, we propose a recovery transformer that exploits the feature sets of its k-nearest neighbors in the gallery to recover the complete features. Extensive experiments across different person Re-ID datasets, including occluded, partial and holistic datasets, demonstrate the effectiveness of FRT. Specifically, FRT significantly outperforms state-of-the-art results by at least 6.2% Rank-1 accuracy and 7.2% mAP scores on the challenging Occluded-Duke dataset.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0
Energy Efficient Dynamic Offloading in Mobile Edge Computing for Internet of Things With proliferation of computation-intensive Internet of Things (IoT) applications, the limited capacity of end devices can deteriorate service performance. To address this issue, computation tasks can be offloaded to the Mobile Edge Computing (MEC) for processing. However, it consumes considerable energy to transmit and process these tasks. In this paper, we study the energy efficient task offloading in MEC. Specifically, we formulate it as a stochastic optimization problem, with the objective of minimizing the energy consumption of task offloading while guaranteeing the average queue length. Solving this offloading optimization problem faces many technical challenges due to the uncertainty and dynamics of wireless channel state and task arrival process, and the large scale of solution space. To tackle these challenges, we apply stochastic optimization techniques to transform the original stochastic problem into a deterministic optimization problem, and propose an energy efficient dynamic offloading algorithm called EEDOA. EEDOA can be implemented in an online manner to make the task offloading decisions with polynomial time complexity. Theoretical analysis is provided to demonstrate that EEDOA can approximate the minimal transmission energy consumption while still bounding the queue length. Experiment results are presented which show the EEDOA’s effectiveness.
QoE-Driven Edge Caching in Vehicle Networks Based on Deep Reinforcement Learning The Internet of vehicles (IoV) is a large information interaction network that collects information on vehicles, roads and pedestrians. One of the important uses of vehicle networks is to meet the entertainment needs of driving users through communication between vehicles and roadside units (RSUs). Due to the limited storage space of RSUs, determining the content cached in each RSU is a key challenge. With the development of 5G and video editing technology, short video systems have become increasingly popular. Current widely used cache update methods, such as partial file precaching and content popularity- and user interest-based determination, are inefficient for such systems. To solve this problem, this paper proposes a QoE-driven edge caching method for the IoV based on deep reinforcement learning. First, a class-based user interest model is established. Compared with the traditional file popularity- and user interest distribution-based cache update methods, the proposed method is more suitable for systems with a large number of small files. Second, a quality of experience (QoE)-driven RSU cache model is established based on the proposed class-based user interest model. Third, a deep reinforcement learning method is designed to address the QoE-driven RSU cache update issue effectively. The experimental results verify the effectiveness of the proposed algorithm.
Multi-Hop Cooperative Computation Offloading for Industrial IoT–Edge–Cloud Computing Environments The concept of the industrial Internet of things (IIoT) is being widely applied to service provisioning in many domains, including smart healthcare, intelligent transportation, autopilot, and the smart grid. However, because of the IIoT devices’ limited onboard resources, supporting resource-intensive applications, such as 3D sensing, navigation, AI processing, and big-data analytics, remains a challenging task. In this paper, we study the multi-hop computation-offloading problem for the IIoT–edge–cloud computing model and adopt a game-theoretic approach to achieving Quality of service (QoS)-aware computation offloading in a distributed manner. First, we study the computation-offloading and communication-routing problems with the goal of minimizing each task's computation time and energy consumption, formulating the joint problem as a potential game in which the IIoT devices determine their computation-offloading strategies. Second, we apply a free–bound mechanism that can ensure a finite improvement path to a Nash equilibrium. Third, we propose a multi-hop cooperative-messaging mechanism and develop two QoS-aware distributed algorithms that can achieve the Nash equilibrium. Our simulation results show that our algorithms offer a stable performance gain for IIoT in various scenarios and scale well as the device size increases.
QoE-Driven Cache Management for HTTP Adaptive Bit Rate Streaming Over Wireless Networks In this paper, we investigate the problem of optimal content cache management for HTTP adaptive bit rate (ABR) streaming over wireless networks. Specifically, in the media cloud, each content is transcoded into a set of media files with diverse playback rates, and appropriate files will be dynamically chosen in response to channel conditions and screen forms. Our design objective is to maximize the quality of experience (QoE) of an individual content for the end users, under a limited storage budget. Deriving a logarithmic QoE model from our experimental results, we formulate the individual content cache management for HTTP ABR streaming over wireless network as a constrained convex optimization problem. We adopt a two-step process to solve the snapshot problem. First, using the Lagrange multiplier method, we obtain the numerical solution of the set of playback rates for a fixed number of cache copies and characterize the optimal solution analytically. Our investigation reveals a fundamental phase change in the optimal solution as the number of cached files increases. Second, we develop three alternative search algorithms to find the optimal number of cached files, and compare their scalability under average and worst complexity metrics. Our numerical results suggest that, under optimal cache schemes, the maximum QoE measurement, i.e., mean-opinion-score (MOS), is a concave function of the allowable storage size. Our cache management can provide high expected QoE with low complexity, shedding light on the design of HTTP ABR streaming services over wireless networks.
Detecting Low-Quality Workers in QoE Crowdtesting: A Worker Behavior-Based Approach. QoE crowdtesting is increasingly popular among researchers to conduct subjective assessments of network services. Experimenters can easily access a huge pool of human subjects through crowdsourcing platforms. Without any supervision, low-quality workers, however, can threaten the reliability of the assessments. One of the approaches in classifying the quality of workers is to analyze their behavior during the experiments, such as mouse cursor trajectory. However, existing works analyze the trajectory coarsely, which cannot fully extract the imbedded information. In this paper, we propose a novel method to detect low-quality workers in QoE crowdtesting by analyzing the worker behavior. Our approach is to construct a predictive model by using supervised learning algorithms. A quality score is computed by applying existing anti-cheating techniques and human inspections to label the workers. We define a set of ten worker behavior metrics, which quantifies different types of worker behavior, including finer-grained cursor trajectory analysis. A multiclass Naïve Bayes classifier is applied to train a model to predict the quality of workers from the metrics. We have conducted video QoE assessments on Amazon Mechanical Turk and CrowdFlower to collect the worker behavior. Our results show that the error rates of the model trained from four metrics are equal or less than 30%. We further find that combining the predictions from the four different 5-point Likert scale rating methods can improve the success rate in detecting low-quality workers to around 80%. Finally, our method is 16.5% and 42.9% better in precision and recall than CrowdMOS.
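To make the classification pipeline concrete, the sketch below trains a Gaussian Naive Bayes model (used here as a stand-in for the paper's multiclass Naive Bayes classifier) on synthetic worker-behavior features; the three feature columns and their distributions are hypothetical placeholders for the paper's ten behavior metrics, and the labels stand in for the quality scores described above.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(3)

# Synthetic stand-ins for behavior metrics (e.g., rating time, cursor path length);
# the real feature definitions come from the paper's ten behavior metrics.
n = 600
low_quality = rng.integers(0, 2, n)                      # 1 = low-quality worker
features = np.column_stack([
    rng.normal(5 - 3 * low_quality, 1.0),                # e.g., seconds spent per rating
    rng.normal(800 - 400 * low_quality, 120.0),          # e.g., cursor trajectory length (px)
    rng.normal(0.4 + 0.3 * low_quality, 0.1),            # e.g., fraction of identical ratings
])

X_tr, X_te, y_tr, y_te = train_test_split(features, low_quality, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```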
QoE-Driven Transmission-Aware Cache Placement and Cooperative Beamforming Design in Cloud-RANs Pre-caching popular videos at base stations (BSs) is a cost-effective way to significantly alleviate the backhaul pressure. With the video caching, the cache placement and the transmission strategy are intertwined with each other and jointly affect the system performance. Furthermore, the cache placement is updated in a much longer timescale than the transmission strategy. In this paper, the long-term transmission-aware cache placement and the short-term transmission strategy are designed to enhance the quality of experience (QoE) for the video streaming in cloud radio access networks (cloud-RANs). Specifically, consider a cache-enabled cloud-RAN, video contents are cached at BSs, and user requests are cooperatively satisfied by multiple BSs via the cooperative beamforming. To improve the weighted sum of users’ QoE, the long-term transmission-aware caching problem in the caching stage and the short-term transmission problem in the delivery stage are respectively formulated, taking into account the backhaul capacity constraint, the transmission power constraint, and the storage size constraint. For the caching problem, the sample average approach is first used to approximate the long-term average QoE value. Then, cache placement strategies are devised in both centralized and distributed manner. For the transmission problem, the full-cooperative beamforming scheme is studied with the optimized cache placement, and an iterative algorithm is proposed. Simulation results show that our proposed transmission-aware cache placement and transmission strategies can achieve higher QoE performance than other cache placement and transmission strategies.
Fast Adaptive Task Offloading in Edge Computing Based on Meta Reinforcement Learning Multi-access edge computing (MEC) aims to extend cloud service to the network edge to reduce network traffic and service latency. A fundamental problem in MEC is how to efficiently offload heterogeneous tasks of mobile applications from user equipment (UE) to MEC hosts. Recently, many deep reinforcement learning (DRL)-based methods have been proposed to learn offloading policies through interacting with the MEC environment that consists of UE, wireless channels, and MEC hosts. However, these methods have weak adaptability to new environments because they have low sample efficiency and need full retraining to learn updated policies for new environments. To overcome this weakness, we propose a task offloading method based on meta reinforcement learning, which can adapt fast to new environments with a small number of gradient updates and samples. We model mobile applications as Directed Acyclic Graphs (DAGs) and the offloading policy by a custom sequence-to-sequence (seq2seq) neural network. To efficiently train the seq2seq network, we propose a method that synergizes the first order approximation and clipped surrogate objective. The experimental results demonstrate that this new offloading method can reduce the latency by up to 25 percent compared to three baselines while being able to adapt fast to new environments.
Vision meets robotics: The KITTI dataset We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS/IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.
Reliable Computation Offloading for Edge-Computing-Enabled Software-Defined IoV Internet of Vehicles (IoV) has drawn great interest recent years. Various IoV applications have emerged for improving the safety, efficiency, and comfort on the road. Cloud computing constitutes a popular technique for supporting delay-tolerant entertainment applications. However, for advanced latency-sensitive applications (e.g., auto/assisted driving and emergency failure management), cloud computing may result in excessive delay. Edge computing, which extends computing and storage capabilities to the edge of the network, emerges as an attractive technology. Therefore, to support these computationally intensive and latency-sensitive applications in IoVs, in this article, we integrate mobile-edge computing nodes (i.e., mobile vehicles) and fixed edge computing nodes (i.e., fixed road infrastructures) to provide low-latency computing services cooperatively. For better exploiting these heterogeneous edge computing resources, the concept of software-defined networking (SDN) and edge-computing-aided IoV (EC-SDIoV) is conceived. Moreover, in a complex and dynamic IoV environment, the outage of both processing nodes and communication links becomes inevitable, which may have life-threatening consequences. In order to ensure the completion with high reliability of latency-sensitive IoV services, we introduce both partial computation offloading and reliable task allocation with the reprocessing mechanism to EC-SDIoV. Since the optimization problem is nonconvex and NP-hard, a heuristic algorithm, fault-tolerant particle swarm optimization algorithm is designed for maximizing the reliability (FPSO-MR) with latency constraints. Performance evaluation results validate that the proposed scheme is indeed capable of reducing the latency as well as improving the reliability of the EC-SDIoV.
Computer intrusion detection through EWMA for autocorrelated and uncorrelated data Reliability and quality of service from information systems has been threatened by cyber intrusions. To protect information systems from intrusions and thus assure reliability and quality of service, it is highly desirable to develop techniques that detect intrusions. Many intrusions manifest in anomalous changes in intensity of events occurring in information systems. In this study, we apply, tes...
An evaluation of direct attacks using fake fingers generated from ISO templates This work reports a vulnerability evaluation of a highly competitive ISO matcher to direct attacks carried out with fake fingers generated from ISO templates. Experiments are carried out on a fingerprint database acquired in a real-life scenario and show that the evaluated system is highly vulnerable to the proposed attack scheme, granting access in over 75% of the attempts (for a high-security operating point). Thus, the study disproves the popular belief of minutiae templates non-reversibility and raises a key vulnerability issue in the use of non-encrypted standard templates. (This article is an extended version of Galbally et al., 2008, which was awarded with the IBM Best Student Paper Award in the track of Biometrics at ICPR 2008).
Collaborative Mobile Charging The limited battery capacity of sensor nodes has become one of the most critical impediments that stunt the deployment of wireless sensor networks (WSNs). Recent breakthroughs in wireless energy transfer and rechargeable lithium batteries provide a promising alternative to power WSNs: mobile vehicles/robots carrying high volume batteries serve as mobile chargers to periodically deliver energy to sensor nodes. In this paper, we consider how to schedule multiple mobile chargers to optimize energy usage effectiveness, such that every sensor will not run out of energy. We introduce a novel charging paradigm, collaborative mobile charging, where mobile chargers are allowed to intentionally transfer energy between themselves. To provide some intuitive insights into the problem structure, we first consider a scenario that satisfies three conditions, and propose a scheduling algorithm, PushWait, which is proven to be optimal and can cover a one-dimensional WSN of infinite length. Then, we remove the conditions one by one, investigating chargers' scheduling in a series of scenarios ranging from the most restricted one to a general 2D WSN. Through theoretical analysis and simulations, we demonstrate the advantages of the proposed algorithms in energy usage effectiveness and charging coverage.
Multiple switching-time-dependent discretized Lyapunov functions/functionals methods for stability analysis of switched time-delay stochastic systems. This paper presents novel approaches for stability analysis of switched linear time-delay stochastic systems under dwell time constraint. Instead of using comparison principle, piecewise switching-time-dependent discretized Lyapunov functions/functionals are introduced to analyze the stability of switched stochastic systems with constant or time-varying delays. These Lyapunov functions/functionals are decreasing during the dwell time and non-increasing at switching instants, which lead to two mode-dependent dwell-time-based delay-independent stability criteria for the switched systems without restricting the stability of the subsystems. Comparison and numerical examples are provided to show the efficiency of the proposed results.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0
Three-Dimensional Resource Matching for Internet of Things Underlaying Cognitive Capacity Harvesting Networks In this paper, we propose a cognitive capacity harvesting network (CCHN) based Internet of Things (IoT) architecture, which allows the lightweight IoT devices without spectrum monitoring/sensing capabilities to enjoy the benefits of cognitive radio networks (CRNs). We investigate the sum-rate maximization of IoT links in this proposed architecture. In particular, we formulate the considered problem as a three-dimensional (3-D) resource matching between the IoT links, the CR links and the available CR spectrum blocks (CSBs). Then, two approaches, i.e., Hungarian based switching iteration (HBSI) approach and minimum interference clustering based Lagrange relaxation (MICBLR) approach, are proposed to obtain the near-optimal solution. In HBSI approach, the IoT and CR links are divided into a set of IoT and CR links clusters (ICCs). Based on the partition of ICCs, the considered problem can be simplified to a maximum weight bipartite-matching problem and solved by the Hungarian algorithm. Switching iteration is then used to improve the partition of ICCs. To achieve a better tradeoff between the performance and running time, we further propose the MICBLR approach, which contains IoT links clustering according to the minimum interference rule and a Lagrange relaxation (LR) algorithm used to solve the 3-D matching problem between the clusters of IoT links, the CR links, and the available CSBs. Simulations show that the performance of the proposed approaches is close to the exhaustive search (ES) method but with a much shorter running time. Compared with the Nearest sharing based Hungarian (NSBH), Furthest sharing based Hungarian (FSBH), and Random allocation (RA) policies, the proposed approaches can averagely improve the system performance by 33.68%-38.18%.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
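A compact, hedged sketch of the metric's two ingredients, clipped (modified) n-gram precision and a brevity penalty, is given below at the sentence level; production use would rely on corpus-level BLEU with smoothing (e.g., via nltk or sacrebleu) rather than this didactic version, since unsmoothed sentence scores collapse when a higher-order n-gram never matches.

```python
import math
from collections import Counter

def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram precisions times a brevity penalty."""
    cand = candidate.split()
    refs = [r.split() for r in references]
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        max_ref = Counter()
        for ref in refs:
            ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
            for ng, c in ref_ngrams.items():
                max_ref[ng] = max(max_ref[ng], c)
        # Clip each candidate n-gram count by its maximum count in any reference.
        clipped = sum(min(c, max_ref[ng]) for ng, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        log_precisions.append(math.log(max(clipped, 1e-9) / total))
    # Brevity penalty: penalize candidates shorter than the closest reference length.
    ref_len = min((len(r) for r in refs), key=lambda rl: (abs(rl - len(cand)), rl))
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

print(bleu("the cat sat on the mat", ["the cat is sitting on the mat"]))
```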
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
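The forward pass described in the first part of that paper can be sketched in a few lines of numpy: two independent tanh RNNs scan the sequence in opposite time directions and their hidden states are concatenated per time step. The weight shapes and random initialization here are arbitrary, and training (backpropagation through time) is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

def rnn_pass(x_seq, w_xh, w_hh, reverse=False):
    """Simple tanh RNN over a sequence; optionally processes it back-to-front."""
    steps = range(len(x_seq) - 1, -1, -1) if reverse else range(len(x_seq))
    h = np.zeros(w_hh.shape[0])
    outs = [None] * len(x_seq)
    for t in steps:
        h = np.tanh(w_xh @ x_seq[t] + w_hh @ h)
        outs[t] = h
    return np.stack(outs)

def birnn_forward(x_seq, hidden=8):
    dim = x_seq.shape[1]
    w_xh_f, w_hh_f = rng.standard_normal((hidden, dim)), rng.standard_normal((hidden, hidden)) * 0.1
    w_xh_b, w_hh_b = rng.standard_normal((hidden, dim)), rng.standard_normal((hidden, hidden)) * 0.1
    fwd = rnn_pass(x_seq, w_xh_f, w_hh_f)                 # state at t summarizes the past
    bwd = rnn_pass(x_seq, w_xh_b, w_hh_b, reverse=True)   # state at t summarizes the future
    return np.concatenate([fwd, bwd], axis=1)             # (T, 2*hidden) per-step features

x = rng.standard_normal((5, 3))     # toy sequence: 5 time steps, 3 input features
print(birnn_forward(x).shape)       # (5, 16)
```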
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended to Bob, and non-repudiation of receipt evidences destined to Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with trusted third party (TTP). For the latter ones, we discuss the evolution of the TTP's involvement and, among others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb for designing the GA operators and selecting GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probabilistically. Because there are no crossover rate and mutation rate to be selected, the proposed improved GA can be applied to a problem more easily than conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional GA and other methods.
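The paper's precise trigger conditions for the conditional operators are not given in the abstract, so the sketch below uses an illustrative condition (crossover only when the selected parents actually differ, mutation otherwise) on a toy set-covering instance, with a greedy repair step to keep chromosomes feasible. It demonstrates the flavor of a conditional-operator GA rather than the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy set-covering instance: pick a cheap family of subsets covering all elements.
subsets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {5, 6}, {6, 7}, {0, 4, 7}, {1, 3, 6}]
costs = np.array([3.0, 1.0, 2.5, 1.0, 1.5, 2.0, 2.0])
universe = set().union(*subsets)

def repair(chrom):
    """Greedy repair: add the most cost-effective subsets until every element is covered."""
    covered = set().union(*(subsets[i] for i in np.flatnonzero(chrom))) if chrom.any() else set()
    while covered != universe:
        i = min((i for i in range(len(subsets)) if not chrom[i] and subsets[i] - covered),
                key=lambda i: costs[i] / len(subsets[i] - covered))
        chrom[i] = 1
        covered |= subsets[i]
    return chrom

def fitness(chrom):
    return float(costs[chrom.astype(bool)].sum())

pop = np.array([repair(rng.integers(0, 2, len(subsets))) for _ in range(30)])
best = min(pop, key=fitness).copy()
for _ in range(100):
    # Tournament selection of two parents.
    idx = rng.choice(len(pop), 4, replace=False)
    p1 = min(pop[idx[:2]], key=fitness)
    p2 = min(pop[idx[2:]], key=fitness)
    if not np.array_equal(p1, p2):
        # Conditional crossover: only applied when the parents differ (illustrative condition).
        cut = rng.integers(1, len(subsets))
        child = np.concatenate([p1[:cut], p2[cut:]])
    else:
        # Conditional mutation: only applied when crossover cannot introduce variation.
        child = p1.copy()
        child[rng.integers(len(subsets))] ^= 1
    child = repair(child)
    worst = max(range(len(pop)), key=lambda i: fitness(pop[i]))
    pop[worst] = child                      # steady-state replacement of the worst individual
    best = min(best, child, key=fitness)

print(best, fitness(best))
```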
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal of developing interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which arise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in particular, in this paper, we enfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidating it based on our comprehensive empirical experience with research in kindergarten settings, both laboratory and real-world contexts, and (iii) providing a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Virtual special issue of Computers in Human Behavior: technology-enhanced distance learning should not forget how learning happens.
A Comparative Study of Distributed Learning Environments on Learning Outcomes Advances in information and communication technologies have fueled rapid growth in the popularity of technology-supported distributed learning (DL). Many educational institutions, both academic and corporate, have undertaken initiatives that leverage the myriad of available DL technologies. Despite their rapid growth in popularity, however, alternative technologies for DL are seldom systematically evaluated for learning efficacy. Considering the increasing range of information and communication technologies available for the development of DL environments, we believe it is paramount for studies to compare the relative learning outcomes of various technologies. In this research, we employed a quasi-experimental field study approach to investigate the relative learning effectiveness of two collaborative DL environments in the context of an executive development program. We also adopted a framework of hierarchical characteristics of group support system (GSS) technologies, outlined by DeSanctis and Gallupe (1987), as the basis for characterizing the two DL environments. One DL environment employed a simple e-mail and listserv capability while the other used a sophisticated GSS (herein referred to as Beta system). Interestingly, the learning outcome of the e-mail environment was higher than the learning outcome of the more sophisticated GSS environment. The post-hoc analysis of the electronic messages indicated that the students in groups using the e-mail system exchanged a higher percentage of messages related to the learning task. The Beta system users exchanged a higher level of technology sense-making messages. No significant difference was observed in the students' satisfaction with the learning process under the two DL environments.
The Representation of Virtual Reality in Education Students' opinions about the opportunities and the implications of VR in instruction were investigated by administering a questionnaire to humanities and engineering undergraduates. The questionnaire invited participants to rate a series of statements concerning motivation and emotion, skills, cognitive styles, benefits and learning outcomes associated with the use of VR in education. The representation which emerged was internally consistent and articulated into specific dimensions. It was not affected by gender, by the previous use of VR software, or by the knowledge of the main topics concerning the introduction of IT in instruction. Also the direct participation in a training session based on an immersive VR experience did not influence such a representation, which was partially modulated by the kind of course attended by students.
e-Learning, online learning, and distance learning environments: Are they the same? It is not uncommon that researchers face difficulties when performing meaningful cross-study comparisons for research. Research associated with the distance learning realm can be even more difficult to use as there are different environments with a variety of characteristics. We implemented a mixed-method analysis of research articles to find out how they define the learning environment. In addition, we surveyed 43 persons and discovered that there was inconsistent use of terminology for different types of delivery modes. The results reveal that there are different expectations and perceptions of learning environment labels: distance learning, e-Learning, and online learning.
Isolation And Distinctiveness In The Design Of E-Learning Systems Influence User Preferences When faced with excessive detail in an online environment, typical users have difficulty processing all the elements of representation. This in turn creates cognitive overload, which narrows the user's focus to a few select items. In the context of e-learning, we translated this aspect as the learner's demand for a system that facilitates the retrieval of learning content - one in which the representation is easy to read and understand. We hypothesized that the representation of content in an e-learning system's design is an important antecedent for learner preferences. The aspects of isolation and distinctiveness were incorporated into the design of e-learning representation as an attempt to promote student cognition. Following its development, the model was empirically validated by conducting a survey of 300 university students. We found that isolation and distinctiveness in the design elements appeared to facilitate the ability of students to read and remember online learning content. This in turn was found to drive user preferences for using e-learning systems. The findings provide designers with managerial insights for enticing learners to continue using e-learning systems.
Motivational and cognitive benefits of training in immersive virtual reality based on multiple assessments The main objective of this study was to examine the effectiveness of immersive virtual reality (VR) as a medium for delivering laboratory safety training. We specifically compare an immersive VR simulation, a desktop VR simulation, and a conventional safety manual. The sample included 105 first year undergraduate engineering students (56 females). We include five types of learning outcomes including post-test enjoyment ratings; pre- to post-test changes in intrinsic motivation and self-efficacy; a post-test multiple choice retention test; and two behavioral transfer tests. Results indicated that the groups did not differ on the immediate retention test, suggesting that all three media were equivalent in conveying the basic knowledge. However, significant differences were observed favoring the immersive VR group compared to the text group on the two transfer tests involving solving problems in a physical lab setting (d = 0.54, d = 0.57), as well as enjoyment (d = 1.44) and increases in intrinsic motivation (d = 0.69) and self-efficacy (d = 0.60). The desktop VR group scored significantly higher than the text group on one transfer test (d = 0.63) but not the other (d = 0.11), as well as enjoyment (d = 1.11) and intrinsic motivation (d = 0.83).
Virtual and augmented reality effects on K-12, higher and tertiary education students’ twenty-first century skills The purpose of this review article is to present state-of-the-art approaches and examples of virtual reality/augmented reality (VR/AR) systems, applications and experiences which improve student learning and the generalization of skills to the real world. Thus, we provide a brief, representative and non-exhaustive review of the current research studies, in order to examine the effects, as well as the impact of VR/AR technologies on K-12, higher and tertiary education students’ twenty-first century skills and their overall learning. According to the literature, there are promising results indicating that VR/AR environments improve learning outcomes and present numerous advantages of investing time and financial resources in K-12, higher and tertiary educational settings. Technological tools such as VR/AR improve digital-age literacy, creative thinking, communication, collaboration and problem solving ability, which constitute the so-called twenty-first century skills, necessary to transform information rather than just receive it. VR/AR enhances traditional curricula in order to enable diverse learning needs of students. Research and development relative to VR/AR technology is focused on a whole ecosystem around smart phones, including applications and educational content, games and social networks, creating immersive three-dimensional spatial experiences addressing new ways of human–computer interaction. Raising the level of engagement, promoting self-learning, enabling multi-sensory learning, enhancing spatial ability, confidence and enjoyment, promoting student-centered technology, combination of virtual and real objects in a real setting and decreasing cognitive load are some of the pedagogical advantages discussed. Additionally, implications of a growing VR/AR industry investment in educational sector are provided. It can be concluded that despite the fact that there are various barriers and challenges in front of the adoption of virtual reality on educational practices, VR/AR applications provide an effective tool to enhance learning and memory, as they provide immersed multimodal environments enriched by multiple sensory features.
Constrained Kalman filtering for indoor localization of transport vehicles using floor-installed HF RFID transponders Localization of transport vehicles is an important issue for many intralogistics applications. The paper presents an inexpensive solution for indoor localization of vehicles. Global localization is realized by detection of RFID transponders, which are integrated in the floor. The paper presents a novel algorithm for fusing RFID readings with odometry using Constraint Kalman filtering. The paper presents experimental results with a Mecanum based omnidirectional vehicle on a NaviFloor® installation, which includes passive HF RFID transponders. The experiments show that the proposed Constraint Kalman filter provides a similar localization accuracy compared to a Particle filter but with much lower computational expense.
Reliable Computation Offloading for Edge-Computing-Enabled Software-Defined IoV Internet of Vehicles (IoV) has drawn great interest in recent years. Various IoV applications have emerged for improving the safety, efficiency, and comfort on the road. Cloud computing constitutes a popular technique for supporting delay-tolerant entertainment applications. However, for advanced latency-sensitive applications (e.g., auto/assisted driving and emergency failure management), cloud computing may result in excessive delay. Edge computing, which extends computing and storage capabilities to the edge of the network, emerges as an attractive technology. Therefore, to support these computationally intensive and latency-sensitive applications in IoVs, in this article, we integrate mobile-edge computing nodes (i.e., mobile vehicles) and fixed edge computing nodes (i.e., fixed road infrastructures) to provide low-latency computing services cooperatively. For better exploiting these heterogeneous edge computing resources, the concept of software-defined networking (SDN) and edge-computing-aided IoV (EC-SDIoV) is conceived. Moreover, in a complex and dynamic IoV environment, the outage of both processing nodes and communication links becomes inevitable, which may have life-threatening consequences. To ensure that latency-sensitive IoV services complete with high reliability, we introduce both partial computation offloading and reliable task allocation with a reprocessing mechanism to EC-SDIoV. Since the optimization problem is nonconvex and NP-hard, a heuristic fault-tolerant particle swarm optimization algorithm for maximizing reliability (FPSO-MR) is designed under latency constraints. Performance evaluation results validate that the proposed scheme is indeed capable of reducing the latency as well as improving the reliability of the EC-SDIoV.
Trust in Automation: Designing for Appropriate Reliance. Automation is often problematic because people fail to rely upon it appropriately. Because people respond to technology socially, trust influences reliance on automation. In particular, trust guides reliance when complexity and unanticipated situations make a complete understanding of the automation impractical. This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives. It considers how the context, automation characteristics, and cognitive processes affect the appropriateness of trust. The context in which the automation is used influences automation performance and provides a goal-oriented perspective to assess automation characteristics along a dimension of attributional abstraction. These characteristics can influence trust through analytic, analogical, and affective processes. The challenges of extrapolating the concept of trust in people to trust in automation are discussed. A conceptual model integrates research regarding trust in automation and describes the dynamics of trust, the role of context, and the influence of display characteristics. Actual or potential applications of this research include improved designs of systems that require people to manage imperfect automation.
A Model for Understanding How Virtual Reality Aids Complex Conceptual Learning Designers and evaluators of immersive virtual reality systems have many ideas concerning how virtual reality can facilitate learning. However, we have little information concerning which of virtual reality's features provide the most leverage for enhancing understanding or how to customize those affordances for different learning environments. In part, this reflects the truly complex nature of learning. Features of a learning environment do not act in isolation; other factors such as the concepts or skills to be learned, individual characteristics, the learning experience, and the interaction experience all play a role in shaping the learning process and its outcomes. Through Project Science Space, we have been trying to identify, use, and evaluate immersive virtual reality's affordances as a means to facilitate the mastery of complex, abstract concepts. In doing so, we are beginning to understand the interplay between virtual reality's features and other important factors in shaping the learning process and learning outcomes for this type of material. In this paper, we present a general model that describes how we think these factors work together and discuss some of the lessons we are learning about virtual reality's affordances in the context of this model for complex conceptual learning.
Cost-Effective Authentic and Anonymous Data Sharing with Forward Security Data sharing has never been easier with the advances of cloud computing, and an accurate analysis on the shared data provides an array of benefits to both the society and individuals. Data sharing with a large number of participants must take into account several issues, including efficiency, data integrity and privacy of data owner. Ring signature is a promising candidate to construct an anonymous and authentic data sharing system. It allows a data owner to anonymously authenticate his data which can be put into the cloud for storage or analysis purpose. Yet the costly certificate verification in the traditional public key infrastructure (PKI) setting becomes a bottleneck for this solution to be scalable. Identity-based (ID-based) ring signature, which eliminates the process of certificate verification, can be used instead. In this paper, we further enhance the security of ID-based ring signature by providing forward security: If a secret key of any user has been compromised, all previous generated signatures that include this user still remain valid. This property is especially important to any large scale data sharing system, as it is impossible to ask all data owners to reauthenticate their data even if a secret key of one single user has been compromised. We provide a concrete and efficient instantiation of our scheme, prove its security and provide an implementation to show its practicality.
Distributed Kalman consensus filter with event-triggered communication: Formulation and stability analysis. • The problem of distributed state estimation in sensor networks with event-triggered communication schedules on both the sensor-to-estimator channel and the estimator-to-estimator channel is studied. • An event-triggered KCF is designed by deriving the optimal Kalman gain matrix which minimizes the mean squared error. • A computationally scalable form of the proposed filter is presented via some approximations. • An appropriate choice of the consensus gain matrix is provided to ensure the stochastic stability of the proposed filter.
Learning Feature Recovery Transformer for Occluded Person Re-Identification One major issue that challenges person re-identification (Re-ID) is the ubiquitous occlusion over the captured persons. There are two main challenges for the occluded person Re-ID problem, i.e., the interference of noise during feature matching and the loss of pedestrian information brought by the occlusions. In this paper, we propose a new approach called Feature Recovery Transformer (FRT) to address the two challenges simultaneously, which mainly consists of visibility graph matching and feature recovery transformer. To reduce the interference of the noise during feature matching, we mainly focus on visible regions that appear in both images and develop a visibility graph to calculate the similarity. In terms of the second challenge, based on the developed graph similarity, for each query image, we propose a recovery transformer that exploits the feature sets of its k-nearest neighbors in the gallery to recover the complete features. Extensive experiments across different person Re-ID datasets, including occluded, partial and holistic datasets, demonstrate the effectiveness of FRT. Specifically, FRT significantly outperforms state-of-the-art results by at least 6.2% Rank-1 accuracy and 7.2% mAP scores on the challenging Occluded-Duke dataset.
1.2
0.2
0.2
0.2
0.2
0.2
0.066667
0
0
0
0
0
0
0
A Collaborative V2X Data Correction Method for Road Safety Driving safety is one of the most important concerns on the road. Vehicles constantly generate messages under vehicle-to-everything (V2X) assisted driving. Especially in dense urban environments, the massive messages carrying precise data can help us to improve road safety. However, vehicles do not always provide accurate data, for a variety of reasons such as defective vehicle sensors or selfish behavior. It is critical to check and analyze the data supplied by vehicles in real time and correct possible errors to eliminate safety issues. In this article, we introduce a cOllaborative vehiClE dAta correctioN method (OCEAN) based on rationality and Q-learning techniques to correct erroneous V2X data for ensuring the driving safety of vehicles on the road, which can be deployed on both vehicles and road side units. Extensive experimental results show that OCEAN can detect up to 80% of erroneous V2X data and cut down the average error distance by 60% for most attributes in vehicle data.
Real-time Localization in Outdoor Environments using Stereo Vision and Inexpensive GPS We describe a real-time, low-cost system to localize a mobile robot in outdoor environments. Our system relies on stereo vision to robustly estimate frame-to-frame motion in real time (also known as visual odometry). The motion estimation problem is formulated efficiently in the disparity space and results in accurate and robust estimates of the motion even for a small-baseline configuration. Our system uses inertial measurements to fill in motion estimates when visual odometry fails. This incremental motion is then fused with a low-cost GPS sensor using a Kalman Filter to prevent long-term drifts. Experimental results are presented for outdoor localization in moderately sized environments (≥ 100 meters).
Vision based robot localization by ground to satellite matching in GPS-denied situations This paper studies the problem of matching images captured from an unmanned ground vehicle (UGV) to those from a satellite or high-flying vehicle. We focus on situations where the UGV navigates in remote areas with few man-made structures. This is a difficult problem due to the drastic change in perspective between the ground and aerial imagery and the lack of environmental features for image comparison. We do not rely on GPS, which may be jammed or uncertain. We propose a two-step approach: (1) the UGV images are warped to obtain a bird's eye view of the ground, and (2) this view is compared to a grid of satellite locations using whole-image descriptors. We analyze the performance of a variety of descriptors for different satellite map sizes and various terrain and environment types. We incorporate the air-ground matching into a particle-filter framework for localization using the best-performing descriptor. The results show that vision-based UGV localization from satellite maps is not only possible, but often provides better position estimates than GPS estimates, enabling us to improve the location estimates of Google Street View.
Federated Learning in Vehicular Networks: Opportunities and Solutions The emerging advances in personal devices and privacy concerns have given rise to the concept of Federated Learning. Federated Learning proves its effectiveness and privacy preservation through collaborative local training and updating a shared machine learning model while protecting the individual data-sets. This article investigates a new type of vehicular network concept, namely a Federated Vehicular Network (FVN), which can be viewed as a robust distributed vehicular network. Compared to traditional vehicular networks, an FVN has centralized components and utilizes both DSRC and mmWave communication to achieve more scalable and stable performance. As a result, FVN can be used to support data-/computation-intensive applications such as distributed machine learning and Federated Learning. The article first outlines the enabling technologies of FVN. Then, we briefly discuss the high-level architecture of FVN and explain why such an architecture is adequate for Federated Learning. In addition, we use auxiliary Blockchain-based systems to facilitate transactions and mitigate malicious behaviors. Next, we discuss in detail one key component of FVN, a federated vehicular cloud (FVC), that is used for sharing data and models in FVN. In particular, we focus on the routing inside FVCs and present our solutions and preliminary evaluation results. Finally, we point out open problems and future research directions of this disruptive technology.
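As a loose illustration of the "collaborative local training and updating a shared machine learning model" that Federated Learning refers to, the sketch below shows a FedAvg-style aggregation step in Python. It is not taken from the article, which describes FVN at the architecture level; the layer structure and the size-weighted averaging rule are standard FedAvg assumptions.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: average each layer's parameters across clients,
    weighting every client by its local data-set size."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [
        sum((n / total) * weights[layer] for weights, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# Toy example: three vehicles hold locally trained parameters of a 2-layer model.
rng = np.random.default_rng(0)
clients = [[rng.standard_normal((3, 2)), rng.standard_normal(2)] for _ in range(3)]
sizes = [120, 80, 200]  # number of local samples per vehicle (illustrative)
global_model = federated_average(clients, sizes)
print([w.shape for w in global_model])
```

In a vehicular setting the aggregation would typically run at a road-side or cloud component, with each vehicle only uploading parameters rather than raw data.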
Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions Although convolutional neural networks (CNNs) have achieved great success in computer vision, this work investigates a simpler, convolution-free backbone network useful for many dense prediction tasks. Unlike the recently-proposed Vision Transformer (ViT) that was designed for image classification specifically, we introduce the Pyramid Vision Transformer (PVT), which overcomes the difficulties of...
BDFL: A Byzantine-Fault-Tolerance Decentralized Federated Learning Method for Autonomous Vehicle Autonomous Vehicles (AVs) take advantage of Machine Learning (ML) for yielding improved experiences of self-driving. However, large-scale collection of AVs’ data for training will inevitably result in a privacy leakage problem. Federated Learning (FL) is...
Robust Camera Pose Estimation for Unordered Road Scene Images in Varying Viewing Conditions For continuous performance optimization of camera sensor systems in automated driving, training data from rare corner cases occurring in series production cars are required. In this article, we propose collaborative acquisition of camera images via connected car fleets for synthesis of image sequences from arbitrary road sections which are challenging for machine vision. While allowing a scalable hardware architecture inside the cars, this concept demands to reconstruct the recording locations of the individual images aggregated in the back-end. Varying environmental conditions, dynamic scenes, and small numbers of significant landmarks may hamper camera pose estimation through sparse reconstruction from unordered road scene images. Tackling those problems, we extend a state-of-the-art Structure from Motion pipeline by selecting keypoints based on a semantic image segmentation and removing GPS outliers. We present three challenging image datasets recorded on repetitive test drives under differing environmental conditions for evaluation of our method. The results demonstrate that our optimized pipeline is able to reconstruct the camera viewpoints robustly in the majority of road scenes observed while preserving high image registration rates. Reducing the median deviation from GPS measurements by over 48% for car fleet images, the method increases the accuracy of camera poses dramatically.
A standalone RFID Indoor Positioning System Using Passive Tags Indoor positioning systems (IPSs) locate objects in closed structures such as office buildings, hospitals, stores, factories, and warehouses, where Global Positioning System devices generally do not work. Most available systems apply wireless concepts, optical tracking, and/or ultrasound. This paper presents a standalone IPS using radio frequency identification (RFID) technology. The concept is ba...
Dyme: Dynamic Microservice Scheduling in Edge Computing Enabled IoT In recent years, the rapid development of mobile edge computing (MEC) has provided an efficient execution platform at the edge for Internet-of-Things (IoT) applications. The MEC also provides optimal resources to different microservices; however, underlying network conditions and infrastructures inherently affect the execution process in MEC. Therefore, in the presence of varying network conditions, it is necessary to optimally execute the available tasks of end users while maximizing the energy efficiency of the edge platform and providing fair Quality-of-Service (QoS). On the other hand, it is necessary to schedule the microservices dynamically to minimize the total network delay and network price. Thus, in this article, unlike most of the existing works, we propose a dynamic microservice scheduling scheme for MEC. We design the microservice scheduling framework mathematically and also discuss the computational complexity of the scheduling algorithm. Extensive simulation results show that the microservice scheduling framework significantly improves the performance metrics in terms of total network delay, average price, satisfaction level, energy consumption rate (ECR), failure rate, and network throughput over other existing baselines.
Reciprocal N-body Collision Avoidance In this paper, we present a formal approach to reciprocal n-body collision avoidance, where multiple mobile robots need to avoid collisions with each other while moving in a common workspace. In our formulation, each robot acts fully independently, and does not communicate with other robots. Based on the definition of velocity obstacles (5), we derive sufficient conditions for collision-free motion by reducing the problem to solving a low-dimensional linear program. We test our approach on several dense and complex simulation scenarios involving thousands of robots and compute collision-free actions for all of them in only a few milliseconds. To the best of our knowledge, this method is the first that can guarantee local collision-free motion for a large number of robots in a cluttered workspace.
RFID-based techniques for human-activity detection The iBracelet and the Wireless Identification and Sensing Platform promise the ability to infer human activity directly from sensor readings.
RECIFE-MILP: An Effective MILP-Based Heuristic for the Real-Time Railway Traffic Management Problem The real-time railway traffic management problem consists of selecting appropriate train routes and schedules for minimizing the propagation of delay in case of traffic perturbation. In this paper, we tackle this problem by introducing RECIFE-MILP, a heuristic algorithm based on a mixed-integer linear programming model. RECIFE-MILP uses a model that extends one we previously proposed by including additional elements characterizing railway reality. In addition, it implements performance boosting methods selected among several ones through an algorithm configuration tool. We present a thorough experimental analysis that shows that the performances of RECIFE-MILP are better than the ones of the currently implemented traffic management strategy. RECIFE-MILP often finds the optimal solution to instances within the short computation time available in real-time applications. Moreover, RECIFE-MILP is robust to its configuration if an appropriate selection of the combination of boosting methods is performed.
A Covert Channel Over VoLTE via Adjusting Silence Periods. Covert channels represent unforeseen communication methods that exploit authorized overt communication as the carrier medium for covert messages. Covert channels can be a secure and effective means of transmitting confidential information hidden in overt traffic. For a covert timing channel, the covert message is usually modulated into the inter-packet delays (IPDs) of legitimate traffic, which is not suitable for voice over LTE (VoLTE), since the IPDs of VoLTE traffic are fixed and thus cannot be modulated. For this reason, we propose a covert channel via adjusting silence periods, which modulates the covert message by postponing or extending silence periods in VoLTE traffic. To maintain robustness, we employ the Gray code to encode the covert message to reduce the impact of packet loss. Moreover, the proposed covert channel enables a tradeoff between robustness and voice quality, which is an important performance indicator for VoLTE. The experiment results show that the proposed covert channel is undetectable by statistical tests and outperforms the other covert channels based on IPDs in terms of robustness.
Pricing-Based Channel Selection for D2D Content Sharing in Dynamic Environment In order to make device-to-device (D2D) content sharing give full play to its advantage of improving local area services, one of the important issues is to decide the channels that D2D pairs occupy. Most existing works study this issue in static environment, and ignore the guidance for D2D pairs to select the channel adaptively. In this paper, we investigate this issue in dynamic environment where...
1.2
0.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
Extended Kalman filtering with stochastic nonlinearities and multiple missing measurements In this paper, the extended Kalman filtering problem is investigated for a class of nonlinear systems with multiple missing measurements over a finite horizon. Both deterministic and stochastic nonlinearities are included in the system model, where the stochastic nonlinearities are described by statistical means that could reflect the multiplicative stochastic disturbances. The phenomenon of measurement missing occurs in a random way and the missing probability for each sensor is governed by an individual random variable satisfying a certain probability distribution over the interval [0,1]. Such a probability distribution is allowed to be any commonly used distribution over the interval [0,1] with known conditional probability. The aim of the addressed filtering problem is to design a filter such that, in the presence of both the stochastic nonlinearities and multiple missing measurements, there exists an upper bound for the filtering error covariance. Subsequently, such an upper bound is minimized by properly designing the filter gain at each sampling instant. It is shown that the desired filter can be obtained in terms of the solutions to two Riccati-like difference equations that are of a form suitable for recursive computation in online applications. An illustrative example is given to demonstrate the effectiveness of the proposed filter design scheme.
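The core mechanism, skipping the measurement update whenever a sensor reading is missing, can be illustrated with an ordinary linear Kalman filter and a Bernoulli arrival process. The sketch below (a 1-D constant-velocity system with a single fixed arrival probability) is purely illustrative and much simpler than the paper's setting, which handles stochastic nonlinearities and per-sensor missing probabilities with arbitrary distributions on [0, 1].

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 1-D constant-velocity model (an assumption, not the paper's system).
A = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
C = np.array([[1.0, 0.0]])               # position measurement
Q = 0.01 * np.eye(2)                      # process noise covariance
R = np.array([[0.25]])                    # measurement noise covariance
p_receive = 0.7                           # probability a measurement actually arrives

x_true = np.array([0.0, 1.0])
x_est = np.zeros(2)
P = np.eye(2)

for k in range(50):
    # Simulate the plant and a possibly missing measurement.
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    received = rng.random() < p_receive
    y = C @ x_true + rng.multivariate_normal(np.zeros(1), R) if received else None

    # Time update (always performed).
    x_est = A @ x_est
    P = A @ P @ A.T + Q

    # Measurement update only when the measurement arrives.
    if y is not None:
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x_est = x_est + K @ (y - C @ x_est)
        P = (np.eye(2) - K @ C) @ P

print("final position error:", abs(x_true[0] - x_est[0]))
```

The paper's contribution goes further: it bounds the error covariance despite the stochastic nonlinearities and chooses the gain from Riccati-like recursions rather than the standard update shown here.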
Self-triggered coordination of robotic networks for optimal deployment This paper studies a deployment problem for a group of robots where individual agents operate with outdated information about each other's locations. Our objective is to understand to what extent outdated information is still useful and at which point it becomes essential to obtain new, up-to-date information. We propose a self-triggered coordination algorithm based on spatial partitioning techniques with uncertain information. We analyze its correctness in synchronous and asynchronous scenarios, and establish the same convergence guarantees that a synchronous algorithm with perfect information at all times would achieve. The technical approach combines computational geometry, set-valued stability analysis, and event-based systems.
Robust Control for Mobility and Wireless Communication in Cyber-Physical Systems With Application to Robot Teams. In this paper, a system architecture to provide end-to-end network connectivity for autonomous teams of robots is discussed. The core of the proposed system is a cyber-physical controller whose goal is to ensure network connectivity as robots move to accomplish their assigned tasks. Due to channel quality uncertainties inherent to wireless propagation, we adopt a stochastic model where achievable ...
Consensus Control Under Communication Delay in a Three-Robot System: Design and Experiments A consensus-control protocol is designed and implemented here on a three-robot system arranged on a horizontal platform, in which a camera system is used to track robot positions, and a personal computer broadcasts commands to the robots based on this protocol via a Bluetooth connection, where such commands are affected by time delays. The design involves some salient features of this protocol based on a graph-based approach, an input–output linearization scheme, and addressing uncertainties in the control problem. By implementing this design on experiments, we show that consensus of the robots can be successfully achieved, and their speed of reaching consensus can be systematically improved. Experimental results also strongly agree with those obtained from nonlinear simulations.
Cooperative Robot Localization Using Event-Triggered Estimation This paper describes a novel communication-spare cooperative localization algorithm for a team of mobile unmanned robotic vehicles. Exploiting an event-based estimation paradigm, robots only send measurements to neighbors when the expected innovation for state estimation is high. Because agents know the event-triggering condition for measurements to be sent, the lack of a measurement is thus also informative and fused into state estimates. The robots use a covariance intersection mechanism to occasionally synchronize their local estimates of the full network state. In addition, heuristic balancing dynamics on the robots' covariance-intersection-triggering thresholds ensure that, in large-diameter networks, the local error covariances remain below desired bounds across the network. Simulations on both linear and nonlinear dynamics/measurement models show that the event-triggering approach achieves nearly optimal state estimation performance in a wide range of operating conditions, even when using only a fraction of the communication cost required by conventional full data sharing. The robustness of the proposed approach to lossy communications as well as the relationship between network topology and covariance-intersection-based synchronization requirements are also examined.
Communication in reactive multiagent robotic systems Multiple cooperating robots are able to complete many tasks more quickly and reliably than one robot alone. Communication between the robots can multiply their capabilities and effectiveness, but to what extent? In this research, the importance of communication in robotic societies is investigated through experiments on both simulated and real robots. Performance was measured for three different types of communication for three different tasks. The levels of communication are progressively more complex and potentially more expensive to implement. For some tasks, communication can significantly improve performance, but for others inter-agent communication is apparently unnecessary. In cases where communication helps, the lowest level of communication is almost as effective as the more complex type. The bulk of these results are derived from thousands of simulations run with randomly generated initial conditions. The simulation results help determine appropriate parameters for the reactive control system which was ported for tests on Denning mobile robots.
State resetting for bumpless switching in supervisory control In this paper the realization and implementation of a multi-controller scheme made of a finite set of linear single-input-single-output controllers, possibly having different state dimensions, is studied. The supervisory control framework is considered, namely a minimal parameter dependent realization of the set of controllers such that all controllers share the same state space is used. A specific state resetting strategy based on the behavioral approach to system theory is developed in order to master the transient upon controller switching.
Completely derandomized self-adaptation in evolution strategies. This paper puts forward two useful methods for self-adaptation of the mutation distribution - the concepts of derandomization and cumulation. Principal shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary, normal mutation distributions is equivalent to applying a general, linear problem encoding. The underlying objective of mutative strategy parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigorously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation - utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions usually a speed up factor of several orders of magnitude is observed. On moderately mis-scaled functions a speed up factor of three to ten can be expected.
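A heavily simplified sketch of the underlying idea, reinforcing previously selected mutation steps in the covariance of the mutation distribution, is given below. It uses only a rank-one-style update on a badly scaled quadratic and omits the evolution path (cumulation) and step-size control that the full CMA scheme relies on; all constants, the test function, and the jitter term are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def badly_scaled_quadratic(x):
    # Badly scaled quadratic used as an illustrative test function (assumption).
    scales = np.logspace(0, 3, x.size)
    return float(np.sum((scales * x) ** 2))

dim, n_offspring, n_parents = 5, 12, 3
mean = rng.standard_normal(dim)
sigma = 0.5                 # global step size, kept fixed here for simplicity
C = np.eye(dim)             # mutation covariance to be adapted
learning_rate = 0.2         # rank-one update rate (illustrative value)

for generation in range(100):
    # Sample offspring around the current mean with the adapted covariance.
    steps = [rng.multivariate_normal(np.zeros(dim), sigma**2 * C) for _ in range(n_offspring)]
    offspring = [mean + s for s in steps]
    order = np.argsort([badly_scaled_quadratic(x) for x in offspring])
    selected_steps = [steps[i] for i in order[:n_parents]]

    # Recombine the best offspring and reinforce their mutation direction in C:
    # the core idea of adapting the mutation distribution from selected steps.
    avg_step = np.mean(selected_steps, axis=0)
    mean = mean + avg_step
    C = ((1 - learning_rate) * C
         + learning_rate * np.outer(avg_step, avg_step) / sigma**2
         + 1e-10 * np.eye(dim))  # small jitter keeps C numerically positive definite

print("objective value reached:", badly_scaled_quadratic(mean))
```

The full CMA-ES additionally accumulates an evolution path over generations and adapts the global step size, which is what gives the large speed-ups on badly scaled, non-separable functions reported in the abstract.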
Constraint-handling in nature-inspired numerical optimization: Past, present and future. In their original versions, nature-inspired search algorithms such as evolutionary algorithms and those based on swarm intelligence, lack a mechanism to deal with the constraints of a numerical optimization problem. Nowadays, however, there exists a considerable amount of research devoted to design techniques for handling constraints within a nature-inspired algorithm. This paper presents an analysis of the most relevant types of constraint-handling techniques that have been adopted with nature-inspired algorithms. From them, the most popular approaches are analyzed in more detail. For each of them, some representative instantiations are further discussed. In the last part of the paper, some of the future trends in the area, which have been only scarcely explored, are briefly discussed and then the conclusions of this paper are presented.
Simultaneous localization and mapping: part I The simultaneous localization and mapping (SLAM) problem asks if it is possible for a mobile robot to be placed at an unknown location in an unknown environment and for the robot to incrementally build a consistent map of this environment while simultaneously determining its location within this map. A solution to the SLAM problem has been seen as a "holy grail" for the mobile robotics community as it would provide the means to make a robot truly autonomous. The "solution" of the SLAM problem has been one of the notable successes of the robotics community over the past decade. SLAM has been formulated and solved as a theoretical problem in a number of different forms. SLAM has also been implemented in a number of different domains from indoor robots to outdoor, underwater, and airborne systems. At a theoretical and conceptual level, SLAM can now be considered a solved problem. However, substantial issues remain in practically realizing more general SLAM solutions and notably in building and using perceptually rich maps as part of a SLAM algorithm. This two-part tutorial and survey of SLAM aims to provide a broad introduction to this rapidly growing field. Part I (this article) begins by providing a brief history of early developments in SLAM. The formulation section introduces the structure of the SLAM problem in now-standard Bayesian form, and explains the evolution of the SLAM process. The solution section describes the two key computational solutions to the SLAM problem through the use of the extended Kalman filter (EKF-SLAM) and through the use of Rao-Blackwellized particle filters (FastSLAM). Other recent solutions to the SLAM problem are discussed in Part II of this tutorial. The application section describes a number of important real-world implementations of SLAM and also highlights implementations where the sensor data and software are freely downloadable for other researchers to study. Part II of this tutorial describes major issues in computation, convergence, and data association in SLAM. These are subjects that have been the main focus of the SLAM research community over the past five years.
Design and simulation of a joint-coupled orthosis for regulating FES-aided gait A hybrid functional electrical stimulation (FES)/orthosis system is being developed which combines two channels of (surface-electrode-based) electrical stimulation with a computer-controlled orthosis for the purpose of restoring gait to spinal cord injured (SCI) individuals (albeit with a stability aid, such as a walker). The orthosis is an energetically passive, controllable device which 1) unidirectionally couples hip to knee flexion; 2) aids hip and knee flexion with a spring assist; and 3) incorporates sensors and modulated friction brakes, which are used in conjunction with electrical stimulation for the feedback control of joint (and therefore limb) trajectories. This paper describes the hybrid FES approach and the design of the joint coupled orthosis. A dynamic simulation of an SCI individual using the hybrid approach is described, and results from the simulation are presented that indicate the promise of the JCO approach.
Segmentation-Based Image Copy-Move Forgery Detection Scheme In this paper, we propose a scheme to detect the copy-move forgery in an image, mainly by extracting the keypoints for comparison. The main difference to the traditional methods is that the proposed scheme first segments the test image into semantically independent patches prior to keypoint extraction. As a result, the copy-move regions can be detected by matching between these patches. The matching process consists of two stages. In the first stage, we find the suspicious pairs of patches that may contain copy-move forgery regions, and we roughly estimate an affine transform matrix. In the second stage, an Expectation-Maximization-based algorithm is designed to refine the estimated matrix and to confirm the existence of copy-move forgery. Experimental results prove the good performance of the proposed scheme via comparing it with the state-of-the-art schemes on the public databases.
Robust Sparse Linear Discriminant Analysis Linear discriminant analysis (LDA) is a very popular supervised feature extraction method and has been extended to different variants. However, classical LDA has the following problems: 1) The obtained discriminant projection does not have good interpretability for features. 2) LDA is sensitive to noise. 3) LDA is sensitive to the selection of number of projection directions. In this paper, a novel feature extraction method called robust sparse linear discriminant analysis (RSLDA) is proposed to solve the above problems. Specifically, RSLDA adaptively selects the most discriminative features for discriminant analysis by introducing the l2,1 norm. An orthogonal matrix and a sparse matrix are also simultaneously introduced to guarantee that the extracted features can hold the main energy of the original data and enhance the robustness to noise, and thus RSLDA has the potential to perform better than other discriminant methods. Extensive experiments on six databases demonstrate that the proposed method achieves the competitive performance compared with other state-of-the-art feature extraction methods. Moreover, the proposed method is robust to the noisy data.
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal of developing interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which arise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in particular, in this paper, we enfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidating it based on our comprehensive empirical experience with research in kindergarten settings, both laboratory and real-world contexts, and (iii) providing a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
1.12
0.12
0.12
0.12
0.12
0.02
0.0025
0
0
0
0
0
0
0
Fitness-Distance Balance with Functional Weights: A New Selection Method for Evolutionary Algorithms In 2019, a new selection method, named fitness-distance balance (FDB), was proposed. FDB has been proved to have a significant effect on improving the search capability for evolutionary algorithms. But it still suffers from poor flexibility when encountering various optimization problems. To address this issue, we propose a functional weights-enhanced FDB (FW). These functional weights change the original weights in FDB from fixed values to randomly generated ones by a distribution function, thereby enabling the algorithm to select more suitable individuals during the search. As a case study, FW is incorporated into the spherical search algorithm. Experimental results based on various IEEE CEC2017 benchmark functions demonstrate the effectiveness of FW.
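One possible reading of the selection rule is sketched below: each candidate is scored by a weighted sum of its normalized fitness and its normalized distance to the current best solution, and the functional-weights variant replaces the fixed FDB weights with ones drawn from a distribution function. The exact normalization, the weight range, and the sampling distribution used here are assumptions for illustration, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(3)

def fdb_select(population, fitnesses, weight_sampler):
    """Fitness-distance balance style selection (minimization assumed).

    Scores each candidate by a weighted sum of its normalized fitness and its
    normalized Euclidean distance to the current best solution, and returns the
    index with the highest score.  In plain FDB the weights are fixed; here they
    are drawn from `weight_sampler`, mimicking the functional-weights idea.
    """
    fitnesses = np.asarray(fitnesses, dtype=float)
    best = population[int(np.argmin(fitnesses))]
    distances = np.linalg.norm(population - best, axis=1)

    norm_fit = 1.0 - (fitnesses - fitnesses.min()) / (np.ptp(fitnesses) + 1e-12)
    norm_dist = (distances - distances.min()) / (np.ptp(distances) + 1e-12)

    w = weight_sampler()                      # functional weight in [0, 1]
    scores = w * norm_fit + (1.0 - w) * norm_dist
    return int(np.argmax(scores))

# Illustrative use: 20 random 4-dimensional candidates on a sphere function.
population = rng.uniform(-5, 5, size=(20, 4))
fitnesses = np.sum(population ** 2, axis=1)
chosen = fdb_select(population, fitnesses, weight_sampler=lambda: rng.uniform(0.4, 0.8))
print("selected individual index:", chosen)
```

Randomizing the weight trades off exploitation (fitness-dominated selection) against exploration (distance-dominated selection) from call to call, which is the flexibility the abstract argues fixed weights lack.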
A study on the use of statistical tests for experimentation with neural networks: Analysis of parametric test conditions and non-parametric tests In this paper, we focus on the experimental analysis of the performance of artificial neural networks with the use of statistical tests on the classification task. Particularly, we have studied whether the sample of results from multiple trials obtained by conventional artificial neural networks and support vector machines satisfies the necessary conditions for being analyzed through parametric tests. The study is conducted by considering three possibilities on classification experiments: random variation in the selection of test data, the selection of training data and internal randomness in the learning algorithm. The results obtained show that the fulfillment of these conditions is problem-dependent and indefinite, which justifies the need for using non-parametric statistics in the experimental analysis.
A Multi-Layered Immune System For Graph Planarization Problem This paper presents a new multi-layered artificial immune system architecture using the ideas generated from the biological immune system for solving combinatorial optimization problems. The proposed methodology is composed of five layers. After expressing the problem in a suitable representation in the first layer, the search space and the features of the problem are estimated and extracted in the second and third layers, respectively. By taking advantage of the minimized search space from estimation and the heuristic information from extraction, the antibodies (or solutions) are evolved in the fourth layer and finally the fittest antibody is exported. In order to demonstrate the efficiency of the proposed system, the graph planarization problem is tested. Simulation results based on several benchmark instances show that the proposed algorithm performs better than traditional algorithms.
From evolutionary computation to the evolution of things Evolution has provided a source of inspiration for algorithm designers since the birth of computers. The resulting field, evolutionary computation, has been successful in solving engineering tasks ranging in outlook from the molecular to the astronomical. Today, the field is entering a new phase as evolutionary algorithms that take place in hardware are developed, opening up new avenues towards autonomous machines that can adapt to their environment. We discuss how evolutionary computation compares with natural evolution and what its benefits are relative to other computing approaches, and we introduce the emerging area of artificial evolution in physical systems.
Implementing a GPU-based parallel MAX-MIN Ant System The MAX–MIN Ant System (MMAS) is one of the best-known Ant Colony Optimization (ACO) algorithms proven to be efficient at finding satisfactory solutions to many difficult combinatorial optimization problems. The slow-down in Moore’s law, and the availability of graphics processing units (GPUs) capable of conducting general-purpose computations at high speed, has sparked considerable research efforts into the development of GPU-based ACO implementations. In this paper, we discuss a range of novel ideas for improving the GPU-based parallel MMAS implementation, allowing it to better utilize the computing power offered by two subsequent Nvidia GPU architectures. Specifically, based on the weighted reservoir sampling algorithm we propose a novel parallel implementation of the node selection procedure, which is at the heart of the MMAS and other ACO algorithms. We also present a memory-efficient implementation of another key-component – the tabu list structure – which is used in the ACO’s solution construction stage. The proposed implementations, combined with the existing approaches, lead to a total of six MMAS variants, which are evaluated on a set of Traveling Salesman Problem (TSP) instances ranging from 198 to 3795 cities. The results show that our MMAS implementation is competitive with state-of-the-art GPU-based and multi-core CPU-based parallel ACO implementations: in fact, the times obtained for the Nvidia V100 Volta GPU were up to 7.18x and 21.79x smaller, respectively. The fastest of the proposed MMAS variants is able to generate over 1 million candidate solutions per second when solving a 1002-city instance. Moreover, we show that, combined with the 2-opt local search heuristic, the proposed parallel MMAS finds high-quality solutions for the TSP instances with up to 18,512 nodes.
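The node-selection idea based on weighted reservoir sampling can be sketched in a few lines: every candidate city receives a random key u**(1/w), where w is its ACO weight, and the city with the largest key is chosen, which is equivalent to roulette-wheel selection but needs only one streaming pass over the candidates. The plain-Python version below is for orientation only (the paper's contribution is a parallel GPU implementation), and the pheromone/heuristic weight form is an assumed standard ACO rule rather than the paper's exact parameterization.

```python
import random

random.seed(0)

def weighted_reservoir_choice(candidates, weights):
    """Pick one candidate with probability proportional to its weight using the
    weighted reservoir sampling rule: key_i = u_i ** (1 / w_i), keep the largest
    key.  A single streaming pass suffices, which is what makes the scheme
    attractive for parallel node selection."""
    best_key, best_item = -1.0, None
    for item, w in zip(candidates, weights):
        if w <= 0.0:
            continue
        key = random.random() ** (1.0 / w)
        if key > best_key:
            best_key, best_item = key, item
    return best_item

# Illustrative ACO-style weights: pheromone ** alpha * heuristic ** beta (assumed form).
alpha, beta = 1.0, 2.0
unvisited = [1, 2, 3, 4]
pheromone = {1: 0.8, 2: 0.5, 3: 1.2, 4: 0.3}
distance = {1: 10.0, 2: 4.0, 3: 7.0, 4: 12.0}
weights = [pheromone[c] ** alpha * (1.0 / distance[c]) ** beta for c in unvisited]

print("next city:", weighted_reservoir_choice(unvisited, weights))
```

Because each key depends only on its own candidate, the keys can be generated by independent GPU threads and reduced with a parallel maximum, avoiding the prefix sums that a conventional roulette wheel needs.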
Recent Advances in Evolutionary Computation Evolutionary computation has experienced a tremendous growth in the last decade in both theoretical analyses and industrial applications. Its scope has evolved beyond its original meaning of “biological evolution” toward a wide variety of nature inspired computational algorithms and techniques, including evolutionary, neural, ecological, social and economical computation, etc., in a unified framework. Many research topics in evolutionary computation nowadays are not necessarily “evolutionary”. This paper provides an overview of some recent advances in evolutionary computation that have been made in CERCIA at the University of Birmingham, UK. It covers a wide range of topics in optimization, learning and design using evolutionary approaches and techniques, and theoretical results in the computational time complexity of evolutionary algorithms. Some issues related to future development of evolutionary computation are also discussed.
Evolutionary computation: comments on the history and current state Evolutionary computation has started to receive significant attention during the last decade, although the origins can be traced back to the late 1950s. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) (with links to genetic programming (GP) and classifier systems (CS)), evolution strategies (ES), and evolutionary programming (EP) by analysis and comparison of their most important constituents (i.e., representations, variation operators, reproduction, and selection mechanism). Finally, we give a brief overview of the manifold of application domains, although this necessarily must remain incomplete.
Robust Indoor Positioning Provided by Real-Time RSSI Values in Unmodified WLAN Networks The positioning methods based on received signal strength (RSS) measurements link the RSS values to the position of the mobile station (MS) to be located. Their accuracy depends on the suitability of the propagation models used for the actual propagation conditions. In indoor wireless networks, these propagation conditions are very difficult to predict due to the unwieldy and dynamic nature of the RSS. In this paper, we present a novel method which dynamically estimates the propagation models that best fit the propagation environments, by using only RSS measurements obtained in real time. This method is based on maximizing the compatibility of the MS-to-access-point (AP) distance estimates. Once the propagation models are estimated in real time, it is possible to accurately determine the distance between the MS and each AP. By means of these distance estimates, the location of the MS can be obtained by trilateration. The proposed method, evaluated through simulations and measurements in a real indoor environment, demonstrates its feasibility and suitability, since it outperforms conventional RSS-based indoor location methods without using any radio map information or a calibration stage.
Energy-Optimized Partial Computation Offloading in Mobile-Edge Computing With Genetic Simulated-Annealing-Based Particle Swarm Optimization Smart mobile devices (SMDs) can meet users' high expectations by executing computation-intensive applications, but they only have limited resources, including CPU, memory, battery power, and wireless medium. To tackle this limitation, partial computation offloading can be used as a promising method to schedule some tasks of applications from resource-limited SMDs to high-performance edge servers. However, it brings communication overhead issues caused by limited bandwidth and inevitably increases the latency of tasks offloaded to edge servers. Therefore, it is highly challenging to achieve a balance between high resource consumption in SMDs and high communication cost for providing energy-efficient and low-latency services to users. This work proposes a partial computation offloading method to minimize the total energy consumed by SMDs and edge servers by jointly optimizing the offloading ratio of tasks, CPU speeds of SMDs, allocated bandwidth of available channels, and transmission power of each SMD in each time slot. It jointly considers the execution time of tasks performed in SMDs and edge servers, and transmission time of data. It also jointly considers latency limits, CPU speeds, transmission power limits, available energy of SMDs, and the maximum number of CPU cycles and memories in edge servers. Considering these factors, a nonlinear constrained optimization problem is formulated and solved by a novel hybrid metaheuristic algorithm named genetic simulated annealing-based particle swarm optimization (GSP) to produce a close-to-optimal solution. GSP achieves joint optimization of computation offloading between a cloud data center and the edge, and resource allocation in the data center. Real-life data-based experimental results prove that it achieves lower energy consumption in less convergence time than its three typical peers.
Computer intrusion detection through EWMA for autocorrelated and uncorrelated data Reliability and quality of service from information systems has been threatened by cyber intrusions. To protect information systems from intrusions and thus assure reliability and quality of service, it is highly desirable to develop techniques that detect intrusions. Many intrusions manifest in anomalous changes in intensity of events occurring in information systems. In this study, we apply, tes...
Teaching-Learning-Based Optimization: An optimization method for continuous non-linear large scale problems An efficient optimization method called 'Teaching-Learning-Based Optimization (TLBO)' is proposed in this paper for finding global solutions to large-scale non-linear optimization problems. The proposed method is based on the effect of the influence of a teacher on the output of learners in a class. The basic philosophy of the method is explained in detail. The effectiveness of the method is tested on many benchmark problems with different characteristics, and the results are compared with other population-based methods.
Understanding Taxi Service Strategies From Taxi GPS Traces Taxi service strategies, as the crowd intelligence of massive taxi drivers, are hidden in their historical time-stamped GPS traces. Mining GPS traces to understand the service strategies of skilled taxi drivers can benefit the drivers themselves, passengers, and city planners in a number of ways. This paper intends to uncover the efficient and inefficient taxi service strategies based on a large-scale GPS historical database of approximately 7600 taxis over one year in a city in China. First, we separate the GPS traces of individual taxi drivers and link them with the revenue generated. Second, we investigate the taxi service strategies from three perspectives, namely, passenger-searching strategies, passenger-delivery strategies, and service-region preference. Finally, we represent the taxi service strategies with a feature matrix and evaluate the correlation between service strategies and revenue, informing which strategies are efficient or inefficient. We predict the revenue of taxi drivers based on their strategies and achieve a prediction residual as low as 2.35 RMB/h, which demonstrates that the taxi service strategies extracted with our proposed approach well characterize the driving behavior and performance of taxi drivers.
Adaptive fuzzy tracking control for switched uncertain strict-feedback nonlinear systems. • Adaptive tracking control for switched strict-feedback nonlinear systems is proposed. • The generalized fuzzy hyperbolic model is used to approximate nonlinear functions. • The designed controller has fewer design parameters compared with existing methods.
Energy harvesting algorithm considering max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with poor energy always have a bad effect on the data rate or max flow. These nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we assume an energy harvesting WSNs environment to investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy harvesting WSNs environments, sensor nodes can obtain energy replenishment by using MCs or collecting energy from nature by themselves. In our research, we use MCs to improve the energy of the sensor nodes by performing multiple rounds of unified scheduling, and finally achieve the purpose of increasing the max flow at sinks. Firstly, we model this problem as a Linear Program (LP) to search for the max flow in a round of charging scheduling and prove that the problem is NP-hard. In order to solve the problem, we propose a heuristic approach: deploying MCs in units of paths with the lowest energy node priority. To reduce the energy consumption of MCs and increase the charging efficiency, we also take the optimization of MCs' moving distance into our consideration. Finally, we extend the method to multiple rounds of scheduling called BottleNeck. Simulation results show that BottleNeck performs well at increasing max flow.
1.2
0.2
0.2
0.2
0.2
0.1
0.033333
0
0
0
0
0
0
0
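The MAX-MIN Ant System abstract above attributes part of its GPU speedup to a node-selection procedure built on weighted reservoir sampling. As a rough, CPU-side sketch of that idea only (not the paper's CUDA implementation), the snippet below picks an ant's next city by drawing, for each feasible city, the key u^(1/w) from its pheromone-times-heuristic weight w and keeping the largest key, which selects each city with probability proportional to w; the function name and parameter values are hypothetical.

```python
import math
import random

def select_next_city(current, feasible, pheromone, distance, alpha=1.0, beta=2.0):
    """Pick the next city proportionally to tau^alpha * eta^beta using
    single-item weighted reservoir sampling (key = u**(1/w), keep the max),
    which mirrors the roulette-wheel choice used in MMAS/ACO."""
    best_city, best_key = None, -1.0
    for city in feasible:
        tau = pheromone[current][city]
        eta = 1.0 / max(distance[current][city], 1e-12)  # heuristic desirability
        w = (tau ** alpha) * (eta ** beta)
        if w <= 0.0:
            continue
        key = random.random() ** (1.0 / w)   # Efraimidis-Spirakis style key
        if key > best_key:
            best_city, best_key = city, key
    return best_city
```

Because each candidate's key can be computed independently, the final arg-max can be evaluated with a parallel reduction, which is what makes this formulation attractive for GPU threads.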
Preliminary Study of Pressure Self-Sensing Miniature Magnetorheological Valves Magnetorheological (MR) valves have been included in many innovative engineering systems, such as dampers, clutches, or brakes, with different fields of application. They serve actuation purposes, while integrated displacement and velocity sensing in the actuators has been studied as well. In this paper, we present the working principle, design, and preliminary results of a miniature MR va...
Magnetorheological Fluid Haptic Shoes for Walking in VR In this article, we present RealWalk, a pair of haptic shoes for HMD-based VR, designed to create realistic sensations of ground surface deformation and texture using magnetorheological fluid (MR fluid). RealWalk offers a novel interaction scheme through the physical interaction between the shoes and the ground surfaces while walking in VR. Each shoe consists of two MR fluid actuators, an insole pressure sensor, and a foot position tracker. The MR fluid actuators are designed in the form of a multi-stacked disc structure with a long flow path to maximize the flow resistance. By changing the magnetic field intensity in the MR fluid actuators based on the ground material in the virtual scene, the viscosity of the MR fluid is changed accordingly. When a user steps on the ground with the shoes, the two MR fluid actuators are pressed down, creating a variety of ground material deformations such as snow, mud, and dry sand. We built an interactive VR application and compared RealWalk with vibrotactile-based haptic shoes in four different VR scenes: grass, sand, mud, and snow. We report that, compared to the vibrotactile-based haptic shoes, RealWalk provides higher ratings in all scenes for discrimination, realism, and satisfaction. We also report qualitative user feedback on their experiences.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
Footprints: history-rich tools for information foraging Inspired by Hill and Hollan's original work [7], we have been developing a theory of interaction history and building tools to apply this theory to navigation in a complex information space. We have built a series of tools - map, paths, annotations and signposts - based on a physical-world navigation metaphor. These tools have been in use for over a year. Our user study involved a controlled browse task and showed that users were able to get the same amount of work done with significantly less effort.
Distinctive Image Features from Scale-Invariant Keypoints This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
Accurate Self-Localization in RFID Tag Information Grids Using FIR Filtering Grid navigation spaces nested with the radio-frequency identification (RFID) tags are promising for industrial and other needs, because each tag can deliver information about a local two-dimensional or three-dimensional surrounding. The approach, however, requires high accuracy in vehicle self-localization. Otherwise, errors may lead to collisions, possibly even fatal ones. We propose a new extended finite impulse response (EFIR) filtering algorithm and show that it meets this need. The EFIR filter requires an optimal averaging interval, but does not involve the noise statistics, which are often not well known to the engineer. It is more accurate than the extended Kalman filter (EKF) under real operation conditions and its iterative algorithm has the Kalman form. Better performance of the proposed EFIR filter is demonstrated based on extensive simulations in a comparison to EKF, which is widely used in RFID tag grids. We also show that errors in noise covariances may provoke divergence in EKF, whereas the EFIR filter remains stable and is thus more robust.
A survey on sensor networks The advancement in wireless communications and electronics has enabled the development of low-cost sensor networks. The sensor networks can be used for various application areas (e.g., health, military, home). For different application areas, there are different technical issues that researchers are currently resolving. The current state of the art of sensor networks is captured in this article, where solutions are discussed under their related protocol stack layer sections. This article also points out the open research issues and intends to spark new interests and developments in this field.
Factual and Counterfactual Explanations for Black Box Decision Making. The rise of sophisticated machine learning models has brought accurate but obscure decision systems, which hide their logic, thus undermining transparency, trust, and the adoption of artificial intelligence (AI) in socially sensitive and safety-critical contexts. We introduce a local rule-based explanation method, providing faithful explanations of the decision made by a black box classifier on a ...
Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns This paper presents a theoretically very simple, yet efficient, multiresolution approach to gray-scale and rotation invariant texture classification based on local binary patterns and nonparametric discrimination of sample and prototype distributions. The method is based on recognizing that certain local binary patterns, termed "uniform" are fundamental properties of local image texture and their occurrence histogram is proven to be a very powerful texture feature. We derive a generalized gray-scale and rotation invariant operator presentation that allows for detecting the "uniform" patterns for any quantization of the angular space and for any spatial resolution and presents a method for combining multiple operators for multiresolution analysis. The proposed approach is very robust in terms of gray-scale variations since the operator is, by definition, invariant against any monotonic transformation of the gray scale. Another advantage is computational simplicity as the operator can be realized with a few operations in a small neighborhood and a lookup table. Excellent experimental results obtained in true problems of rotation invariance, where the classifier is trained at one particular rotation angle and tested with samples from other rotation angles, demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation invariant local binary patterns. These operators characterize the spatial configuration of local image texture and the performance can be further improved by combining them with rotation invariant variance measures that characterize the contrast of local image texture. The joint distributions of these orthogonal measures are shown to be very powerful tools for rotation invariant texture analysis.
ML estimation of a stochastic linear system with the EM algorithm and its application to speech recognition A nontraditional approach to the problem of estimating the parameters of a stochastic linear system is presented. The method is based on the expectation-maximization algorithm and can be considered as the continuous analog of the Baum-Welch estimation algorithm for hidden Markov models. The algorithm is used for training the parameters of a dynamical system model that is proposed for better representing the spectral dynamics of speech for recognition. It is assumed that the observed feature vectors of a phone segment are the output of a stochastic linear dynamical system, and it is shown how the evolution of the dynamics as a function of the segment length can be modeled using alternative assumptions. A phoneme classification task using the TIMIT database demonstrates that the approach is the first effective use of an explicit model for statistical dependence between frames of speech
Image analysis by Bessel-Fourier moments In this paper, we propose a new set of moments based on the Bessel function of the first kind, named Bessel-Fourier moments (BFMs), which are more suitable than orthogonal Fourier-Mellin and Zernike moments for image analysis and rotation invariant pattern recognition. Compared with orthogonal Fourier-Mellin and Zernike polynomials of the same degree, the new orthogonal radial polynomials have more zeros, and these zeros are more evenly distributed. The Bessel-Fourier moments can be thought of as generalized orthogonalized complex moments. Theoretical and experimental results show that the Bessel-Fourier moments perform better than the orthogonal Fourier-Mellin and Zernike moments (OFMMs and ZMs) in terms of image reconstruction capability and invariant recognition accuracy in noise-free, noisy and smooth distortion conditions.
Local Load Redistribution Attacks in Power Systems With Incomplete Network Information Power grid is one of the most critical infrastructures in a nation and could suffer a variety of cyber attacks. Recent studies have shown that an attacker can inject pre-determined false data into smart meters such that it can pass the residue test of conventional state estimator. However, the calculation of the false data vector relies on the network (topology and parameter) information of the entire grid. In practice, it is impossible for an attacker to obtain all network information of a power grid. Unfortunately, this does not make power systems immune to false data injection attacks. In this paper, we propose a local load redistribution attacking model based on incomplete network information and show that an attacker only needs to obtain the network information of the local attacking region to inject false data into smart meters in the local region without being detected by the state estimator. Simulations on the modified IEEE 14-bus system demonstrate the correctness and effectiveness of the proposed model. The results of this paper reveal the mechanism of local false data injection attacks and highlight the importance and complexity of defending power systems against false data injection attacks.
Collective feature selection to identify crucial epistatic variants. In this study, we were able to show that selecting variables using a collective feature selection approach could help in selecting true positive epistatic variables more frequently than applying any single method for feature selection via simulation studies. We were able to demonstrate the effectiveness of collective feature selection along with a comparison of many methods in our simulation analysis. We also applied our method to identify non-linear networks associated with obesity.
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal of developing interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which arise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in particular, in this paper, we enfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidating it based on our comprehensive empirical experience with research in kindergarten settings, both laboratory and real-world contexts, and (iii) providing a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
1.2
0.066667
0
0
0
0
0
0
0
0
0
0
0
0
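The local binary pattern abstract in the row above rests on the notion of "uniform" patterns, circular bit strings with at most two 0/1 transitions, and on a rotation-invariant code that counts the ones in such patterns. A minimal sketch of that mapping (the LBP riu2 idea) is given below; neighborhood sampling and thresholding against the center pixel are omitted, and the function names are my own.

```python
def transitions(pattern_bits):
    """Count 0/1 transitions in a circular binary pattern given as a list of bits."""
    p = len(pattern_bits)
    return sum(pattern_bits[i] != pattern_bits[(i + 1) % p] for i in range(p))

def lbp_riu2_code(pattern_bits):
    """Map a circular LBP bit pattern to its rotation-invariant 'uniform' code:
    uniform patterns (at most two transitions) are coded by their number of ones,
    and all non-uniform patterns collapse into a single bin with value P + 1."""
    p = len(pattern_bits)
    if transitions(pattern_bits) <= 2:
        return sum(pattern_bits)   # 0..P for uniform patterns
    return p + 1                   # one shared bin for everything else

# Example: one contiguous run of ones is uniform; an alternating pattern is not.
print(lbp_riu2_code([0, 0, 1, 1, 1, 0, 0, 0]))  # -> 3
print(lbp_riu2_code([1, 0, 1, 0, 1, 0, 1, 0]))  # -> 9 (non-uniform, P + 1)
```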
Software Defined Space-Air-Ground Integrated Vehicular Networks: Challenges and Solutions. This article proposes a software defined space-air-ground integrated network architecture for supporting diverse vehicular services in a seamless, efficient, and cost-effective manner. First, the motivations and challenges for integration of space-air-ground networks are reviewed. Second, a software defined network architecture with a layered structure is presented. To protect the legacy services ...
Vision, Requirements, and Technology Trend of 6G: How to Tackle the Challenges of System Coverage, Capacity, User Data-Rate and Movement Speed Since 5G new radio comes with non-standalone (NSA) and standalone (SA) versions in 3GPP, research on 6G has been put on the agenda by academia and industry. Though 6G is supposed to have much higher capabilities than 5G, there is not yet a clear description of what 6G is. In this article, a comprehensive discussion of 6G is given based on the review of 5G developments, covering visions and requiremen...
Towards secure 5G networks: A Survey. To support various new use cases from vertical industries besides enhanced mobile broadband communication services, the 5G system aims to provide higher speed, lower latency, and massive connectivity to various devices by leveraging the evolution of 4G with the addition of new radio technology, service-based architecture, and cloud infrastructure. The introduction of new technologies, new use cases and people’s growing concerns regarding privacy issues brings new challenges to providing security and privacy protection for 5G. This paper makes an extensive review of the state of the art towards ensuring 5G security and privacy. By analyzing the lessons from the 4G security system, the requirements from new scenarios and models, the challenges resulting from new technology and paradigm, we identify typical security and privacy issues to be solved in 5G. Then, we discuss potential solutions from academia and industry to secure 5G networks from several perspectives, including the overall 5G security framework, core network, radio access network, cloud infrastructure, and the Internet of things. Finally, several key open issues and potential research directions are identified and discussed.
AI and 6G Security: Opportunities and Challenges While 5G is well-known for network cloudification with micro-service based architecture, the next generation networks or the 6G era is closely coupled with intelligent network orchestration and management. Hence, the role of Artificial Intelligence (AI) is immense in the envisioned 6G paradigm. However, the alliance between 6G and AI may also be a double-edged sword, since AI can be applied both to protect and to infringe security and privacy. In particular, the end-to-end automation of future networks demands proactive threat discovery, the application of intelligent mitigation techniques, and the achievement of self-sustaining networks in 6G. Therefore, to consolidate and solidify the role of AI in securing 6G networks, this article presents how AI can be leveraged in 6G security, along with possible challenges and solutions.
Redefining Wireless Communication for 6G: Signal Processing Meets Deep Learning With Deep Unfolding The year 2019 witnessed the rollout of the 5G standard, which promises to offer significant data rate improvement over 4G. While 5G is still in its infancy, there has been an increasing shift in the research community toward communication technologies beyond 5G. The recent emergence of machine learning approaches for enhancing wireless communications and empowering them with much-desired intelligence ...
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
JPEG Error Analysis and Its Applications to Digital Image Forensics JPEG is one of the most extensively used image formats. Understanding the inherent characteristics of JPEG may play a useful role in digital image forensics. In this paper, we introduce JPEG error analysis to the study of image forensics. The main errors of JPEG include quantization, rounding, and truncation errors. Through theoretically analyzing the effects of these errors on single and double JPEG compression, we have developed three novel schemes for image forensics including identifying whether a bitmap image has previously been JPEG compressed, estimating the quantization steps of a JPEG image, and detecting the quantization table of a JPEG image. Extensive experimental results show that our new methods significantly outperform existing techniques especially for the images of small sizes. We also show that the new method can reliably detect JPEG image blocks which are as small as 8 × 8 pixels and compressed with quality factors as high as 98. This performance is important for analyzing and locating small tampered regions within a composite image.
Highly dynamic Destination-Sequenced Distance-Vector routing (DSDV) for mobile computers An ad-hoc network is the cooperative engagement of a collection of Mobile Hosts without the required intervention of any centralized Access Point. In this paper we present an innovative design for the operation of such ad-hoc networks. The basic idea of the design is to operate each Mobile Host as a specialized router, which periodically advertises its view of the interconnection topology with other Mobile Hosts within the network. This amounts to a new sort of routing protocol. We have investigated modifications to the basic Bellman-Ford routing mechanisms, as specified by RIP [5], to make it suitable for a dynamic and self-starting network mechanism as is required by users wishing to utilize ad hoc networks. Our modifications address some of the previous objections to the use of Bellman-Ford, related to the poor looping properties of such algorithms in the face of broken links and the resulting time dependent nature of the interconnection topology describing the links between the Mobile Hosts. Finally, we describe the ways in which the basic network-layer routing can be modified to provide MAC-layer support for ad-hoc networks.
The FERET Evaluation Methodology for Face-Recognition Algorithms Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems. The Face Recognition Technology (FERET) program has addressed both issues through the FERET database of facial images and the establishment of the FERET tests. To date, 14,126 images from 1,199 individuals are included in the FERET database, which is divided into development and sequestered portions of the database. In September 1996, the FERET program administered the third in a series of FERET face-recognition tests. The primary objectives of the third test were to 1) assess the state of the art, 2) identify future areas of research, and 3) measure algorithm performance.
Neural fitted q iteration – first experiences with a data efficient neural reinforcement learning method This paper introduces NFQ, an algorithm for efficient and effective training of a Q-value function represented by a multi-layer perceptron. Based on the principle of storing and reusing transition experiences, a model-free, neural network based Reinforcement Learning algorithm is proposed. The method is evaluated on three benchmark problems. It is shown empirically, that reasonably few interactions with the plant are needed to generate control policies of high quality.
Labels and event processes in the Asbestos operating system Asbestos, a new operating system, provides novel labeling and isolation mechanisms that help contain the effects of exploitable software flaws. Applications can express a wide range of policies with Asbestos's kernel-enforced labels, including controls on interprocess communication and system-wide information flow. A new event process abstraction defines lightweight, isolated contexts within a single process, allowing one process to act on behalf of multiple users while preventing it from leaking any single user's data to others. A Web server demonstration application uses these primitives to isolate private user data. Since the untrusted workers that respond to client requests are constrained by labels, exploited workers cannot directly expose user data except as allowed by application policy. The server application requires 1.4 memory pages per user for up to 145,000 users and achieves connection rates similar to Apache, demonstrating that additional security can come at an acceptable cost.
Switching Stabilization for a Class of Slowly Switched Systems In this technical note, the problem of switching stabilization for slowly switched linear systems is investigated. In particular, the considered systems can be composed of all unstable subsystems. Based on the invariant subspace theory, the switching signal with mode-dependent average dwell time (MDADT) property is designed to exponentially stabilize the underlying system. Furthermore, sufficient condition of stabilization for switched systems with all stable subsystems under MDADT switching is also given. The correctness and effectiveness of the proposed approaches are illustrated by a numerical example.
Neural network adaptive tracking control for a class of uncertain switched nonlinear systems. • We study tracking control of switched uncertain nonlinear systems under an arbitrary switching signal. • A multilayer neural network adaptive controller with multilayer weight-norm adaptive estimation has been designed. • The adaptive law is extended from calculating only the second-layer weights of the neural network to calculating the weights of both layers. • The proposed controller greatly improves the tracking error performance of the closed-loop system.
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies have focused on the structure design and assistance force optimization of soft LLEs, little work has been conducted on the hardware circuits design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and introduce its hardware circuits design. A soft LLE for hip flexion assistance and a hardware circuits system with scalability were proposed. To assess the efficacy of the soft LLE, experimental tests evaluating the sensor data acquisition, force tracking performance, lower limb muscle activity and metabolic cost were conducted. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
1.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
0
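Among the abstracts above, the destination-sequenced distance-vector (DSDV) protocol is described as a modified Bellman-Ford scheme in which advertised routes carry destination-originated sequence numbers. A toy sketch of the usual DSDV table-update preference (a newer sequence number always wins; equal sequence numbers fall back to the smaller hop count) is shown below under a simplified route representation of my own; it is an illustration of the rule, not the paper's full protocol.

```python
from dataclasses import dataclass

@dataclass
class Route:
    dest: str
    next_hop: str
    metric: int      # hop count
    seq_no: int      # destination-originated sequence number

def should_replace(current: Route, advertised: Route) -> bool:
    """DSDV-style preference: strictly newer sequence number wins; for an equal
    sequence number, the route with the smaller hop count wins."""
    if advertised.seq_no > current.seq_no:
        return True
    return advertised.seq_no == current.seq_no and advertised.metric < current.metric

def update_table(table: dict, neighbor: str, advertised: Route) -> None:
    """Install the advertised route (as heard from `neighbor`, one hop further away)
    if it is preferable to what the table currently holds for that destination."""
    candidate = Route(advertised.dest, neighbor, advertised.metric + 1, advertised.seq_no)
    current = table.get(candidate.dest)
    if current is None or should_replace(current, candidate):
        table[candidate.dest] = candidate
```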
Adaptive Learning-Based Task Offloading for Vehicular Edge Computing Systems. The vehicular edge computing system integrates the computing resources of vehicles, and provides computing services for other vehicles and pedestrians with task offloading. However, the vehicular task offloading environment is dynamic and uncertain, with fast varying network topologies, wireless channel states, and computing workloads. These uncertainties bring extra challenges to task offloading. In this paper, we consider the task offloading among vehicles, and propose a solution that enables vehicles to learn the offloading delay performance of their neighboring vehicles while offloading computation tasks. We design an adaptive learning based task offloading (ALTO) algorithm based on the multi-armed bandit theory, in order to minimize the average offloading delay. ALTO works in a distributed manner without requiring frequent state exchange, and is augmented with input-awareness and occurrence-awareness to adapt to the dynamic environment. The proposed algorithm is proved to have a sublinear learning regret. Extensive simulations are carried out under both synthetic scenario and realistic highway scenario, and results illustrate that the proposed algorithm achieves low delay performance, and decreases the average delay up to 30% compared with the existing upper confidence bound based learning algorithm.
Global optimization advances in Mixed-Integer Nonlinear Programming, MINLP, and Constrained Derivative-Free Optimization, CDFO. • We review the recent advances in global optimization for Mixed Integer Nonlinear Programming, MINLP. • We review the recent advances in global optimization for Constrained Derivative-Free Optimization, CDFO. • We present theoretical contributions, software implementations and applications for both MINLP and CDFO. • We discuss possible interactions between the two areas of MINLP and CDFO. • We present a complete test suite for MINLP and CDFO algorithms.
Energy-Efficient Dynamic Task Offloading for Energy Harvesting Mobile Cloud Computing Mobile-edge cloud computing (MEC) as an emerging and prospective computing paradigm, can significantly enhance computation capability and prolong the lifetime of mobile devices (MDs) by offloading computation-intensive tasks to the cloud. This paper considers applying the simultaneous wireless information and power transfer (SWIPT) technique to a multi-user computation offloading problem for mobile-edge cloud computing, where energy-limited mobile devices (MDs) harvest energy from the ambient radio-frequency (RF) signal. We investigate partial computation offloading by jointly optimizing MDs' clock frequency, transmit power and offloading ratio with the system design objective of minimizing energy cost of mobile devices. To this end, we first formulate an energy cost minimization problem constrained by task completion time and finite mobile-edge cloud computation capacity. Then, by exploiting alternative optimization (AO) based on difference of convex function (DC) programming and linear programming, we design an iterative algorithm for clock frequency control, transmission power allocation, offloading ratio and power splitting ratio to solve the non-convex optimization problem. Our simulation results reveal that the proposed algorithm can converge within a few iterations and yield minimum system energy cost.
Distributed Mechanism for Computation Offloading Task Routing in Mobile Edge Cloud Network The paper proposes a distributed mechanism for computation offloading task routing and dispatching in Mobile Edge Cloud (MEC) networks, based on an evolutional Internet framework named Big IP (BPP). The necessary information for computation offloading requests and server status updates is proposed in the paper, which can be carried in the Instructions and Metadata blocks as the extensions to the current IP packet. The numerical results are given to show the performance efficiency of the proposed BPP based mechanism in providing the high precision latency guarantee and reducing the control overhead for computation offloading.
Intelligent Edge Computing in Internet of Vehicles: A Joint Computation Offloading and Caching Solution Recently, Internet of Vehicles (IoV) has become one of the most active research fields in both academia and industry, which exploits resources of vehicles and Road Side Units (RSUs) to execute various vehicular applications. Due to the increasing number of vehicles and the asymmetrical distribution of traffic flows, it is essential for the network operator to design intelligent offloading strategies to improve network performance and provide high-quality services for users. However, the lack of global information and the time-varying nature of IoVs make it challenging to perform effective offloading and caching decisions under long-term energy constraints of RSUs. Since Artificial Intelligence (AI) and machine learning can greatly enhance the intelligence and the performance of IoVs, we push AI inspired computing, caching and communication resources to the proximity of smart vehicles, which jointly enable RSU peer offloading, vehicle-to-RSU offloading and content caching in the IoV framework. A Mixed Integer Non-Linear Programming (MINLP) problem is formulated to minimize total network delay, consisting of communication delay, computation delay, network congestion delay and content downloading delay of all users. Then, we develop an online multi-decision making scheme (named OMEN) by leveraging the Lyapunov optimization method to solve the formulated problem, and prove that OMEN achieves near-optimal performance. Leveraging strong cognition of AI, we put forward an imitation learning enabled branch-and-bound solution in edge intelligent IoVs to speed up the problem solving process with few training samples. Experimental results based on real-world traffic data demonstrate that our proposed method outperforms other methods from various aspects.
Collaborative Computation Offloading for Multiaccess Edge Computing Over Fiber-Wireless Networks. By offloading the computation tasks of the mobile devices (MDs) to the edge server, mobile-edge computing (MEC) provides a new paradigm to meet the increasing computation demands from mobile applications. However, existing mobile-edge computation offloading (MECO) research only took the resource allocation between the MDs and the MEC servers into consideration, and ignored the huge computation res...
Distinctive Image Features from Scale-Invariant Keypoints This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
ImageNet Large Scale Visual Recognition Challenge. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
Revenue-optimal task scheduling and resource management for IoT batch jobs in mobile edge computing With the growing prevalence of Internet of Things (IoT) devices and technology, a burgeoning computing paradigm, namely mobile edge computing (MEC), has been proposed and designed to accommodate the application requirements of IoT scenarios. In this paper, we focus on the problems of dynamic task scheduling and resource management in the MEC environment, with the specific objective of achieving the optimal revenue earned by edge service providers. While the majority of task scheduling and resource management algorithms are formulated as an integer programming (IP) problem and solved in an unfavorable NP-hard manner, we innovatively investigate the problem structure and identify a favorable property, namely totally unimodular constraints. The totally unimodular property further helps to design an equivalent linear programming (LP) problem which can be efficiently and elegantly solved at polynomial computational complexity. In order to evaluate our proposed approach, we conduct simulations based on a real-life IoT dataset to verify the effectiveness and efficiency of our approach.
Efficient k-out-of-n oblivious transfer schemes with adaptive and non-adaptive queries In this paper we propose efficient two-round k-out-of-n oblivious transfer schemes, in which R sends O(k) messages to S, and S sends O(n) messages back to R. The computation cost of R and S is reasonable. The choices of R are unconditionally secure. For the basic scheme, the secrecy of unchosen messages is guaranteed if the Decisional Diffie-Hellman problem is hard. When k=1, our basic scheme is as efficient as the most efficient 1-out-of-n oblivious transfer scheme. Our schemes have the nice property of universal parameters, that is, each pair of R and S need neither hold any secret key nor perform any prior setup (initialization). The system parameters can be used by all senders and receivers without any trapdoor specification. Our k-out-of-n oblivious transfer schemes are the most efficient ones in terms of the communication cost, in both rounds and the number of messages. Moreover, one of our schemes can be extended in a straightforward way to an adaptive k-out-of-n oblivious transfer scheme, which allows the receiver R to choose the messages one by one adaptively. In our adaptive-query scheme, S sends O(n) messages to R in one round in the commitment phase. For each query of R, only O(1) messages are exchanged and O(1) operations are performed. In fact, the number k of queries need not be pre-fixed or known beforehand. This makes our scheme highly flexible.
Minimum acceleration criterion with constraints implies bang-bang control as an underlying principle for optimal trajectories of arm reaching movements. Rapid arm-reaching movements serve as an excellent test bed for any theory about trajectory formation. How are these movements planned? A minimum acceleration criterion has been examined in the past, and the solution obtained, based on the Euler-Poisson equation, failed to predict that the hand would begin and end the movement at rest (i.e., with zero acceleration). Therefore, this criterion was rejected in favor of the minimum jerk, which was proved to be successful in describing many features of human movements. This letter follows an alternative approach and solves the minimum acceleration problem with constraints using Pontryagin's minimum principle. We use the minimum principle to obtain minimum acceleration trajectories and use the jerk as a control signal. In order to find a solution that does not include nonphysiological impulse functions, constraints on the maximum and minimum jerk values are assumed. The analytical solution provides a three-phase piecewise constant jerk signal (bang-bang control) where the magnitude of the jerk and the two switching times depend on the magnitude of the maximum and minimum available jerk values. This result fits the observed trajectories of reaching movements and takes into account both the extrinsic coordinates and the muscle limitations in a single framework. The minimum acceleration with constraints principle is discussed as a unifying approach for many observations about the neural control of movements.
Adaptive dynamic surface control of a class of nonlinear systems with unknown direction control gains and input saturation. In this paper, adaptive neural network based dynamic surface control (DSC) is developed for a class of nonlinear strict-feedback systems with unknown direction control gains and input saturation. A Gaussian error function based saturation model is employed such that the backstepping technique can be used in the control design. The explosion of complexity in traditional backstepping design is avoided by utilizing DSC. Based on backstepping combined with DSC, adaptive radial basis function neural network control is developed to guarantee that all the signals in the closed-loop system are globally bounded, and the tracking error converges to a small neighborhood of the origin by appropriately choosing design parameters. Simulation results demonstrate the effectiveness of the proposed approach and that good performance is guaranteed even when both the saturation constraints and the wrong control direction occur.
Distributed Kalman consensus filter with event-triggered communication: Formulation and stability analysis. • The problem of distributed state estimation in sensor networks with event-triggered communication schedules on both the sensor-to-estimator channel and the estimator-to-estimator channel is studied. • An event-triggered KCF is designed by deriving the optimal Kalman gain matrix which minimizes the mean squared error. • A computationally scalable form of the proposed filter is presented by some approximations. • An appropriate choice of the consensus gain matrix is provided to ensure the stochastic stability of the proposed filter.
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies have focused on the structure design and assistance force optimization of soft LLEs, little work has been conducted on the hardware circuits design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and introduce its hardware circuits design. A soft LLE for hip flexion assistance and a hardware circuits system with scalability were proposed. To assess the efficacy of the soft LLE, experimental tests evaluating the sensor data acquisition, force tracking performance, lower limb muscle activity and metabolic cost were conducted. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
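The adaptive learning-based task offloading (ALTO) abstract in the row above casts the choice of which neighboring vehicle to offload to as a multi-armed bandit problem and benchmarks against an upper-confidence-bound learner. The sketch below is a generic UCB1 selector over observed offloading delays, with delays mapped to rewards via an assumed upper bound; it illustrates the bandit viewpoint only and is not the ALTO algorithm itself, and the class and parameter names are my own.

```python
import math

class UCB1Offloader:
    """Generic UCB1 learner that picks which neighboring vehicle to offload to,
    treating (1 - normalized delay) as the reward of an arm."""
    def __init__(self, neighbors, max_delay):
        self.neighbors = list(neighbors)
        self.max_delay = float(max_delay)            # assumed upper bound on delay
        self.counts = {n: 0 for n in self.neighbors}
        self.mean_reward = {n: 0.0 for n in self.neighbors}
        self.t = 0

    def choose(self):
        self.t += 1
        # Play every arm once before applying the confidence bound.
        for n in self.neighbors:
            if self.counts[n] == 0:
                return n
        def ucb(n):
            bonus = math.sqrt(2.0 * math.log(self.t) / self.counts[n])
            return self.mean_reward[n] + bonus
        return max(self.neighbors, key=ucb)

    def observe(self, neighbor, delay):
        """Update the empirical mean reward after observing one offloading delay."""
        reward = max(0.0, 1.0 - delay / self.max_delay)
        self.counts[neighbor] += 1
        c = self.counts[neighbor]
        self.mean_reward[neighbor] += (reward - self.mean_reward[neighbor]) / c
```

In use, a vehicle would call choose() before each offloading decision and observe() once the task's round-trip delay is known, so exploration of rarely used neighbors is balanced against exploiting the historically fastest one.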
SURFER: A Secure SDN-based Routing Protocol for Internet of Vehicles Software-defined networking (SDN) is becoming the most dominant method for network management. By decoupling the control plane from the data plane, SDN provides a centralized view and more flexibility, scalability, and global knowledge of the network. In a previous paper, we presented ROAMER, a routing protocol that exploits roadside units (RSUs) in order to route messages within vehicular ad hoc ...
A Survey of Ant Colony Optimization Based Routing Protocols for Mobile Ad Hoc Networks. Developing highly efficient routing protocols for Mobile Ad hoc NETworks (MANETs) is a challenging task. In order to fulfill multiple routing requirements, such as low packet delay, high packet delivery rate, and effective adaptation to network topology changes with low control overhead, and so on, new ways to approximate solutions to the known NP-hard optimization problem of routing in MANETs have to be investigated. Swarm intelligence (SI)-inspired algorithms have attracted a lot of attention, because they can offer possible optimized solutions ensuring high robustness, flexibility, and low cost. Moreover, they can solve large-scale sophisticated problems without a centralized control entity. A successful example in the SI field is the ant colony optimization (ACO) meta-heuristic. It presents a common framework for approximating solutions to NP-hard optimization problems. ACO has been successfully applied to balance the various routing related requirements in dynamic MANETs. This paper presents a comprehensive survey and comparison of various ACO-based routing protocols in MANETs. The main contributions of this survey include: 1) introducing the ACO principles as applied in routing protocols for MANETs; 2) classifying ACO-based routing approaches reviewed in this paper into five main categories; 3) surveying and comparing the selected routing protocols from the perspective of design and simulation parameters; and 4) discussing open issues and future possible design directions of ACO-based routing protocols.
A Microbial Inspired Routing Protocol for VANETs. We present a bio-inspired unicast routing protocol for vehicular ad hoc networks which uses the cellular attractor selection mechanism to select next hops. The proposed unicast routing protocol based on attractor selecting (URAS) is an opportunistic routing protocol, which is able to change itself adaptively to the complex and dynamic environment by routing feedback packets. We further employ a mu...
Improvement of GPSR Protocol in Vehicular Ad Hoc Network. In a vehicular ad hoc network (VANET), vehicles always move at high speed, which may cause the network topology to change frequently. This is challenging for routing protocols of VANET. Greedy Perimeter Stateless Routing (GPSR) is a representative routing protocol of VANET. However, when constructing a routing path, GPSR tends to select a next-hop node in greedy forwarding that easily moves out of communication range, and it builds redundant paths in perimeter forwarding. To solve the above-mentioned problems, we propose the Maxduration-Minangle GPSR (MM-GPSR) routing protocol in this paper. In greedy forwarding of MM-GPSR, by defining cumulative communication duration to represent the stability of neighbor nodes, the neighbor node with the maximum cumulative communication duration will be selected as the next hop node. In perimeter forwarding of MM-GPSR, when greedy forwarding fails, the concept of minimum angle is introduced as the criterion of the optimal next hop node. By taking the position of neighbor nodes into account and calculating angles formed between neighbors and the destination node, the neighbor node with the minimum angle will be selected as the next hop node. By using NS-2 and VanetMobiSim, simulations demonstrate that compared with GPSR, MM-GPSR has obvious improvements in reducing the packet loss rate, decreasing the end-to-end delay and increasing the throughput, and is more suitable for VANET.
Secure Real-Time Traffic Data Aggregation With Batch Verification for Vehicular Cloud in VANETs The vehicular cloud provides many significant advantages to Vehicular ad-hoc Networks (VANETs), such as unlimited storage space, powerful computing capability and timely traffic services. Traffic data aggregation in the vehicular cloud, which can aggregate traffic data from vehicles for further processing and sharing, is very important. Incorrect traffic data feedback may affect traffic safety; therefore, the security of traffic data aggregation should be ensured. In this paper, by using the property of data recovery in the message recovery signature (MRS), we propose a secure real-time traffic data aggregation scheme for vehicular cloud in VANETs. In the proposed scheme, the validity of vehicles' signatures is verified, and then the original traffic data is recovered from the signatures. Moreover, the proposed scheme supports batch verification for multiple vehicles' signatures. Due to the advantages of the MRS, security features such as data confidentiality, privacy preservation and replay attack resistance are preserved. In addition, the comparison and simulation results indicate that the proposed scheme is superior in comparison to previous schemes with respect to the communication and computational cost.
A Survey of QoS-Aware Routing Protocols for the MANET-WSN Convergence Scenarios in IoT Networks Wireless Sensor Network (WSN) and Mobile Ad hoc Network (MANET) have attracted special attention because they can serve as communication means in many areas such as healthcare, military, smart traffic and smart cities. Nowadays, as all devices can be connected to a network forming the Internet of Things (IoT), the integration of WSN, MANET and other networks into IoT is indispensable. We investigate the convergence of WSN and MANET in IoT and consider a fundamental problem, that is, how a converged (WSN-MANET) network provides quality of service (QoS) guarantees to rich multimedia applications. This is very important because the network performances of WSN and MANET are quite low, while multimedia applications always require quality of service at certain levels. In this work, we survey the QoS-guaranteed routing protocols for WSN-MANETs that have been published in the IEEE Xplore Digital Library over the last decade. Then, based on our findings, we suggest future open research directions.
Efficient and Secure Routing Protocol Based on Artificial Intelligence Algorithms With UAV-Assisted for Vehicular Ad Hoc Networks in Intelligent Transportation Systems Vehicular Ad hoc Networks (VANETs), which are considered a subset of Mobile Ad hoc Networks (MANETs), can be applied in the field of transportation, especially in Intelligent Transportation Systems (ITS). The routing process in these networks is a challenging task due to rapid topology changes, high vehicle mobility and frequent link disconnections. Therefore, developing an efficient routing pro...
Wireless sensor network survey A wireless sensor network (WSN) has important applications such as remote environmental monitoring and target tracking. This has been enabled by the availability, particularly in recent years, of sensors that are smaller, cheaper, and intelligent. These sensors are equipped with wireless interfaces with which they can communicate with one another to form a network. The design of a WSN depends significantly on the application, and it must consider factors such as the environment, the application's design objectives, cost, hardware, and system constraints. The goal of our survey is to present a comprehensive review of the recent literature since the publication of [I.F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, A survey on sensor networks, IEEE Communications Magazine, 2002]. Following a top-down approach, we give an overview of several new applications and then review the literature on various aspects of WSNs. We classify the problems into three different categories: (1) internal platform and underlying operating system, (2) communication protocol stack, and (3) network services, provisioning, and deployment. We review the major development in these three categories and outline new challenges.
Energy-Aware Task Offloading and Resource Allocation for Time-Sensitive Services in Mobile Edge Computing Systems Mobile Edge Computing (MEC) is a promising architecture to reduce the energy consumption of mobile devices and provide satisfactory quality-of-service to time-sensitive services. How to jointly optimize task offloading and resource allocation to minimize the energy consumption subject to the latency requirement remains an open problem, which motivates this paper. When the latency constraint is tak...
Symbolic model checking for real-time systems We describe finite-state programs over real-numbered time in a guarded-command language with real-valued clocks or, equivalently, as finite automata with real-valued clocks. Model checking answers the question which states of a real-time program satisfy a branching-time specification (given in an extension of CTL with clock variables). We develop an algorithm that computes this set of states symbolically as a fixpoint of a functional on state predicates, without constructing the state space. For this purpose, we introduce a μ-calculus on computation trees over real-numbered time. Unfortunately, many standard program properties, such as response for all nonzeno execution sequences (during which time diverges), cannot be characterized by fixpoints: we show that the expressiveness of the timed μ-calculus is incomparable to the expressiveness of timed CTL. Fortunately, this result does not impair the symbolic verification of "implementable" real-time programs-those whose safety constraints are machine-closed with respect to diverging time and whose fairness constraints are restricted to finite upper bounds on clock values. All timed CTL properties of such programs are shown to be computable as finitely approximable fixpoints in a simple decidable theory.
The industrial indoor channel: large-scale and temporal fading at 900, 2400, and 5200 MHz In this paper, large-scale fading and temporal fading characteristics of the industrial radio channel at 900, 2400, and 5200 MHz are determined. In contrast to measurements performed in houses and in office buildings, few attempts have been made until now to model propagation in industrial environments. In this paper, the industrial environment is categorized into different topographies. Industrial topographies are defined separately for large-scale and temporal fading, and their definition is based upon the specific physical characteristics of the local surroundings affecting both types of fading. Large-scale fading is well expressed by a one-slope path-loss model and excellent agreement with a lognormal distribution is obtained. Temporal fading is found to be Ricean and Ricean K-factors have been determined. Ricean K-factors are found to follow a lognormal distribution.
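For reference, the one-slope model used for large-scale fading can be written as PL(d) = PL(d0) + 10*n*log10(d/d0) + X_sigma, where X_sigma is a zero-mean Gaussian term in dB (lognormal shadowing in linear scale). The short Python sketch below evaluates that model; the parameter values are placeholders, not the fitted values reported for the industrial topographies.

```python
import numpy as np

def one_slope_path_loss(d, pl_d0, n, d0=1.0, sigma_db=0.0, rng=None):
    """One-slope model: PL(d) = PL(d0) + 10*n*log10(d/d0) + X_sigma,
    with X_sigma ~ N(0, sigma_db^2) modelling lognormal shadowing.
    pl_d0, n and sigma_db are environment-specific; the values used in the
    example call are hypothetical, not the paper's fitted parameters."""
    rng = rng or np.random.default_rng()
    shadowing = rng.normal(0.0, sigma_db, size=np.shape(d))
    return pl_d0 + 10.0 * n * np.log10(np.asarray(d, dtype=float) / d0) + shadowing

# Example: path loss at 5-50 m with placeholder parameters.
d = np.linspace(5, 50, 10)
pl = one_slope_path_loss(d, pl_d0=70.0, n=2.2, sigma_db=6.0)
```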
Stable fuzzy logic control of a general class of chaotic systems This paper proposes a new approach to the stable design of fuzzy logic control systems that deal with a general class of chaotic processes. The stable design is carried out on the basis of a stability analysis theorem, which employs Lyapunov's direct method and the separate stability analysis of each rule in the fuzzy logic controller (FLC). The stability analysis theorem offers sufficient conditions for the stability of a general class of chaotic processes controlled by Takagi-Sugeno-Kang FLCs. The approach suggested in this paper is advantageous because inserting a new rule requires the fulfillment of only one of the conditions of the stability analysis theorem. Two case studies concerning the fuzzy logic control of representative chaotic systems that belong to the general class of chaotic systems are included in order to illustrate our stable design approach. A set of simulation results is given to validate the theoretical results.
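To make the controller structure concrete, the sketch below shows generic Takagi-Sugeno-Kang inference with linear state-feedback consequents: each rule's output is weighted by its firing strength, and the weighted average is the control signal. The membership functions and gains are hypothetical, and the sketch does not include the Lyapunov-based stability conditions that are the paper's actual contribution.

```python
import numpy as np

def tsk_controller(x, rules):
    """Takagi-Sugeno-Kang inference with linear state-feedback consequents:
    each rule is (membership_function, gain_vector); the control signal is the
    firing-strength-weighted average of the rule outputs u_i = k_i . x.
    This is a generic TSK sketch, not the stability-checked design of the paper."""
    weights = np.array([mu(x) for mu, _ in rules])
    outputs = np.array([np.dot(k, x) for _, k in rules])
    return float(np.dot(weights, outputs) / (weights.sum() + 1e-12))

# Hypothetical two-rule controller for a 2-state chaotic process.
gauss = lambda c, s: (lambda x: np.exp(-((x[0] - c) ** 2) / (2 * s ** 2)))
rules = [(gauss(-1.0, 0.5), np.array([-2.0, -0.5])),
         (gauss(+1.0, 0.5), np.array([-3.0, -0.8]))]
u = tsk_controller(np.array([0.3, -0.1]), rules)
```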
Survey of Fog Computing: Fundamental, Network Applications, and Research Challenges. Fog computing is an emerging paradigm that extends computation, communication, and storage facilities toward the edge of a network. Compared to traditional cloud computing, fog computing can support delay-sensitive service requests from end-users (EUs) with reduced energy consumption and low traffic congestion. Basically, fog networks are viewed as offloading to core computation and storage. Fog n...
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomous tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.24
0.24
0.24
0.24
0.24
0.24
0.06
0
0
0
0
0
0
0
Cross-Modal Contrastive Learning for Text-to-Image Generation The output of text-to-image synthesis systems should be coherent, clear, photo-realistic scenes with high semantic fidelity to their conditioned text descriptions. Our Cross-Modal Contrastive Generative Adversarial Network (XMC-GAN) addresses this challenge by maximizing the mutual information between image and text. It does this via multiple contrastive losses which capture inter-modality and int...
Space-time super-resolution. We propose a method for constructing a video sequence of high space-time resolution by combining information from multiple low-resolution video sequences of the same dynamic scene. Super-resolution is performed simultaneously in time and in space. By "temporal super-resolution," we mean recovering rapid dynamic events that occur faster than regular frame-rate. Such dynamic events are not visible (or else are observed incorrectly) in any of the input sequences, even if these are played in "slow-motion." The spatial and temporal dimensions are very different in nature, yet are interrelated. This leads to interesting visual trade-offs in time and space and to new video applications. These include: 1) treatment of spatial artifacts (e.g., motion-blur) by increasing the temporal resolution and 2) combination of input sequences of different space-time resolutions (e.g., NTSC, PAL, and even high quality still images) to generate a high quality video sequence. We further analyze and compare characteristics of temporal super-resolution to those of spatial super-resolution. These include: How many video cameras are needed to obtain increased resolution? What is the upper bound on resolution improvement via super-resolution? What is the temporal analogue to the spatial "ringing" effect?
Transient attributes for high-level understanding and editing of outdoor scenes We live in a dynamic visual world where the appearance of scenes changes dramatically from hour to hour or season to season. In this work we study "transient scene attributes" -- high level properties which affect scene appearance, such as "snow", "autumn", "dusk", "fog". We define 40 transient attributes and use crowdsourcing to annotate thousands of images from 101 webcams. We use this "transient attribute database" to train regressors that can predict the presence of attributes in novel images. We demonstrate a photo organization method based on predicted attributes. Finally we propose a high-level image editing method which allows a user to adjust the attributes of a scene, e.g. change a scene to be "snowy" or "sunset". To support attribute manipulation we introduce a novel appearance transfer technique which is simple and fast yet competitive with the state-of-the-art. We show that we can convincingly modify many transient attributes in outdoor scenes.
Semantic Understanding of Scenes through the ADE20K Dataset. Semantic understanding of visual scenes is one of the holy grails of computer vision. Despite the efforts of the community in data collection, there are still few image datasets covering a wide range of scenes and object categories with pixel-wise annotations for scene understanding. In this work, we present a densely annotated dataset ADE20K, which spans diverse annotations of scenes, objects, parts of objects, and in some cases even parts of parts. In total, there are 25k images of complex everyday scenes containing a variety of objects in their natural spatial context. On average there are 19.5 instances and 10.5 object classes per image. Based on ADE20K, we construct benchmarks for scene parsing and instance segmentation. We provide baseline performances on both of the benchmarks and re-implement state-of-the-art models for open source. We further evaluate the effect of synchronized batch normalization and find that a reasonably large batch size is crucial for the semantic segmentation performance. We show that the networks trained on ADE20K are able to segment a wide variety of scenes and objects.
Sync-DRAW: Automatic Video Generation using Deep Recurrent Attentive Architectures. This paper introduces a novel approach for generating videos called Synchronized Deep Recurrent Attentive Writer (Sync-DRAW). Sync-DRAW can also perform text-to-video generation which, to the best of our knowledge, makes it the first approach of its kind. It combines a Variational Autoencoder (VAE) with a Recurrent Attention Mechanism in a novel manner to create a temporally dependent sequence of frames that are gradually formed over time. The recurrent attention mechanism in Sync-DRAW attends to each individual frame of the video in synchronization, while the VAE learns a latent distribution for the entire video at the global level. Our experiments with Bouncing MNIST, KTH and UCF-101 suggest that Sync-DRAW is efficient in learning the spatial and temporal information of the videos and generates frames with high structural integrity, and can generate videos from simple captions on these datasets.
Dynamic Facial Expression Generation on Hilbert Hypersphere With Conditional Wasserstein Generative Adversarial Nets In this work, we propose a novel approach for generating videos of the six basic facial expressions given a neutral face image. We propose to exploit the face geometry by modeling the facial landmarks motion as curves encoded as points on a hypersphere. By proposing a conditional version of manifold-valued Wasserstein generative adversarial network (GAN) for motion generation on the hypersphere, w...
Cross-MPI: Cross-scale Stereo for Image Super-Resolution using Multiplane Images Various combinations of cameras enrich computational photography, among which reference-based superresolution (RefSR) plays a critical role in multiscale imaging systems. However, existing RefSR approaches fail to accomplish high-fidelity super-resolution under a large resolution gap, e.g., 8x upscaling, due to the lower consideration of the underlying scene structure. In this paper, we aim to solve the RefSR problem in actual multiscale camera systems inspired by multiplane image (MPI) representation. Specifically, we propose Cross-MPI, an end-to-end RefSR network composed of a novel plane-aware attention-based MPI mechanism, a multiscale guided upsampling module as well as a super-resolution (SR) synthesis and fusion module. Instead of using a direct and exhaustive matching between the cross-scale stereo, the proposed plane-aware attention mechanism fully utilizes the concealed scene structure for efficient attention-based correspondence searching. Further combined with a gentle coarse-to-fine guided upsampling strategy, the proposed Cross-MPI can achieve a robust and accurate detail transmission. Experimental results on both digitally synthesized and optical zoom cross-scale data show that the Cross-MPI framework can achieve superior performance against the existing RefSR methods and is a real fit for actual multiscale camera systems even with large-scale differences.
End-To-End Time-Lapse Video Synthesis From A Single Outdoor Image Time-lapse videos usually contain visually appealing content but are often difficult and costly to create. In this paper, we present an end-to-end solution to synthesize a time-lapse video from a single outdoor image using deep neural networks. Our key idea is to train a conditional generative adversarial network based on existing datasets of time-lapse videos and image sequences. We propose a multi-frame joint conditional generation framework to effectively learn the correlation between the illumination change of an outdoor scene and the time of the day. We further present a multi-domain training scheme for robust training of our generative models from two datasets with different distributions and missing timestamp labels. Compared to alternative time-lapse video synthesis algorithms, our method uses the timestamp as the control variable and does not require a reference video to guide the synthesis of the final output. We conduct ablation studies to validate our algorithm and compare with state-of-the-art techniques both qualitatively and quantitatively.
Sequence to Sequence Learning with Neural Networks. Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.
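A minimal PyTorch sketch of the encoder-decoder idea is given below: one LSTM encodes the (reversed) source sequence into its final hidden state, and a second LSTM decodes the target conditioned on that state. The vocabulary sizes, embedding and hidden dimensions are illustrative assumptions; the original system used deeper and much larger LSTMs together with beam-search decoding.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal encoder-decoder sketch in the spirit of the paper: an LSTM encodes
    the source sequence into its final (h, c) state, and a second LSTM decodes the
    target sequence conditioned on that state. Sizes here are illustrative only."""
    def __init__(self, src_vocab, tgt_vocab, emb=256, hidden=512, layers=2):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, layers, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, layers, batch_first=True)
        self.proj = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # The paper reverses the source sentence; the same trick is applied here.
        _, state = self.encoder(self.src_emb(torch.flip(src_ids, dims=[1])))
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.proj(dec_out)  # logits over the target vocabulary

model = Seq2Seq(src_vocab=10000, tgt_vocab=10000)
logits = model(torch.randint(0, 10000, (4, 12)), torch.randint(0, 10000, (4, 15)))
```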
A General Equilibrium Model for Industries with Price and Service Competition This paper develops a stochastic general equilibrium inventory model for an oligopoly, in which all inventory constraint parameters are endogenously determined. We propose several systems of demand processes whose distributions are functions of all retailers' prices and all retailers' service levels. We proceed with the investigation of the equilibrium behavior of infinite-horizon models for industries facing this type of generalized competition, under demand uncertainty.We systematically consider the following three competition scenarios. (1) Price competition only: Here, we assume that the firms' service levels are exogenously chosen, but characterize how the price and inventory strategy equilibrium vary with the chosen service levels. (2) Simultaneous price and service-level competition: Here, each of the firms simultaneously chooses a service level and a combined price and inventory strategy. (3) Two-stage competition: The firms make their competitive choices sequentially. In a first stage, all firms simultaneously choose a service level; in a second stage, the firms simultaneously choose a combined pricing and inventory strategy with full knowledge of the service levels selected by all competitors. We show that in all of the above settings a Nash equilibrium of infinite-horizon stationary strategies exists and that it is of a simple structure, provided a Nash equilibrium exists in a so-called reduced game.We pay particular attention to the question of whether a firm can choose its service level on the basis of its own (input) characteristics (i.e., its cost parameters and demand function) only. We also investigate under which of the demand models a firm, under simultaneous competition, responds to a change in the exogenously specified characteristics of the various competitors by either: (i) adjusting its service level and price in the same direction, thereby compensating for price increases (decreases) by offering improved (inferior) service, or (ii) adjusting them in opposite directions, thereby simultaneously offering better or worse prices and service.
Mobile cloud computing: A survey Despite increasing usage of mobile computing, exploiting its full potential is difficult due to its inherent problems such as resource scarcity, frequent disconnections, and mobility. Mobile cloud computing can address these problems by executing mobile applications on resource providers external to the mobile device. In this paper, we provide an extensive survey of mobile cloud computing research, while highlighting the specific concerns in mobile cloud computing. We present a taxonomy based on the key issues in this area, and discuss the different approaches taken to tackle these issues. We conclude the paper with a critical analysis of challenges that have not yet been fully met, and highlight directions for future work.
Eye-vergence visual servoing enhancing Lyapunov-stable trackability Visual servoing methods for the hand-eye configuration are vulnerable to the hand's dynamical oscillation, since nonlinear dynamical effects of the whole manipulator work against stable tracking ability (trackability). Our proposal to solve this problem is that the controller for visual servoing of the hand and the one for eye-vergence should be separated and designed independently, decoupled from each other, where the trackability is verified by Lyapunov analysis. The effectiveness of the decoupled hand and eye-vergence visual servoing method is then evaluated through simulations incorporating the actual dynamics of a 7-DoF robot with an additional 3 DoF for the eye-vergence mechanism, using amplitude and phase frequency analysis.
An improved E-DRM scheme for mobile environments. With the rapid development of information science and network technology, the Internet has become an important platform for the dissemination of digital content, which can be easily copied and distributed through the Internet. Although convenience is increased, this causes significant damage to authors of digital content. A digital rights management system (DRM system) is an access control system designed to protect digital content and prevent illegal users from maliciously spreading it. An Enterprise Digital Rights Management system (E-DRM system) is a DRM system that prevents unauthorized users from stealing an enterprise's confidential data. User authentication is the most important method to ensure digital rights management. In order to verify the validity of a user, biometrics-based authentication protocols are widely used because the biological characteristics of each user are unique. By using biometric identification, the correctness of a user's identity can be ensured. In addition, due to the popularity of mobile devices and the Internet, users can access digital content and network information at any time and from anywhere. Recently, Mishra et al. proposed an anonymous and secure biometric-based enterprise digital rights management system for mobile environments. Although biometrics-based authentication is used to prevent users from being forged, the anonymity of users and the preservation of digital content are not ensured in their system. Therefore, in this paper, we propose a more efficient and secure biometric-based enterprise digital rights management system with user anonymity for mobile environments.
Intention-detection strategies for upper limb exosuits: model-based myoelectric vs dynamic-based control The cognitive human-robot interaction between an exosuit and its wearer plays a key role in determining both the biomechanical effects of the device on movements and its perceived effectiveness. There is a lack of evidence, however, on the comparative performance of different control methods implemented on the same device. Here, we compare two different control approaches on the same robotic suit: a model-based myoelectric control (myoprocessor), which estimates the joint torque from the activation of target muscles, and a dynamic-based control that provides support against gravity using an inverse dynamic model. Tested on a cohort of four healthy participants, assistance from the exosuit results in a marked reduction in the effort of muscles working against gravity with both control approaches (peak reduction of 68.6±18.8% for the dynamic arm model and 62.4±25.1% for the myoprocessor), when compared to an unpowered condition. Neither of the two controllers had an effect on the performance of their users in a joint-angle tracking task (peak errors of 15.4° and 16.4° for the dynamic arm model and myoprocessor, respectively, compared to 13.1° in the unpowered condition). However, our results highlight the remarkable adaptability of the myoprocessor to seamlessly adapt to changing external dynamics.
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
Pareto-Optimization for Scheduling of Crude Oil Operations in Refinery via Genetic Algorithm. With the interaction of discrete-event and continuous processes, it is challenging to schedule crude oil operations in a refinery. This paper studies the optimization problem of finding a detailed schedule to realize a given refining schedule. This is a multiobjective optimization problem with a combinatorial nature. Since the original problem cannot be directly solved by using heuristics and meta-heuristics, the problem is transformed into an assignment problem of charging tanks and distillers. Based on such a transformation, by analyzing the properties of the problem, this paper develops a chromosome that can describe a feasible schedule such that meta-heuristics can be applied. Then, it innovatively adopts an improved nondominated sorting genetic algorithm to solve the problem for the first time. An industrial case study is used to test the proposed solution method. The results show that the method makes a significant performance improvement and is applicable to real-life refinery scheduling problems.
People detection and tracking from aerial thermal views Detection and tracking of people in visible-light images has been subject to extensive research in the past decades with applications ranging from surveillance to search-and-rescue. Following the growing availability of thermal cameras and the distinctive thermal signature of humans, research effort has been focusing on developing people detection and tracking methodologies applicable to this sensing modality. However, a plethora of challenges arise on the transition from visible-light to thermal images, especially with the recent trend of employing thermal cameras onboard aerial platforms (e.g. in search-and-rescue research) capturing oblique views of the scenery. This paper presents a new, publicly available dataset of annotated thermal image sequences, posing a multitude of challenges for people detection and tracking. Moreover, we propose a new particle filter based framework for tracking people in aerial thermal images. Finally, we evaluate the performance of this pipeline on our dataset, incorporating a selection of relevant, state-of-the-art methods and present a comprehensive discussion of the merits spawning from our study.
A multi-objective model for the green capacitated location-routing problem considering environmental impact. •The Capacitated Location-Routing Problem considering environmental impact is proposed.•A new model for computing greenhouse gas emissions in vehicle routing is proposed.•The Green CLRP is formulated as a bi-objective mixed integer linear programming.•Using more vehicles can lead to large fuel economy in the long term and hence less emission.•More vehicles in shorter routes and prioritizing high demand clients lead to less emission.
Persistent UAV delivery logistics: MILP formulation and efficient heuristic. •UAV delivery logistics with multiple recharge/reload stations was considered.•UAVs visit station, refill consumables and return to service persistently.•Amount of loaded product effects on the flight time of UAVs during delivery.•Validity of the proposed model was demonstrated via island area delivery example.•Performance of mathematical formulation and heuristic were tested and compared.
Multiperiod Asset Allocation Considering Dynamic Loss Aversion Behavior of Investors In order to study the effect of loss aversion behavior on multiperiod investment decisions, we first introduce some psychological characteristics of dynamic loss aversion and then construct a multiperiod portfolio model by considering a conditional value-at-risk (CVaR) constraint. We then design a variable neighborhood search-based hybrid genetic algorithm to solve the model. We finally study the optimal asset allocation and investment performance of the proposed multiperiod model. Some important metrics, such as the initial loss aversion coefficient and reference point, are used to test the robustness of the model. The result shows that investors with loss aversion tend to centralize most of their wealth and have a better performance than rational investors. The effects of CVaR on investment performance are given. When a market is falling, investors with a higher degree of risk aversion can avoid a large loss and can obtain higher gains.
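Since the model uses a CVaR constraint, it may help to recall how CVaR is evaluated on a finite set of scenarios: it is the mean loss in the worst (1 - alpha) tail of the scenario distribution. The snippet below is a generic empirical estimator, not the paper's formulation; the scenario losses and confidence level are hypothetical.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical Conditional Value-at-Risk: the mean loss in the worst (1 - alpha)
    tail of the scenario distribution. Scenario losses are assumed to be given;
    the paper uses CVaR as a constraint inside a multiperiod portfolio model."""
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, alpha)          # Value-at-Risk at level alpha
    tail = losses[losses >= var]
    return float(tail.mean())

# Hypothetical scenario losses of a candidate portfolio (negative = gain).
rng = np.random.default_rng(0)
scenario_losses = rng.normal(-0.01, 0.05, size=10000)
print(cvar(scenario_losses, alpha=0.95))
```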
Cooperative Aerial-Ground Vehicle Route Planning With Fuel Constraints for Coverage Applications. Low-cost unmanned aerial vehicles (UAVs) need multiple refuels to accomplish large area coverage. We propose the use of a mobile ground vehicle (GV), constrained to travel on a given road network, as a refueling station for the UAV. Determining optimal routes for a UAV and GV, and selecting rendezvous locations for refueling to minimize coverage time is NP-hard. We develop a two-stage strategy for...
Hamming Embedding and Weak Geometric Consistency for Large Scale Image Search This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy.
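A rough sketch of the Hamming-embedding matching step follows: each local descriptor assigned to a visual word carries a short binary signature, and only database descriptors whose signature lies within a small Hamming radius of the query signature are kept as matches. The projection matrix, thresholds, signature length and radius below are toy stand-ins for the values learned and tuned in the paper.

```python
import numpy as np

def binary_signature(descriptor, projection, thresholds):
    """Hamming-embedding style signature: project the local descriptor and
    binarize each component against a per-visual-word threshold.
    `projection` and `thresholds` would normally be learned offline."""
    return (projection @ descriptor > thresholds).astype(np.uint8)

def hamming_match(sig_query, sig_db, max_dist=24):
    """Keep only database descriptors (same visual word) whose signature lies
    within a Hamming ball of radius `max_dist` around the query signature."""
    dists = np.count_nonzero(sig_db != sig_query, axis=1)
    return np.nonzero(dists <= max_dist)[0]

# Toy example with 64-bit signatures and hypothetical parameters.
rng = np.random.default_rng(1)
P, t = rng.normal(size=(64, 128)), np.zeros(64)
q = binary_signature(rng.normal(size=128), P, t)
db = rng.integers(0, 2, size=(1000, 64), dtype=np.uint8)
kept = hamming_match(q, db)
```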
Microsoft COCO: Common Objects in Context We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
The Whale Optimization Algorithm. The Whale Optimization Algorithm inspired by humpback whales is proposed. The WOA algorithm is benchmarked on 29 well-known test functions. The results on the unimodal functions show the superior exploitation of WOA. The exploration ability of WOA is confirmed by the results on multimodal functions. The results on structural design problems confirm the performance of WOA in practice. This paper proposes a novel nature-inspired meta-heuristic optimization algorithm, called Whale Optimization Algorithm (WOA), which mimics the social behavior of humpback whales. The algorithm is inspired by the bubble-net hunting strategy. WOA is tested with 29 mathematical optimization problems and 6 structural design problems. Optimization results prove that the WOA algorithm is very competitive compared to state-of-the-art meta-heuristic algorithms as well as conventional methods. The source code of the WOA algorithm is publicly available at http://www.alimirjalili.com/WOA.html
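A compact sketch of the WOA update loop is given below, following the commonly published update equations (shrinking encircling, spiral bubble-net move, and random-whale exploration). Population size, iteration budget and the treatment of the |A| condition are simplifications; the source code released at the URL above is the reference implementation.

```python
import numpy as np

def whale_optimization(f, dim, bounds, n_whales=30, iters=200, b=1.0, seed=0):
    """Compact Whale Optimization Algorithm sketch: encircling prey, bubble-net
    spiral update, and random search for exploration. Follows the standard WOA
    update equations; constants and stopping rule are simplified."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_whales, dim))
    best = min(X, key=f).copy()
    for t in range(iters):
        a = 2.0 * (1 - t / iters)                      # a decreases linearly 2 -> 0
        for i in range(n_whales):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):              # exploitation: encircle best
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                                   # exploration: random whale
                    rand = X[rng.integers(n_whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                                       # bubble-net spiral update
                l = rng.uniform(-1, 1, dim)
                X[i] = np.abs(best - X[i]) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best

best = whale_optimization(lambda x: np.sum(x ** 2), dim=5, bounds=(-10, 10))
```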
Collaborative privacy management The landscape of the World Wide Web with all its versatile services heavily relies on the disclosure of private user information. Unfortunately, the growing amount of personal data collected by service providers poses a significant privacy threat for Internet users. Targeting growing privacy concerns of users, privacy-enhancing technologies emerged. One goal of these technologies is the provision of tools that facilitate a more informative decision about personal data disclosures. A famous PET representative is the PRIME project that aims for a holistic privacy-enhancing identity management system. However, approaches like the PRIME privacy architecture require service providers to change their server infrastructure and add specific privacy-enhancing components. In the near future, service providers are not expected to alter internal processes. Addressing the dependency on service providers, this paper introduces a user-centric privacy architecture that enables the provider-independent protection of personal data. A central component of the proposed privacy infrastructure is an online privacy community, which facilitates the open exchange of privacy-related information about service providers. We characterize the benefits and the potentials of our proposed solution and evaluate a prototypical implementation.
Cognitive Cars: A New Frontier for ADAS Research This paper provides a survey of recent works on cognitive cars with a focus on driver-oriented intelligent vehicle motion control. The main objective here is to clarify the goals and guidelines for future development in the area of advanced driver-assistance systems (ADASs). Two major research directions are investigated and discussed in detail: 1) stimuli–decisions–actions, which focuses on the driver side, and 2) perception enhancement–action-suggestion–function-delegation, which emphasizes the ADAS side. This paper addresses the important achievements and major difficulties of each direction and discusses how to combine the two directions into a single integrated system to obtain safety and comfort while driving. Other related topics, including driver training and infrastructure design, are also studied.
Completely Pinpointing the Missing RFID Tags in a Time-Efficient Way Radio Frequency Identification (RFID) technology has been widely used in inventory management in many scenarios, e.g., warehouses, retail stores, hospitals, etc. This paper investigates a challenging problem of complete identification of missing tags in large-scale RFID systems. Although this problem has attracted extensive attention from academy and industry, the existing work can hardly satisfy the stringent real-time requirements. In this paper, a Slot Filter-based Missing Tag Identification (SFMTI) protocol is proposed to reconcile some expected collision slots into singleton slots and filter out the expected empty slots as well as the unreconcilable collision slots, thereby achieving the improved time-efficiency. The theoretical analysis is conducted to minimize the execution time of the proposed SFMTI. We then propose a cost-effective method to extend SFMTI to the multi-reader scenarios. The extensive simulation experiments and performance results demonstrate that the proposed SFMTI protocol outperforms the most promising Iterative ID-free Protocol (IIP) by reducing nearly 45% of the required execution time, and is just within a factor of 1.18 from the lower bound of the minimum execution time.
A blind medical image watermarking: DWT-SVD based robust and secure approach for telemedicine applications. In this paper, a blind image watermarking scheme based on discrete wavelet transform (DWT) and singular value decomposition (SVD) is proposed. In this scheme, DWT is applied on the ROI (region of interest) of the medical image to get different frequency subbands of its wavelet decomposition. On the low frequency subband LL of the ROI, block-SVD is applied to get different singular matrices. A pair of elements with similar values is identified from the left singular value matrix of these selected blocks. The values of these pairs are modified using a certain threshold to embed a bit of watermark content. An appropriate threshold is chosen to achieve the imperceptibility and robustness of the medical image and watermark contents, respectively. For authentication and identification of the original medical image, one watermark image (logo) and another text watermark have been used. The watermark image provides authentication, whereas the text data represents the electronic patient record (EPR) for identification. At the receiving end, blind recovery of both watermark contents is performed by a comparison scheme similar to the one used during the embedding process. The proposed algorithm is applied on various groups of medical images like X-ray, CT scan and mammography. This scheme offers better visibility of the watermarked image and recovery of the watermark content due to the DWT-SVD combination. Moreover, the use of a Hamming error correcting code (ECC) on the EPR text bits reduces the BER and thus provides better recovery of the EPR. The performance of the proposed algorithm with EPR data coded by the Hamming code is compared with the BCH error correcting code, and it is found that the latter performs better. A result analysis shows that the imperceptibility of the watermarked image is good, as the PSNR is above 43 dB and the WPSNR is above 52 dB for all sets of images. In addition, the robustness of the scheme is better than that of an existing scheme for a similar set of medical images in terms of normalized correlation coefficient (NCC) and bit error rate (BER). An analysis is also carried out to verify the performance of the proposed scheme for different sizes of watermark content (image and EPR data). It is observed from the analysis that the proposed scheme is also appropriate for watermarking of color images. Using the proposed scheme, watermark contents are extracted successfully under various noise attacks like JPEG compression, filtering, Gaussian noise, salt-and-pepper noise, cropping and rotation. Performance comparison of the proposed scheme with existing schemes shows that the proposed scheme has better robustness against different types of attacks. Moreover, the proposed scheme is also robust under a set of benchmark attacks known as checkmark attacks.
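To illustrate the general DWT-plus-block-SVD embedding pipeline, the sketch below performs a one-level Haar DWT of the ROI, splits the LL subband into blocks, and quantizes the largest singular value of each block to carry one watermark bit. This is a simplified stand-in: the paper's actual rule modifies a pair of entries of the left singular matrix U against a threshold, and the block size and quantization step used here are arbitrary.

```python
import numpy as np
import pywt

def embed_bits(roi, bits, block=8, delta=12.0):
    """Simplified DWT-SVD embedding: take the LL subband of a 1-level Haar DWT of
    the ROI, split it into blocks, and quantize the largest singular value of each
    block to encode one watermark bit (quantization-index modulation). The paper's
    rule modifies a pair of entries of the left singular matrix U instead."""
    LL, details = pywt.dwt2(roi.astype(float), "haar")
    h, w = LL.shape
    k = 0
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            if k >= len(bits):
                break
            B = LL[i:i + block, j:j + block]
            U, S, Vt = np.linalg.svd(B)
            q = np.round(S[0] / delta)
            # Force the quantized largest singular value to the bit's parity.
            if int(q) % 2 != bits[k]:
                q += 1
            S[0] = q * delta
            LL[i:i + block, j:j + block] = U @ np.diag(S) @ Vt
            k += 1
    return pywt.idwt2((LL, details), "haar")

roi = np.random.default_rng(2).integers(0, 256, (128, 128))
watermarked = embed_bits(roi, [1, 0, 1, 1])
```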
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies have focused on the structure design and assistance force optimization of soft LLEs, little work has been conducted on the hardware circuit design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and to introduce its hardware circuit design. A soft LLE for hip flexion assistance and a scalable hardware circuit system were proposed. To assess the efficacy of the soft LLE, experimental tests evaluating the sensor data acquisition, force tracking performance, lower limb muscle activity and metabolic cost were conducted. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
1.2
0.2
0.2
0.2
0.2
0.066667
0
0
0
0
0
0
0
0
Deep Reinforcement Learning for Energy-Efficient Federated Learning in UAV-Enabled Wireless Powered Networks Federated learning (FL) is a promising solution to privacy preservation for data-driven deep learning approaches. However, enabling FL in unmanned aerial vehicle (UAV)-assisted wireless networks is still challenging due to limited resources and battery capacity in the UAV and user devices. In this regard, we propose a deep reinforcement learning (DRL)-based framework for joint UAV placement and re...
Artificial fish swarm algorithm: a survey of the state-of-the-art, hybridization, combinatorial and indicative applications AFSA (artificial fish swarm algorithm) is one of the best optimization methods among the swarm intelligence algorithms. This algorithm is inspired by the collective movement of fish and their various social behaviors. Based on a series of instinctive behaviors, the fish always try to maintain their colonies and accordingly demonstrate intelligent behaviors. Searching for food, immigration and dealing with dangers all happen in a social form, and interactions between all fish in a group result in an intelligent social behavior. This algorithm has many advantages, including high convergence speed, flexibility, fault tolerance and high accuracy. This paper is a review of the AFSA algorithm and describes its evolution along with all improvements, its combination with various methods as well as its applications. There are many optimization methods which have an affinity with this method, and such combinations will improve its performance. Its disadvantages include high time complexity, lack of balance between global and local search, and a lack of benefiting from the experiences of group members for the next movements.
A dynamic N threshold prolong lifetime method for wireless sensor nodes. Ubiquitous computing is a technology that makes many computers available throughout the physical environment, at any place and at any time. These services tend to be invisible to users in everyday life. Ubiquitous computing uses sensors extensively to provide important information so that applications can adjust their behavior. A Wireless Sensor Network (WSN) has been applied to implement such an architecture. To ensure continuous service, a dynamic N threshold power saving method for WSNs is developed. A threshold N has been derived to obtain minimum power consumption for the sensor node while considering each different data arrival rate. We propose a theoretical analysis of the probability variation for each state considering different arrival rates, service rates and collision probabilities. Several experiments have been conducted to demonstrate the effectiveness of our research. Our method can be applied to prolong the service time of a ubiquitous computing network to cope with the network disconnection issue.
Fuzzy Mathematical Programming and Self-Adaptive Artificial Fish Swarm Algorithm for Just-in-Time Energy-Aware Flow Shop Scheduling Problem With Outsourcing Option Flow shop scheduling (FSS) problem constitutes a major part of production planning in every manufacturing organization. It aims at determining the optimal sequence of processing jobs on available machines within a given customer order. In this article, a novel biobjective mixed-integer linear programming (MILP) model is proposed for FSS with an outsourcing option and just-in-time delivery in order to simultaneously minimize the total cost of the production system and total energy consumption. Each job is considered to be either scheduled in-house or to be outsourced to one of the possible subcontractors. To efficiently solve the problem, a hybrid technique is proposed based on an interactive fuzzy solution technique and a self-adaptive artificial fish swarm algorithm (SAAFSA). The proposed model is treated as a single objective MILP using a multiobjective fuzzy mathematical programming technique based on the ε-constraint, and SAAFSA is then applied to provide Pareto optimal solutions. The obtained results demonstrate the usefulness of the suggested methodology and high efficiency of the algorithm in comparison with CPLEX solver in different problem instances. Finally, a sensitivity analysis is implemented on the main parameters to study the behavior of the objectives according to the real-world conditions.
Energy-Efficient Relay-Selection-Based Dynamic Routing Algorithm for IoT-Oriented Software-Defined WSNs In this article, a dynamic routing algorithm based on energy-efficient relay selection (RS), referred to as DRA-EERS, is proposed to adapt to the higher dynamics in time-varying software-defined wireless sensor networks (SDWSNs) for the Internet-of-Things (IoT) applications. First, the time-varying features of SDWSNs are investigated from which the state-transition probability (STP) of the node is calculated based on a Markov chain. Second, a dynamic link weight is designed for DRA-EERS by incorporating both the link reward and the link cost, where the link reward is related to the link energy efficiency (EE) and the node STP, while the link cost is affected by the locations of nodes. Moreover, one adjustable coefficient is used to balance the link reward and the link cost. Finally, the energy-efficient routing problem can be formulated as an optimization problem, and DRA-EERS is performed to find the best relay according to the energy-efficient RS criteria derived from the designed link weight. The simulation results demonstrate that the path EE obtained by DRA-EERS through an available coefficient adjustment outperforms that by Dijkstra's shortest path algorithm. Again, a tradeoff between the EE and the throughput can be achieved by adjusting the coefficient of the link weight, i.e., increasing the impact of the link reward to improve the EE, and otherwise, to improve the throughput.
Energy-Efficient Optimization for Wireless Information and Power Transfer in Large-Scale MIMO Systems Employing Energy Beamforming In this letter, we consider a large-scale multiple-input multiple-output (MIMO) system where the receiver should harvest energy from the transmitter by wireless power transfer to support its wireless information transmission. The energy beamforming in the large-scale MIMO system is utilized to address the challenging problem of long-distance wireless power transfer. Furthermore, considering the limitation of the power in such a system, this letter focuses on the maximization of the energy efficiency of information transmission (bit per Joule) while satisfying the quality-of-service (QoS) requirement, i.e. delay constraint, by jointly optimizing transfer duration and transmit power. By solving the optimization problem, we derive an energy-efficient resource allocation scheme. Numerical results validate the effectiveness of the proposed scheme.
Accurate Self-Localization in RFID Tag Information Grids Using FIR Filtering Grid navigation spaces nested with the radio-frequency identification (RFID) tags are promising for industrial and other needs, because each tag can deliver information about a local two-dimensional or three-dimensional surrounding. The approach, however, requires high accuracy in vehicle self-localization. Otherwise, errors may lead to collisions; possibly even fatal. We propose a new extended finite impulse response (EFIR) filtering algorithm and show that it meets this need. The EFIR filter requires an optimal averaging interval, but does not involve the noise statistics which are often not well known to the engineer. It is more accurate than the extended Kalman filter (EKF) under real operation conditions and its iterative algorithm has the Kalman form. Better performance of the proposed EFIR filter is demonstrated based on extensive simulations in a comparison to EKF, which is widely used in RFID tag grids. We also show that errors in noise covariances may provoke divergence in EKF, whereas the EFIR filter remains stable and is thus more robust.
Evolutionary computation: comments on the history and current state Evolutionary computation has started to receive significant attention during the last decade, although the origins can be traced back to the late 1950s. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) (with links to genetic programming (GP) and classifier systems (CS)), evolution strategies (ES), and evolutionary programming (EP) by analysis and comparison of their most important constituents (i.e. representations, variation operators, reproduction, and selection mechanism). Finally, we give a brief overview of the manifold of application domains, although this necessarily must remain incomplete.
Supporting social navigation on the World Wide Web This paper discusses a navigation behavior on Internet information services, in particular the World Wide Web, which is characterized by pointing out of information using various communication tools. We call this behavior social navigation as it is based on communication and interaction with other users, be that through email, or any other means of communication. Social navigation phenomena are quite common although most current tools (like Web browsers or email clients) offer very little support for it. We describe why social navigation is useful and how it can be better supported in future systems. We further describe two prototype systems that, although originally not designed explicitly as tools for social navigation, provide features that are typical for social navigation systems. One of these systems, the Juggler system, is a combination of a textual virtual environment and a Web client. The other system is a prototype of a Web-hotlist organizer, called Vortex. We use both systems to describe fundamental principles of social navigation systems.
Proofs of Storage from Homomorphic Identification Protocols Proofs of storage (PoS) are interactive protocols allowing a client to verify that a server faithfully stores a file. Previous work has shown that proofs of storage can be constructed from any homomorphic linear authenticator (HLA). The latter, roughly speaking, are signature/message authentication schemes where `tags' on multiple messages can be homomorphically combined to yield a `tag' on any linear combination of these messages. We provide a framework for building public-key HLAs from any identification protocol satisfying certain homomorphic properties. We then show how to turn any public-key HLA into a publicly-verifiable PoS with communication complexity independent of the file length and supporting an unbounded number of verifications. We illustrate the use of our transformations by applying them to a variant of an identification protocol by Shoup, thus obtaining the first unbounded-use PoS based on factoring (in the random oracle model).
Design, Implementation, and Experimental Results of a Quaternion-Based Kalman Filter for Human Body Motion Tracking Real-time tracking of human body motion is an important technology in synthetic environments, robotics, and other human-computer interaction applications. This paper presents an extended Kalman filter designed for real-time estimation of the orientation of human limb segments. The filter processes data from small inertial/magnetic sensor modules containing triaxial angular rate sensors, accelerometers, and magnetometers. The filter represents rotation using quaternions rather than Euler angles or axis/angle pairs. Preprocessing of the acceleration and magnetometer measurements using the Quest algorithm produces a computed quaternion input for the filter. This preprocessing reduces the dimension of the state vector and makes the measurement equations linear. Real-time implementation and testing results of the quaternion-based Kalman filter are presented. Experimental results validate the filter design, and show the feasibility of using inertial/magnetic sensor modules for real-time human body motion tracking
Reinforcement Q-learning for optimal tracking control of linear discrete-time systems with unknown dynamics. In this paper, a novel approach based on the Q-learning algorithm is proposed to solve the infinite-horizon linear quadratic tracker (LQT) for unknown discrete-time systems in a causal manner. It is assumed that the reference trajectory is generated by a linear command generator system. An augmented system composed of the original system and the command generator is constructed and it is shown that the value function for the LQT is quadratic in terms of the state of the augmented system. Using the quadratic structure of the value function, a Bellman equation and an augmented algebraic Riccati equation (ARE) for solving the LQT are derived. In contrast to the standard solution of the LQT, which requires the solution of an ARE and a noncausal difference equation simultaneously, in the proposed method the optimal control input is obtained by only solving an augmented ARE. A Q-learning algorithm is developed to solve online the augmented ARE without any knowledge about the system dynamics or the command generator. Convergence to the optimal solution is shown. A simulation example is used to verify the effectiveness of the proposed control scheme.
Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network. In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (including data from 63 patients with 34,281 events) and testing (including data from 19 patients with 8571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F-score of 0.99 were attained with the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
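A minimal PyTorch model in the spirit of the described architecture is sketched below: stacked Conv1d + ReLU + max-pooling + dropout blocks over a single-lead ECG segment, followed by a binary classification head. The number of filters, kernel size, segment length and pooling factor are illustrative assumptions, not the optimized values from the study.

```python
import torch
import torch.nn as nn

class OSANet(nn.Module):
    """Minimal 1D-CNN classifier for single-lead ECG segments, loosely following
    the recipe in the abstract (Conv1d + ReLU + max-pooling + dropout stacks and a
    binary head). Layer sizes and the segment length are illustrative only."""
    def __init__(self, in_len=6000, channels=(16, 32, 64), dropout=0.3):
        super().__init__()
        layers, c_in, feat_len = [], 1, in_len
        for c_out in channels:
            layers += [nn.Conv1d(c_in, c_out, kernel_size=7, padding=3),
                       nn.ReLU(), nn.MaxPool1d(4), nn.Dropout(dropout)]
            c_in = c_out
            feat_len //= 4                      # each pooling stage shrinks by 4
        self.features = nn.Sequential(*layers)
        self.head = nn.Linear(c_in * feat_len, 2)

    def forward(self, x):                       # x: (batch, 1, samples)
        z = self.features(x)
        return self.head(z.flatten(1))

model = OSANet()
logits = model(torch.randn(8, 1, 6000))  # e.g. 8 one-minute segments at 100 Hz
```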
Energy harvesting algorithm considering max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with poor energy always have a negative effect on the data rate or max flow. These nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we assume an energy harvesting WSN environment to investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy harvesting WSN environments, sensor nodes can obtain energy replenishment by using MCs or by collecting energy from nature themselves. In our research, we use MCs to improve the energy of the sensor nodes by performing multiple rounds of unified scheduling, and finally achieve the purpose of increasing the max flow at the sinks. Firstly, we model this problem as a Linear Program (LP) to search for the max flow in one round of charging scheduling and prove that the problem is NP-hard. In order to solve the problem, we propose a heuristic approach: deploying MCs in units of paths, with the lowest-energy node given priority. To reduce the energy consumption of MCs and increase the charging efficiency, we also take the optimization of MCs’ moving distance into consideration. Finally, we extend the method to multiple rounds of scheduling, called BottleNeck. Simulation results show that BottleNeck performs well at increasing max flow.
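The evaluation step, computing the max flow that a given energy state supports, can be sketched with an off-the-shelf max-flow solver. In the fragment below, edge capacities are capped by the sending node's residual energy divided by a per-unit transmission cost, and recharged nodes are treated as (nearly) unconstrained; the charger scheduling and the LP formulation themselves are not modeled, and all parameters are hypothetical.

```python
import networkx as nx

def max_flow_after_charging(links, energy, per_unit_energy, source, sink, charged=()):
    """Build a flow network whose edge capacities are limited by the sender's
    residual energy (or effectively unlimited if the node was recharged by an MC)
    and compute the max flow at the sink. This only illustrates how charging a
    bottleneck node raises the achievable flow; it is not the paper's algorithm."""
    G = nx.DiGraph()
    for u, v, cap in links:
        e = 1e9 if u in charged else energy[u]        # 1e9 ~ "fully recharged"
        G.add_edge(u, v, capacity=min(cap, e / per_unit_energy))
    value, _ = nx.maximum_flow(G, source, sink)
    return value

links = [("s", "a", 10), ("a", "b", 10), ("b", "t", 10)]
energy = {"s": 100.0, "a": 3.0, "b": 100.0}
print(max_flow_after_charging(links, energy, 1.0, "s", "t"))              # limited by node a
print(max_flow_after_charging(links, energy, 1.0, "s", "t", charged={"a"}))  # bottleneck removed
```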
1.2
0.2
0.2
0.2
0.2
0.04
0
0
0
0
0
0
0
0
Mechanical design of a compact Serial Variable Stiffness Actuator (SVSA) based on lever mechanism Compliant actuators are widely accepted for physical human-robot interaction due to their safety, dynamic performance improvements and energy saving abilities. In this paper, based on the variable-ratio lever mechanism, a new kind of Serial Variable Stiffness Actuator (SVSA) is proposed that uses an Archimedean Spiral Relocation Mechanism (ASRM) to change the position of the pivot, implementing a large range of adjustable stiffness. The ASRM introduced here gives the SVSA design continuous stiffness adjustment ability and a simple mechanical structure. Within the Variable Stiffness Mechanism (VSM), two linear springs are assembled antagonistically on a spring shaft. Their displacements are perpendicular to the output link to transmit the spring force more efficiently. Stiffness modeling and analysis of the SVSA are carried out to cover a large deflection angle. The physical implementation of the SVSA shows that the output stiffness of the VSM changes from 1.72 to 150.56 Nm/rad using a linear spring with stiffness 1882 N/m, with the working range covering 0 to 360°. Control experiments also proved the wide-range stiffness adjustment ability of the SVSA.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
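The abstract does not restate the metric itself; as background, the standard BLEU score combines modified n-gram precisions $p_n$ (typically $n = 1,\dots,4$ with uniform weights $w_n$) with a brevity penalty $\mathrm{BP}$, where $c$ is the candidate length and $r$ the effective reference length:

$$\mathrm{BLEU} = \mathrm{BP} \cdot \exp\Big(\sum_{n=1}^{N} w_n \log p_n\Big), \qquad \mathrm{BP} = \begin{cases} 1 & \text{if } c > r, \\ e^{\,1-r/c} & \text{if } c \le r. \end{cases}$$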
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
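A minimal PyTorch sketch of the framewise bidirectional idea is shown below; the feature dimension, hidden size, and per-frame classification head are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class FramewiseBRNN(nn.Module):
    """Bidirectional RNN sketch: the sequence is processed simultaneously in the
    positive and negative time directions, and both hidden states are used at
    every frame. Sizes and the linear head are illustrative assumptions."""
    def __init__(self, n_features=26, hidden=64, n_classes=61):
        super().__init__()
        self.rnn = nn.RNN(n_features, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)  # concatenated forward + backward states

    def forward(self, x):          # x: (batch, time, features)
        h, _ = self.rnn(x)         # h: (batch, time, 2 * hidden)
        return self.out(h)         # per-frame class scores

scores = FramewiseBRNN()(torch.randn(4, 100, 26))  # e.g. 100 frames of acoustic features
```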
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended for Bob, and non-repudiation of receipt evidences destined for Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with trusted third party (TTP). For the latter ones we discuss the evolution of the TTP's involvement and, among others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probabilistically. Because there is no crossover rate or mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than the conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional one and other methods.
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Hitting the right paraphrases in good time We present a random-walk-based approach to learning paraphrases from bilingual parallel corpora. The corpora are represented as a graph in which a node corresponds to a phrase, and an edge exists between two nodes if their corresponding phrases are aligned in a phrase table. We sample random walks to compute the average number of steps it takes to reach a ranking of paraphrases with better ones being "closer" to a phrase of interest. This approach allows "feature" nodes that represent domain knowledge to be built into the graph, and incorporates truncation techniques to prevent the graph from growing too large for efficiency. Current approaches, by contrast, implicitly presuppose the graph to be bipartite, are limited to finding paraphrases that are of length two away from a phrase, and do not generally permit easy incorporation of domain knowledge. Manual evaluation of generated output shows that our approach outperforms the state-of-the-art system of Callison-Burch (2008).
An intelligent analyzer and understander of English The paper describes a working analysis and generation program for natural language, which handles paragraph length input. Its core is a system of preferential choice between deep semantic patterns, based on what we call “semantic density.” The system is contrasted with syntax oriented linguistic approaches, and with theorem proving approaches to the understanding problem.
Paraphrasing questions using given and new information The design and implementation of a paraphrase component for a natural language question-answering system (CO-OP) is presented. The component is used to produce a paraphrase of a user's question to the system, which is presented to the user before the question is evaluated and answered. A major point made is the role of given and new information in formulating a paraphrase that differs in a meaningful way from the user's question. A description is also given of the transformational grammar that is used by the paraphraser.
Docchat: An Information Retrieval Approach For Chatbot Engines Using Unstructured Documents Most current chatbot engines are designed to reply to user utterances based on existing utterance-response (or Q-R) pairs. In this paper, we present DocChat, a novel information retrieval approach for chatbot engines that can leverage unstructured documents, instead of Q-R pairs, to respond to utterances. A learning to rank model with features designed at different levels of granularity is proposed to measure the relevance between utterances and responses directly. We evaluate our proposed approach in both English and Chinese: (i) For English, we evaluate DocChat on WikiQA and QASent, two answer sentence selection tasks, and compare it with state-of-the-art methods. Reasonable improvements and good adaptability are observed. (ii) For Chinese, we compare DocChat with XiaoIce, a famous chitchat engine in China, and side-by-side evaluation shows that DocChat is a perfect complement for chatbot engines using Q-R pairs as main source of responses.
ParaBank: Monolingual Bitext Generation and Sentential Paraphrasing via Lexically-constrained Neural Machine Translation. We present PARABANK, a large-scale English paraphrase dataset that surpasses prior work in both quantity and quality. Following the approach of PARANMT (Wieting and Gimpel, 2018), we train a Czech-English neural machine translation (NMT) system to generate novel paraphrases of English reference sentences. By adding lexical constraints to the NMT decoding procedure, however, we are able to produce multiple high-quality sentential paraphrases per source sentence, yielding an English paraphrase resource with more than 4 billion generated tokens and exhibiting greater lexical diversity. Using human judgments, we also demonstrate that PARABANK's paraphrases improve over PARANMT on both semantic similarity and fluency. Finally, we use PARABANK to train a monolingual NMT model with the same support for lexically-constrained decoding for sentence rewriting tasks.
Plan, Write, and Revise: an Interactive System for Open-Domain Story Generation. Story composition is a challenging problem for machines and even for humans. We present a neural narrative generation system that interacts with humans to generate stories. Our system has different levels of human interaction, which enables us to understand at what stage of story-writing human collaboration is most productive, both to improving story quality and human engagement in the writing process. We compare different varieties of interaction in story-writing, story-planning, and diversity controls under time constraints, and show that increased types of human collaboration at both planning and writing stages results in a 10-50% improvement in story quality as compared to less interactive baselines. We also show an accompanying increase in user engagement and satisfaction with stories as compared to our own less interactive systems and to previous turn-taking approaches to interaction. Finally, we find that humans tasked with collaboratively improving a particular characteristic of a story are in fact able to do so, which has implications for future uses of human-in-the-loop systems.
Generating Sentences from a Continuous Space. The standard recurrent neural network language model (RNNLM) generates sentences one word at a time and does not work from an explicit global sentence representation. In this work, we introduce and study an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences. This factorization allows it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features. Samples from the prior over these sentence representations remarkably produce diverse and well-formed sentences through simple deterministic decoding. By examining paths through this latent space, we are able to generate coherent novel sentences that interpolate between known sentences. We present techniques for solving the difficult learning problem presented by this model, demonstrate its effectiveness in imputing missing words, explore many interesting properties of the model's latent sentence space, and present negative results on the use of the model in language modeling.
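Training maximizes the usual variational lower bound (ELBO), with an RNN encoder parameterizing $q_\phi(z \mid x)$ and an RNN decoder generating the sentence conditioned on the latent code $z$:

$$\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - \mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p(z)\big).$$

The "difficult learning problem" referred to above is the tendency of the KL term to collapse to zero; the paper's remedies include annealing the weight on the KL term and weakening the decoder (e.g., word dropout).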
On the ratio of optimal integral and fractional covers It is shown that the ratio of optimal integral and fractional covers of a hypergraph does not exceed 1 + log d, where d is the maximum degree. This theorem may replace probabilistic methods in certain circumstances. Several applications are shown.
Task Offloading in Vehicular Edge Computing Networks: A Load-Balancing Solution Recently, the rapid advance of vehicular networks has led to the emergence of diverse delay-sensitive vehicular applications such as automatic driving, auto navigation. Note that existing resource-constrained vehicles cannot adequately meet these demands on low / ultra-low latency. By offloading parts of the vehicles’ compute-intensive tasks to the edge servers in proximity, mobile edge computing is envisioned as a promising paradigm, giving rise to the vehicular edge computing networks (VECNs). However, most existing works on task offloading in VECNs did not take the load balancing of the computation resources at the edge servers into account. To address these issues and given the high dynamics of vehicular networks, we introduce fiber-wireless (FiWi) technology to enhance VECNs, due to its advantages on centralized network management and supporting multiple communication techniques. Aiming to minimize the processing delay of the vehicles’ computation tasks, we propose a software-defined networking (SDN) based load-balancing task offloading scheme in FiWi enhanced VECNs, where SDN is introduced to provide supports for the centralized network and vehicle information management. Extensive analysis and numerical results corroborate that our proposed load-balancing scheme can achieve superior performance on processing delay reduction by utilizing the edge servers’ computation resources more efficiently.
Trust in Automation: Designing for Appropriate Reliance. Automation is often problematic because people fail to rely upon it appropriately. Because people respond to technology socially, trust influences reliance on automation. In particular, trust guides reliance when complexity and unanticipated situations make a complete understanding of the automation impractical. This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives. It considers how the context, automation characteristics, and cognitive processes affect the appropriateness of trust. The context in which the automation is used influences automation performance and provides a goal-oriented perspective to assess automation characteristics along a dimension of attributional abstraction. These characteristics can influence trust through analytic, analogical, and affective processes. The challenges of extrapolating the concept of trust in people to trust in automation are discussed. A conceptual model integrates research regarding trust in automation and describes the dynamics of trust, the role of context, and the influence of display characteristics. Actual or potential applications of this research include improved designs of systems that require people to manage imperfect automation.
An evaluation of direct attacks using fake fingers generated from ISO templates This work reports a vulnerability evaluation of a highly competitive ISO matcher to direct attacks carried out with fake fingers generated from ISO templates. Experiments are carried out on a fingerprint database acquired in a real-life scenario and show that the evaluated system is highly vulnerable to the proposed attack scheme, granting access in over 75% of the attempts (for a high-security operating point). Thus, the study disproves the popular belief of minutiae templates non-reversibility and raises a key vulnerability issue in the use of non-encrypted standard templates. (This article is an extended version of Galbally et al., 2008, which was awarded with the IBM Best Student Paper Award in the track of Biometrics at ICPR 2008).
A Framework of Joint Mobile Energy Replenishment and Data Gathering in Wireless Rechargeable Sensor Networks Recent years have witnessed the rapid development and proliferation of techniques on improving energy efficiency for wireless sensor networks. Although these techniques can relieve the energy constraint on wireless sensors to some extent, the lifetime of wireless sensor networks is still limited by sensor batteries. Recent studies have shown that energy rechargeable sensors have the potential to provide perpetual network operations by capturing renewable energy from external environments. However, the low output of energy capturing devices can only provide intermittent recharging opportunities to support low-rate data services due to spatial-temporal, geographical or environmental factors. To provide steady and high recharging rates and achieve energy efficient data gathering from sensors, in this paper, we propose to utilize mobility for joint energy replenishment and data gathering. In particular, a multi-functional mobile entity, called SenCar in this paper, is employed, which serves not only as a mobile data collector that roams over the field to gather data via short-range communication but also as an energy transporter that charges static sensors on its migration tour via wireless energy transmissions. Taking advantages of SenCar's controlled mobility, we focus on the joint optimization of effective energy charging and high-performance data collections. We first study this problem in general networks with random topologies. We give a two-step approach for the joint design. In the first step, the locations of a subset of sensors are periodically selected as anchor points, where the SenCar will sequentially visit to charge the sensors at these locations and gather data from nearby sensors in a multi-hop fashion. To achieve a desirable balance between energy replenishment amount and data gathering latency, we provide a selection algorithm to search for a maximum number of anchor points where sensors hold the least battery energy, and meanwhile by visiting them, the tour length of the SenCar is no more than a threshold. In the second step, we consider data gathering performance when the SenCar migrates among these anchor points. We formulate the problem into a network utility maximization problem and propose a distributed algorithm to adjust data rates at which sensors send buffered data to the SenCar, link scheduling and flow routing so as to adapt to the up-to-date energy replenishing status of sensors. Besides general networks, we also study a special scenario where sensors are regularly deployed. For this case we can provide a simplified solution of lower complexity by exploiting the symmetry of the topology. Finally, we validate the effectiveness of our approaches by extensive numerical results, which show that our solutions can achieve perpetual network operations and provide high network utility.
Adaptive Fuzzy Control With Prescribed Performance for Block-Triangular-Structured Nonlinear Systems. In this paper, an adaptive fuzzy control method with prescribed performance is proposed for multi-input and multioutput block-triangular-structured nonlinear systems with immeasurable states. Fuzzy logic systems are adopted to identify the unknown nonlinear system functions. Adaptive fuzzy state observers are designed to solve the problem of unmeasured states, and a new observer-based output-feedb...
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.2
0.2
0.2
0.2
0.2
0.2
0.05
0
0
0
0
0
0
0
A multi-step outlier-based anomaly detection approach to network-wide traffic. We propose a multi-step outlier-based anomaly detection approach to network-wide traffic. We propose a feature selection algorithm to select a relevant non-redundant subset of features. We propose a tree-based clustering algorithm to generate non-redundant overlapped clusters. We introduce an efficient score-based outlier estimation technique to detect anomalies in network-wide traffic. We establish a fast distributed feature extraction framework to extract significant features from raw network-wide traffic. We conduct extensive experiments using the proposed algorithms with synthetic and real-life network-wide traffic datasets. Outlier detection is of considerable interest in fields such as physical sciences, medical diagnosis, surveillance detection, fraud detection and network anomaly detection. The data mining and network management research communities are interested in improving existing score-based network traffic anomaly detection techniques because of ample scopes to increase performance. In this paper, we present a multi-step outlier-based approach for detection of anomalies in network-wide traffic. We identify a subset of relevant traffic features and use it during clustering and anomaly detection. To support outlier-based network anomaly identification, we use the following modules: a mutual information and generalized entropy based feature selection technique to select a relevant non-redundant subset of features, a tree-based clustering technique to generate a set of reference points and an outlier score function to rank incoming network traffic to identify anomalies. We also design a fast distributed feature extraction and data preparation framework to extract features from raw network-wide traffic. We evaluate our approach in terms of detection rate, false positive rate, precision, recall and F-measure using several high dimensional synthetic and real-world datasets and find the performance superior in comparison to competing algorithms.
Network anomaly detection using IP flows with Principal Component Analysis and Ant Colony Optimization. It is remarkable how proactive network management is in such demand nowadays, since networks are growing in size and complexity and Information Technology services cannot be stopped. In this manner, it is necessary to use an approach which proactively identifies traffic behavior patterns which may harm the network's normal operations. Aiming at an automated management to detect and prevent potential problems, we present and compare two novel anomaly detection mechanisms based on the statistical procedure Principal Component Analysis and the Ant Colony Optimization metaheuristic. These methods generate a traffic profile, called Digital Signature of Network Segment using Flow analysis (DSNSF), which is adopted as normal network behavior. Then, this signature is compared with the real network traffic by using a modification of the Dynamic Time Warping metric in order to recognize anomalous events. Thus, a seven-dimensional analysis of IP flows is performed, allowing the characterization of bits, packets and flows traffic transmitted per second, and the extraction of descriptive flow attributes, like source IP address, destination IP address, source TCP/UDP port and destination TCP/UDP port. The systems were evaluated using a real network environment and showed promising results. Moreover, the correspondence between true-positive and false-positive rates demonstrates that the systems are able to enhance the detection of anomalous behavior by maintaining a satisfactory false-alarm rate. Anomaly detection issue is addressed based on network traffic profiling. Proposal and comparison of detection methods belonging to distinct algorithm classes. Detection mechanism constructed over an adaptation of a pattern matching technique. Use of real and simulated traffic to evaluate the proposed methods. Traffic patterns that may harm the network operations are proactively identified.
Deep Anomaly Detection with Deviation Networks Although deep learning has been applied to successfully address many data mining problems, relatively limited work has been done on deep learning for anomaly detection. Existing deep anomaly detection methods, which focus on learning new feature representations to enable downstream anomaly detection methods, perform indirect optimization of anomaly scores, leading to data-inefficient learning and suboptimal anomaly scoring. Also, they are typically designed as unsupervised learning due to the lack of large-scale labeled anomaly data. As a result, it is difficult for them to leverage prior knowledge (e.g., a few labeled anomalies) when such information is available as in many real-world anomaly detection applications. This paper introduces a novel anomaly detection framework and its instantiation to address these problems. Instead of representation learning, our method fulfills an end-to-end learning of anomaly scores by a neural deviation learning, in which we leverage a few (e.g., multiple to dozens) labeled anomalies and a prior probability to enforce statistically significant deviations of the anomaly scores of anomalies from that of normal data objects in the upper tail. Extensive results show that our method can be trained substantially more data-efficiently and achieves significantly better anomaly scoring than state-of-the-art competing methods.
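A minimal PyTorch-style sketch of such a deviation loss is given below. The standard-normal reference prior, the z-score form of the deviation, and the margin value are assumptions based on the abstract's description (a prior probability plus a requirement that labeled anomalies deviate significantly in the upper tail); they are not a verbatim reproduction of the paper's loss.

```python
import torch

def deviation_loss(scores, labels, margin=5.0, n_ref=5000):
    """Deviation-style loss sketch: anomaly scores of normal data are pulled
    toward the mean of reference scores drawn from a prior, while scores of
    labeled anomalies must deviate by at least `margin` standard deviations.

    scores: (batch,) raw anomaly scores from the network.
    labels: (batch,) 1 for labeled anomalies, 0 for (mostly normal) unlabeled data.
    The N(0, 1) prior and the margin value are illustrative assumptions.
    """
    ref = torch.randn(n_ref, device=scores.device)        # reference scores from the prior
    dev = (scores - ref.mean()) / ref.std()               # z-score deviation of each sample
    inlier_loss = torch.abs(dev)                          # normals: stay close to the reference
    outlier_loss = torch.clamp(margin - dev, min=0.0)     # anomalies: deviate in the upper tail
    return torch.mean((1 - labels) * inlier_loss + labels * outlier_loss)
```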
Distributed Learning in Wireless Networks: Recent Progress and Future Challenges The next-generation of wireless networks will enable many machine learning (ML) tools and applications to efficiently analyze various types of data collected by edge devices for inference, autonomy, and decision making purposes. However, due to resource constraints, delay limitations, and privacy challenges, edge devices cannot offload their entire collected datasets to a cloud server for centrall...
A Distributed NWDAF Architecture for Federated Learning in 5G For network automation and intelligence in 5G, the network data analytics function (NWDAF) has been introduced as a new network function. However, the existing centralized NWDAF structure can be overloaded if an amount of analytic data are concentrated. In this paper, we introduce a distributed NWDAF structure tailored for federated learning (FL) in 5G. Leaf NWDAFs create local models and root NWD...
Footprints: history-rich tools for information foraging Inspired by Hill and Hollan's original work [7], we have been developing a theory of interaction history and building tools to apply this theory to navigation in a complex information space. We have built a series of tools - map, paths, annotations and signposts - based on a physical-world navigation metaphor. These tools have been in use for over a year. Our user study involved a controlled browse task and showed that users were able to get the same amount of work done with significantly less effort.
A Low-Complexity Analytical Modeling for Cross-Layer Adaptive Error Protection in Video Over WLAN We find a low-complexity and accurate model to solve the problem of optimizing MAC-layer transmission of real-time video over wireless local area networks (WLANs) using cross-layer techniques. The objective in this problem is to obtain the optimal MAC retry limit in order to minimize the total packet loss rate. First, the accuracy of Fluid and M/M/1/K analytical models is examined. Then we derive a closed-form expression for service time in WLAN MAC transmission, and will use this in mathematical formulation of our optimization problem based on M/G/1 model. Subsequently we introduce an approximate and simple formula for MAC-layer service time, which leads to the M/M/1 model. Compared with M/G/1, we particularly show that our M/M/1-based model provides a low-complexity and yet quite accurate means for analyzing MAC transmission process in WLAN. Using our M/M/1 model-based analysis, we derive closed-form formulas for the packet overflow drop rate and optimum retry-limit. These closed-form expressions can be effectively invoked for analyzing adaptive retry-limit algorithms. Simulation results (network simulator-2) will verify the accuracy of our analytical models.
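As background for the M/M/1/K comparison mentioned above, the textbook finite-buffer overflow (blocking) probability for offered load $\rho = \lambda/\mu \neq 1$ and buffer size $K$ is

$$P_K = \frac{(1-\rho)\,\rho^{K}}{1-\rho^{K+1}};$$

the closed forms derived in the paper build on its WLAN-specific MAC service-time approximation and retry-limit model rather than on this generic expression.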
Semantic Image Synthesis With Spatially-Adaptive Normalization We propose spatially-adaptive normalization, a simple but effective layer for synthesizing photorealistic images given an input semantic layout. Previous methods directly feed the semantic layout as input to the deep network, which is then processed through stacks of convolution, normalization, and nonlinearity layers. We show that this is suboptimal as the normalization layers tend to "wash away" semantic information. To address the issue, we propose using the input layout for modulating the activations in normalization layers through a spatially-adaptive, learned transformation. Experiments on several challenging datasets demonstrate the advantage of the proposed method over existing approaches, regarding both visual fidelity and alignment with input layouts. Finally, our model allows user control over both semantics and style when synthesizing images.
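A compact PyTorch sketch of the spatially-adaptive modulation described above follows; the hidden width, kernel sizes, and the exact scale/shift parameterization are illustrative assumptions, with the layout resized to the activation resolution before predicting the per-pixel modulation maps.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatiallyAdaptiveNorm(nn.Module):
    """Sketch: normalize the activation with parameter-free batch-norm statistics,
    then modulate it with per-pixel scale and shift maps predicted from the
    semantic layout. Hidden width and kernel sizes are assumptions."""
    def __init__(self, channels, n_labels, hidden=128):
        super().__init__()
        self.norm = nn.BatchNorm2d(channels, affine=False)  # normalization without a learned affine
        self.shared = nn.Sequential(nn.Conv2d(n_labels, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, channels, 3, padding=1)
        self.beta = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, x, segmap):  # segmap: one-hot layout (B, n_labels, H, W)
        seg = F.interpolate(segmap, size=x.shape[2:], mode="nearest")
        h = self.shared(seg)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)
```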
Reaching Agreement in the Presence of Faults The problem addressed here concerns a set of isolated processors, some unknown subset of which may be faulty, that communicate only by means of two-party messages. Each nonfaulty processor has a private value of information that must be communicated to each other nonfaulty processor. Nonfaulty processors always communicate honestly, whereas faulty processors may lie. The problem is to devise an algorithm in which processors communicate their own values and relay values received from others that allows each nonfaulty processor to infer a value for each other processor. The value inferred for a nonfaulty processor must be that processor's private value, and the value inferred for a faulty one must be consistent with the corresponding value inferred by each other nonfaulty processor.It is shown that the problem is solvable for, and only for, n ≥ 3m + 1, where m is the number of faulty processors and n is the total number. It is also shown that if faulty processors can refuse to pass on information but cannot falsely relay information, the problem is solvable for arbitrary n ≥ m ≥ 0. This weaker assumption can be approximated in practice using cryptographic methods.
Reservoir computing approaches to recurrent neural network training Echo State Networks and Liquid State Machines introduced a new paradigm in artificial recurrent neural network (RNN) training, where an RNN (the reservoir) is generated randomly and only a readout is trained. The paradigm, becoming known as reservoir computing, greatly facilitated the practical application of RNNs and outperformed classical fully trained RNNs in many tasks. It has lately become a vivid research field with numerous extensions of the basic idea, including reservoir adaptation, thus broadening the initial paradigm to using different methods for training the reservoir and the readout. This review systematically surveys both current ways of generating/adapting the reservoirs and training different types of readouts. It offers a natural conceptual classification of the techniques, which transcends boundaries of the current “brand-names” of reservoir methods, and thus aims to help in unifying the field and providing the reader with a detailed “map” of it.
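A minimal NumPy sketch of the basic reservoir computing recipe (a fixed random reservoir driven by the input, with only a linear readout trained, here by ridge regression) is shown below; the reservoir size, spectral radius, and ridge penalty are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))          # fixed random input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))            # fixed random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # spectral radius < 1 (echo state property)

def run_reservoir(u):                                 # u: (T, n_in) input sequence
    states, x = np.zeros((len(u), n_res)), np.zeros(n_res)
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + W_in @ u_t)               # reservoir update; readout not involved
        states[t] = x
    return states

# Toy task: predict the next sample of a sine wave from reservoir states
u = np.sin(np.linspace(0, 20 * np.pi, 2000))[:, None]
X, y = run_reservoir(u[:-1]), u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)   # ridge-regression readout
prediction = X @ W_out
```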
Implementing Vehicle Routing Algorithms
Finite-approximation-error-based discrete-time iterative adaptive dynamic programming. In this paper, a new iterative adaptive dynamic programming (ADP) algorithm is developed to solve optimal control problems for infinite horizon discrete-time nonlinear systems with finite approximation errors. First, a new generalized value iteration algorithm of ADP is developed to make the iterative performance index function converge to the solution of the Hamilton-Jacobi-Bellman equation. The ...
An evolutionary programming approach for securing medical images using watermarking scheme in invariant discrete wavelet transformation. •The proposed watermarking scheme utilized improved discrete wavelet transformation (IDWT) to retrieve the invariant wavelet domain. •The entropy mechanism is used to identify the suitable region for insertion of watermark. This will improve the imperceptibility and robustness of the watermarking procedure. •The scaling factors such as PSNR and NC are considered for evaluation of the proposed method and the Particle Swarm Optimization is employed to optimize the scaling factors.
A Hierarchical Architecture Using Biased Min-Consensus for USV Path Planning This paper proposes a hierarchical architecture using the biased min-consensus (BMC) method, to solve the path planning problem of unmanned surface vessel (USV). We take the fixed-point monitoring mission as an example, where a series of intermediate monitoring points should be visited once by USV. The whole framework incorporates the low-level layer planning the standard path between any two intermediate points, and the high-level fashion determining their visiting sequence. First, the optimal standard path in terms of voyage time and risk measure is planned by the BMC protocol, given that the corresponding graph is constructed with node state and edge weight. The USV will avoid obstacles or keep a certain distance safely, and arrive at the target point quickly. It is proven theoretically that the state of the graph will converge to be stable after finite iterations, i.e., the optimal solution can be found by BMC with low calculation complexity. Second, by incorporating the constraint of intermediate points, their visiting sequence is optimized by BMC again with the reconstruction of a new virtual graph based on the former planned results. The extensive simulation results in various scenarios also validate the feasibility and effectiveness of our method for autonomous navigation.
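A toy Python sketch of the shortest-path behavior of biased min-consensus is given below; the synchronous update form and the example graph are illustrative assumptions, since the paper's contribution lies in the protocol's convergence analysis and in layering the visiting-sequence optimization on top of it.

```python
import math

def biased_min_consensus(adj, sources, n_iter=100):
    """Each non-source node repeatedly takes the minimum of its neighbors'
    values biased by the edge weight, so values converge to the cost of the
    shortest path to the nearest source (target) node.

    adj[i] maps neighbor j -> edge weight w_ij; `sources` hold value 0."""
    x = [0.0 if i in sources else math.inf for i in range(len(adj))]
    for _ in range(n_iter):
        x = [0.0 if i in sources
             else min((x[j] + w for j, w in adj[i].items()), default=math.inf)
             for i in range(len(adj))]
    return x

# Tiny example: node 0 is the target; values are path costs toward it
adj = {0: {1: 1.0}, 1: {0: 1.0, 2: 2.0}, 2: {1: 2.0}}
print(biased_min_consensus(adj, sources={0}))  # [0.0, 1.0, 3.0]
```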
1.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
0
Near optimal bounded route association for drone-enabled rechargeable WSNs. This paper considers the multi-drone wireless charging scheme in large scale wireless sensor networks, where sensors can be charged by the charging drone with wireless energy transfer. As existing studies rarely focus on the route association issue with limited energy capacity, we consider this fundamental issue and study how to optimize the route association to maximize the overall charging coverage utility, when charging routes and associated nodes should be jointly selected. We first formulate a bounded route association problem which is proven to be NP-hard. Then we cast it as maximizing a monotone submodular function subject to matroid constraints and devise an efficient and accessible algorithm with a 13α approximation ratio, where α is the bound of Fully Polynomial-Time Approximation Scheme (FPTAS) solving a knapsack problem. Extensive numerical evaluations and trace-driven evaluations have been carried out to validate our theoretical effect, and the results show that our algorithm has near-optimal performance covering at most 85.3% and 70.5% of the surrogate-optimal solution achieved by CPLEX toolbox, respectively.
Mobility in wireless sensor networks - Survey and proposal. Targeting an increasing number of potential application domains, wireless sensor networks (WSN) have been the subject of intense research, in an attempt to optimize their performance while guaranteeing reliability in highly demanding scenarios. However, hardware constraints have limited their application, and real deployments have demonstrated that WSNs have difficulties in coping with complex communication tasks – such as mobility – in addition to application-related tasks. Mobility support in WSNs is crucial for a very high percentage of application scenarios and, most notably, for the Internet of Things. It is, thus, important to know the existing solutions for mobility in WSNs, identifying their main characteristics and limitations. With this in mind, we firstly present a survey of models for mobility support in WSNs. We then present the Network of Proxies (NoP) assisted mobility proposal, which relieves resource-constrained WSN nodes from the heavy procedures inherent to mobility management. The presented proposal was implemented and evaluated in a real platform, demonstrating not only its advantages over conventional solutions, but also its very good performance in the simultaneous handling of several mobile nodes, leading to high handoff success rate and low handoff time.
Tag-based cooperative data gathering and energy recharging in wide area RFID sensor networks The Wireless Identification and Sensing Platform (WISP) conjugates the identification potential of the RFID technology and the sensing and computing capability of the wireless sensors. Practical issues, such as the need of periodically recharging WISPs, challenge the effective deployment of large-scale RFID sensor networks (RSNs) consisting of RFID readers and WISP nodes. In this view, the paper proposes cooperative solutions to energize the WISP devices in a wide-area sensing network while reducing the data collection delay. The main novelty is the fact that both data transmissions and energy transfer are based on the RFID technology only: RFID mobile readers gather data from the WISP devices, wirelessly recharge them, and mutually cooperate to reduce the data delivery delay to the sink. Communication between mobile readers relies on two proposed solutions: a tag-based relay scheme, where RFID tags are exploited to temporarily store sensed data at pre-determined contact points between the readers; and a tag-based data channel scheme, where the WISPs are used as a virtual communication channel for real time data transfer between the readers. Both solutions require: (i) clustering the WISP nodes; (ii) dimensioning the number of required RFID mobile readers; (iii) planning the tour of the readers under the energy and time constraints of the nodes. A simulative analysis demonstrates the effectiveness of the proposed solutions when compared to non-cooperative approaches. Differently from classic schemes in the literature, the solutions proposed in this paper better cope with scalability issues, which is of utmost importance for wide area networks.
Improving charging capacity for wireless sensor networks by deploying one mobile vehicle with multiple removable chargers. Wireless energy transfer is a promising technology to prolong the lifetime of wireless sensor networks (WSNs), by employing charging vehicles to replenish energy to lifetime-critical sensors. Existing studies on sensor charging assumed that one or multiple charging vehicles being deployed. Such an assumption may have its limitation for a real sensor network. On one hand, it usually is insufficient to employ just one vehicle to charge many sensors in a large-scale sensor network due to the limited charging capacity of the vehicle or energy expirations of some sensors prior to the arrival of the charging vehicle. On the other hand, although the employment of multiple vehicles can significantly improve the charging capability, it is too costly in terms of the initial investment and maintenance costs on these vehicles. In this paper, we propose a novel charging model that a charging vehicle can carry multiple low-cost removable chargers and each charger is powered by a portable high-volume battery. When there are energy-critical sensors to be charged, the vehicle can carry the chargers to charge multiple sensors simultaneously, by placing one portable charger in the vicinity of one sensor. Under this novel charging model, we study the scheduling problem of the charging vehicle so that both the dead duration of sensors and the total travel distance of the mobile vehicle per tour are minimized. Since this problem is NP-hard, we instead propose a (3+ϵ)-approximation algorithm if the residual lifetime of each sensor can be ignored; otherwise, we devise a novel heuristic algorithm, where ϵ is a given constant with 0 < ϵ ≤ 1. Finally, we evaluate the performance of the proposed algorithms through experimental simulations. Experimental results show that the performance of the proposed algorithms are very promising.
Speed control of mobile chargers serving wireless rechargeable networks. Wireless rechargeable networks have attracted increasing research attention in recent years. For charging service, a mobile charger is often employed to move across the network and charge all network nodes. To reduce the charging completion time, most existing works have used the “move-then-charge” model where the charger first moves to specific spots and then starts charging nodes nearby. As a result, these works often aim to reduce the moving delay or charging delay at the spots. However, the charging opportunity on the move is largely overlooked because the charger can charge network nodes while moving, which as we analyze in this paper, has the potential to greatly reduce the charging completion time. The major challenge to exploit the charging opportunity is the setting of the moving speed of the charger. When the charger moves slow, the charging delay will be reduced (more energy will be charged during the movement) but the moving delay will increase. To deal with this challenge, we formulate the problem of delay minimization as a Traveling Salesman Problem with Speed Variations (TSP-SV) which jointly considers both charging and moving delay. We further solve the problem using linear programming to generate (1) the moving path of the charger, (2) the moving speed variations on the path and (3) the stay time at each charging spot. We also discuss possible ways to reduce the calculation complexity. Extensive simulation experiments are conducted to study the delay performance under various scenarios. The results demonstrate that our proposed method achieves much less completion time compared to the state-of-the-art work.
Beamforming in Wireless Energy Harvesting Communications Systems: A Survey. Wireless energy harvesting (EH) is a promising solution to prolong lifetime of power-constrained networks such as military and sensor networks. The high sensitivity of energy transfer to signal decay due to path loss and fading, promotes multi-antenna techniques like beamforming as the candidate transmission scheme for EH networks. Exploiting beamforming in EH networks has gained overwhelming inte...
Coverage and Connectivity Aware Energy Charging Mechanism Using Mobile Charger for WRSNs Wireless recharging using a mobile charger has been widely discussed in recent years. Most of them considered that all sensors were equally important and aimed to maximize the number of recharged sensors. The purpose of energy recharging is to extend the lifetime of sensors whose major work is to maximize the surveillance quality. In a randomly deployed wireless rechargeable sensor network, the surveillance quality highly depends on the contributions of coverage and network connectivity of each sensor. Instead of considering maximizing the number of recharged sensors, this article further takes into consideration the contributions of coverage and network connectivity of each sensor when making the decision of recharging schedule, aiming to maximize the surveillance quality and improve the number of data collected from sensors to the sink node. This article proposes an energy recharging mechanism, called an energy recharging mechanism for maximizing the surveillance quality of a given WRSNs (ERSQ), which partitions the monitoring region into several equal-sized grids and considers the important factors, including coverage contribution, network connectivity contribution, the remaining energy as well as the path length cost of each grid, aiming to maximize surveillance quality for a given wireless sensor network. Performance studies reveal that the proposed ERSQ outperforms existing recharging mechanisms in terms of the coverage, the number of working sensors as well as the effectiveness index of working sensors.
Minimizing the Maximum Charging Delay of Multiple Mobile Chargers Under the Multi-Node Energy Charging Scheme Wireless energy charging has emerged as a very promising technology for prolonging sensor lifetime in wireless rechargeable sensor networks (WRSNs). Existing studies focused mainly on the one-to-one charging scheme that a single sensor can be charged by a mobile charger at each time, this charging scheme however suffers from poor charging scalability and inefficiency. Recently, another charging scheme, the multi-node charging scheme that allows multiple sensors to be charged simultaneously by a mobile charger, becomes dominant, which can mitigate charging scalability and improve charging efficiency. However, most previous studies on this multi-node energy charging scheme focused on the use of a single mobile charger to charge multiple sensors simultaneously. For large scale WRSNs, it is insufficient to deploy only a single mobile charger to charge many lifetime-critical sensors, and consequently sensor expiration durations will increase dramatically. To charge many lifetime-critical sensors in large scale WRSNs as early as possible, it is inevitable to adopt multiple mobile chargers for sensor charging that can not only speed up sensor charging but also reduce expiration times of sensors. This however poses great challenges to fairly schedule the multiple mobile chargers such that the longest charging delay among sensors is minimized. One important constraint is that no sensor can be charged by more than one mobile charger at any time due to the fact that the sensor cannot receive any energy from either of the chargers or the overcharging will damage the recharging battery of the sensor. Thus, finding a closed charge tour for each of the multiple chargers such that the longest charging delay is minimized is crucial. In this paper we address the challenge by formulating a novel longest charging delay minimization problem. We first show that the problem is NP-hard. We then devise the very first approximation algorithm with a provable approximation ratio for the problem. We finally evaluate the performance of the proposed algorithms through experimental simulations. Experimental results demonstrate that the proposed algorithm is promising, and outperforms existing algorithms in various settings.
NETWRAP: An NDN Based Real-Time Wireless Recharging Framework for Wireless Sensor Networks Using vehicles equipped with wireless energy transmission technology to recharge sensor nodes over the air is a game-changer for traditional wireless sensor networks. The recharging policy regarding when to recharge which sensor nodes critically impacts the network performance. So far only a few works have studied such recharging policy for the case of using a single vehicle. In this paper, we propose NETWRAP, an NDN based Real Time Wireless Recharging Protocol for dynamic wireless recharging in sensor networks. The real-time recharging framework supports single or multiple mobile vehicles. Employing multiple mobile vehicles provides more scalability and robustness. To efficiently deliver sensor energy status information to vehicles in real-time, we leverage concepts and mechanisms from named data networking (NDN) and design energy monitoring and reporting protocols. We derive theoretical results on the energy neutral condition and the minimum number of mobile vehicles required for perpetual network operations. Then we study how to minimize the total traveling cost of vehicles while guaranteeing all the sensor nodes can be recharged before their batteries deplete. We formulate the recharge optimization problem into a Multiple Traveling Salesman Problem with Deadlines (m-TSP with Deadlines), which is NP-hard. To accommodate the dynamic nature of node energy conditions with low overhead, we present an algorithm that selects the node with the minimum weighted sum of traveling time and residual lifetime. Our scheme not only improves network scalability but also ensures the perpetual operation of networks. Extensive simulation results demonstrate the effectiveness and efficiency of the proposed design. The results also validate the correctness of the theoretical analysis and show significant improvements that cut the number of nonfunctional nodes by half compared to the static scheme while maintaining the network overhead at the same level.
Hierarchical mesh segmentation based on fitting primitives In this paper, we describe a hierarchical face clustering algorithm for triangle meshes based on fitting primitives belonging to an arbitrary set. The method proposed is completely automatic, and generates a binary tree of clusters, each of which is fitted by one of the primitives employed. Initially, each triangle represents a single cluster; at every iteration, all the pairs of adjacent clusters are considered, and the one that can be better approximated by one of the primitives forms a new single cluster. The approximation error is evaluated using the same metric for all the primitives, so that it makes sense to choose which is the most suitable primitive to approximate the set of triangles in a cluster. Based on this approach, we have implemented a prototype that uses planes, spheres and cylinders, and have experimented that for meshes made of 100 K faces, the whole binary tree of clusters can be built in about 8 s on a standard PC. The framework described here has natural application in reverse engineering processes, but it has also been tested for surface denoising, feature recovery and character skinning.
Movie2Comics: Towards a Lively Video Content Presentation As a type of artwork, comics is prevalent and popular around the world. However, despite the availability of assistive software and tools, the creation of comics is still a labor-intensive and time-consuming process. This paper proposes a scheme that is able to automatically turn a movie clip to comics. Two principles are followed in the scheme: 1) optimizing the information preservation of the movie; and 2) generating outputs following the rules and the styles of comics. The scheme mainly contains three components: script-face mapping, descriptive picture extraction, and cartoonization. The script-face mapping utilizes face tracking and recognition techniques to accomplish the mapping between characters' faces and their scripts. The descriptive picture extraction then generates a sequence of frames for presentation. Finally, the cartoonization is accomplished via three steps: panel scaling, stylization, and comics layout design. Experiments are conducted on a set of movie clips and the results have demonstrated the usefulness and the effectiveness of the scheme.
Parallel Multi-Block ADMM with o(1/k) Convergence This paper introduces a parallel and distributed algorithm for solving the following minimization problem with linear constraints:

$$\begin{aligned} \text{minimize}~~ & f_1(\mathbf{x}_1) + \cdots + f_N(\mathbf{x}_N) \\ \text{subject to}~~ & A_1\mathbf{x}_1 + \cdots + A_N\mathbf{x}_N = c, \\ & \mathbf{x}_1 \in \mathcal{X}_1, \ \ldots, \ \mathbf{x}_N \in \mathcal{X}_N, \end{aligned}$$

where $N \ge 2$, $f_i$ are convex functions, $A_i$ are matrices, and $\mathcal{X}_i$ are feasible sets for variable $\mathbf{x}_i$. Our algorithm extends the alternating direction method of multipliers (ADMM) and decomposes the original problem into N smaller subproblems and solves them in parallel at each iteration. This paper shows that the classic ADMM can be extended to the N-block Jacobi fashion and preserve convergence in the following two cases: (i) matrices $A_i$ are mutually near-orthogonal and have full column-rank, or (ii) proximal terms are added to the N subproblems (but without any assumption on matrices $A_i$). In the latter case, certain proximal terms can let the subproblem be solved in more flexible and efficient ways. We show that $\|\mathbf{x}^{k+1} - \mathbf{x}^{k}\|_M^2$ converges at a rate of o(1/k), where M is a symmetric positive semi-definite matrix. Since the parameters used in the convergence analysis are conservative, we introduce a strategy for automatically tuning the parameters to substantially accelerate our algorithm in practice. We implemented our algorithm (for the case ii above) on Amazon EC2 and tested it on basis pursuit problems with 300 GB of distributed data. This is the first time that successfully solving a compressive sensing problem of such a large scale is reported.
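For concreteness, the proximal Jacobi variant (case ii) updates all N blocks in parallel from the previous iterate and then takes a damped dual step; here ρ is the penalty parameter, λ the multiplier, γ a damping factor, and the P_i are proximal weighting matrices whose admissible choices are specified in the paper and are not reproduced here.

$$\begin{aligned} \mathbf{x}_i^{k+1} &= \arg\min_{\mathbf{x}_i \in \mathcal{X}_i} \; f_i(\mathbf{x}_i) + \frac{\rho}{2}\Big\| A_i\mathbf{x}_i + \sum_{j \ne i} A_j\mathbf{x}_j^{k} - c + \frac{\lambda^k}{\rho} \Big\|^2 + \frac{1}{2}\|\mathbf{x}_i - \mathbf{x}_i^{k}\|_{P_i}^2, \quad i = 1,\dots,N \ \text{(in parallel)}, \\ \lambda^{k+1} &= \lambda^k + \gamma\rho\Big(\sum_{i=1}^{N} A_i\mathbf{x}_i^{k+1} - c\Big). \end{aligned}$$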
Deep Continuous Fusion For Multi-Sensor 3d Object Detection In this paper, we propose a novel 3D object detector that can exploit both LIDAR as well as cameras to perform very accurate localization. Towards this goal, we design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution. Our proposed continuous fusion layer encodes both discrete-state image features as well as continuous geometric information. This enables us to design a novel, reliable and efficient end-to-end learnable 3D object detector based on multiple sensors. Our experimental evaluation on both KITTI as well as a large scale 3D object detection benchmark shows significant improvements over the state of the art.
Stochastic QoE-aware optimization of multisource multimedia content delivery for mobile cloud The increasing popularity of mobile video streaming in wireless networks has stimulated growing demands for efficient video streaming services. However, due to the time-varying throughput and user mobility, it is still difficult to provide high quality video services for mobile users. Our proposed optimization method considers key factors such as video quality, bitrate level, and quality variations to enhance quality of experience over wireless networks. The mobile network and device parameters are estimated in order to deliver the best quality video for the mobile user. We develop a rate adaptation algorithm using Lyapunov optimization for multi-source multimedia content delivery to minimize the video rate switches and provide higher video quality. The multi-source manager algorithm is developed to select the best stream based on the path quality for each path. The node joining and cluster head election mechanism update the node information. As the proposed approach selects the optimal path, it also achieves fairness and stability among clients. The quality of experience feature metrics like bitrate level, rebuffering events, and bitrate switch frequency are employed to assess video quality. We also employ objective video quality assessment methods like VQM, MS-SSIM, and SSIMplus for video quality measurement closer to human visual assessment. Numerical results show the effectiveness of the proposed method as compared to the existing state-of-the-art methods in providing quality of experience and bandwidth utilization.
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0.1
0.02
0
0
0
0
0
End-To-End Time-Lapse Video Synthesis From A Single Outdoor Image Time-lapse videos usually contain visually appealing content but are often difficult and costly to create. In this paper, we present an end-to-end solution to synthesize a time-lapse video from a single outdoor image using deep neural networks. Our key idea is to train a conditional generative adversarial network based on existing datasets of time-lapse videos and image sequences. We propose a multi-frame joint conditional generation framework to effectively learn the correlation between the illumination change of an outdoor scene and the time of the day. We further present a multi-domain training scheme for robust training of our generative models from two datasets with different distributions and missing timestamp labels. Compared to alternative time-lapse video synthesis algorithms, our method uses the timestamp as the control variable and does not require a reference video to guide the synthesis of the final output. We conduct ablation studies to validate our algorithm and compare with state-of-the-art techniques both qualitatively and quantitatively.
Space-time super-resolution. We propose a method for constructing a video sequence of high space-time resolution by combining information from multiple low-resolution video sequences of the same dynamic scene. Super-resolution is performed simultaneously in time and in space. By "temporal super-resolution," we mean recovering rapid dynamic events that occur faster than regular frame-rate. Such dynamic events are not visible (or else are observed incorrectly) in any of the input sequences, even if these are played in "slow-motion." The spatial and temporal dimensions are very different in nature, yet are interrelated. This leads to interesting visual trade-offs in time and space and to new video applications. These include: 1) treatment of spatial artifacts (e.g., motion-blur) by increasing the temporal resolution and 2) combination of input sequences of different space-time resolutions (e.g., NTSC, PAL, and even high quality still images) to generate a high quality video sequence. We further analyze and compare characteristics of temporal super-resolution to those of spatial super-resolution. These include: How many video cameras are needed to obtain increased resolution? What is the upper bound on resolution improvement via super-resolution? What is the temporal analogue to the spatial "ringing" effect?
Transient attributes for high-level understanding and editing of outdoor scenes We live in a dynamic visual world where the appearance of scenes changes dramatically from hour to hour or season to season. In this work we study "transient scene attributes" -- high level properties which affect scene appearance, such as "snow", "autumn", "dusk", "fog". We define 40 transient attributes and use crowdsourcing to annotate thousands of images from 101 webcams. We use this "transient attribute database" to train regressors that can predict the presence of attributes in novel images. We demonstrate a photo organization method based on predicted attributes. Finally we propose a high-level image editing method which allows a user to adjust the attributes of a scene, e.g. change a scene to be "snowy" or "sunset". To support attribute manipulation we introduce a novel appearance transfer technique which is simple and fast yet competitive with the state-of-the-art. We show that we can convincingly modify many transient attributes in outdoor scenes.
Semantic Understanding of Scenes through the ADE20K Dataset. Semantic understanding of visual scenes is one of the holy grails of computer vision. Despite efforts of the community in data collection, there are still few image datasets covering a wide range of scenes and object categories with pixel-wise annotations for scene understanding. In this work, we present a densely annotated dataset ADE20K, which spans diverse annotations of scenes, objects, parts of objects, and in some cases even parts of parts. Totally there are 25k images of the complex everyday scenes containing a variety of objects in their natural spatial context. On average there are 19.5 instances and 10.5 object classes per image. Based on ADE20K, we construct benchmarks for scene parsing and instance segmentation. We provide baseline performances on both of the benchmarks and re-implement state-of-the-art models for open source. We further evaluate the effect of synchronized batch normalization and find that a reasonably large batch size is crucial for the semantic segmentation performance. We show that the networks trained on ADE20K are able to segment a wide variety of scenes and objects.
Sync-DRAW: Automatic Video Generation using Deep Recurrent Attentive Architectures. This paper introduces a novel approach for generating videos called Synchronized Deep Recurrent Attentive Writer (Sync-DRAW). Sync-DRAW can also perform text-to-video generation which, to the best of our knowledge, makes it the first approach of its kind. It combines a Variational Autoencoder (VAE) with a Recurrent Attention Mechanism in a novel manner to create a temporally dependent sequence of frames that are gradually formed over time. The recurrent attention mechanism in Sync-DRAW attends to each individual frame of the video in synchronization, while the VAE learns a latent distribution for the entire video at the global level. Our experiments with Bouncing MNIST, KTH and UCF-101 suggest that Sync-DRAW is efficient in learning the spatial and temporal information of the videos and generates frames with high structural integrity, and can generate videos from simple captions on these datasets.
Dynamic Facial Expression Generation on Hilbert Hypersphere With Conditional Wasserstein Generative Adversarial Nets In this work, we propose a novel approach for generating videos of the six basic facial expressions given a neutral face image. We propose to exploit the face geometry by modeling the facial landmarks motion as curves encoded as points on a hypersphere. By proposing a conditional version of manifold-valued Wasserstein generative adversarial network (GAN) for motion generation on the hypersphere, w...
Cross-MPI: Cross-scale Stereo for Image Super-Resolution using Multiplane Images Various combinations of cameras enrich computational photography, among which reference-based superresolution (RefSR) plays a critical role in multiscale imaging systems. However, existing RefSR approaches fail to accomplish high-fidelity super-resolution under a large resolution gap, e.g., 8x upscaling, due to the lower consideration of the underlying scene structure. In this paper, we aim to solve the RefSR problem in actual multiscale camera systems inspired by multiplane image (MPI) representation. Specifically, we propose Cross-MPI, an end-to-end RefSR network composed of a novel plane-aware attention-based MPI mechanism, a multiscale guided upsampling module as well as a super-resolution (SR) synthesis and fusion module. Instead of using a direct and exhaustive matching between the cross-scale stereo, the proposed plane-aware attention mechanism fully utilizes the concealed scene structure for efficient attention-based correspondence searching. Further combined with a gentle coarse-to-fine guided upsampling strategy, the proposed Cross-MPI can achieve a robust and accurate detail transmission. Experimental results on both digitally synthesized and optical zoom cross-scale data show that the Cross-MPI framework can achieve superior performance against the existing RefSR methods and is a real fit for actual multiscale camera systems even with large-scale differences.
Enhanced Pix2pix Dehazing Network In this paper we reduce the image dehazing problem to an image-to-image translation problem, and propose Enhanced Pix2pix Dehazing Network (EPDN), which generates a haze-free image without relying on the physical scattering model. EPDN is embedded by a generative adversarial network, which is followed by a well-designed enhancer. Inspired by the visual perception global-first theory [5], the discriminator guides the generator to create a pseudo realistic image on a coarse scale, while the enhancer following the generator is required to produce a realistic dehazing image on the fine scale. The enhancer contains two enhancing blocks based on the receptive field model, which reinforces the dehazing effect in both color and details. The embedded GAN is jointly trained with the enhancer. Extensive experiment results on synthetic datasets and real-world datasets show that the proposed EPDN is superior to the state-of-the-art methods in terms of PSNR, SSIM, PI, and subjective visual effect.
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
Witness indistinguishable and witness hiding protocols
Multimodal graph-based reranking for web image search. This paper introduces a web image search reranking approach that explores multiple modalities in a graph-based learning scheme. Different from the conventional methods that usually adopt a single modality or integrate multiple modalities into a long feature vector, our approach can effectively integrate the learning of relevance scores, weights of modalities, and the distance metric and its scaling for each modality into a unified scheme. In this way, the effects of different modalities can be adaptively modulated and better reranking performance can be achieved. We conduct experiments on a large dataset that contains more than 1000 queries and 1 million images to evaluate our approach. Experimental results demonstrate that the proposed reranking approach is more robust than using each individual modality, and it also performs better than many existing methods.
A distributed event-triggered transmission strategy for sampled-data consensus of multi-agent systems. This paper is concerned with event-triggered sampled-data consensus for distributed multi-agent systems with directed graph. A novel distributed event-triggered sampled-data transmission strategy is proposed, which allows the event-triggering condition to be intermittently examined at constant sampling instants. Based on this novel strategy, a sampled-data consensus control protocol is presented, with which the consensus of distributed multi-agent systems can be transformed into the stability of a system with a time-varying delay. Then, a sufficient condition on the consensus of the multi-agent system is derived. Correspondingly, a co-design algorithm for obtaining both the parameters of the distributed event-triggered transmission strategy and the consensus controller gain is proposed. Two numerical examples are given to show the effectiveness of the proposed method.
Driver Gaze Zone Estimation Using Convolutional Neural Networks: A General Framework and Ablative Analysis Driver gaze has been shown to be an excellent surrogate for driver attention in intelligent vehicles. With the recent surge of highly autonomous vehicles, driver gaze can be useful for determining the handoff time to a human driver. While there has been significant improvement in personalized driver gaze zone estimation systems, a generalized system which is invariant to different subjects, perspe...
Intention-detection strategies for upper limb exosuits: model-based myoelectric vs dynamic-based control The cognitive human-robot interaction between an exosuit and its wearer plays a key role in determining both the biomechanical effects of the device on movements and its perceived effectiveness. There is a lack of evidence, however, on the comparative performance of different control methods, implemented on the same device. Here, we compare two different control approaches on the same robotic suit: a model-based myoelectric control (myoprocessor), which estimates the joint torque from the activation of target muscles, and a dynamic-based control that provides support against gravity using an inverse dynamic model. Tested on a cohort of four healthy participants, assistance from the exosuit results in a marked reduction in the effort of muscles working against gravity with both control approaches (peak reduction of 68.6±18.8% for the dynamic arm model and 62.4±25.1% for the myoprocessor), when compared to an unpowered condition. Neither of the two controllers had an effect on the performance of their users in a joint-angle tracking task (peak errors of 15.4° and 16.4° for the dynamic arm model and myoprocessor, respectively, compared to 13.1° in the unpowered condition). However, our results highlight the remarkable adaptability of the myoprocessor to seamlessly adapt to changing external dynamics.
1.11
0.1
0.1
0.1
0.1
0.1
0.1
0.06
0.000312
0
0
0
0
0
A New Generation of Perspective API: Efficient Multilingual Character-level Transformers On the world wide web, toxic content detectors are a crucial line of defense against potentially hateful and offensive messages. As such, building highly effective classifiers that enable a safer internet is an important research area. Moreover, the web is a highly multilingual, cross-cultural community that develops its own lingo over time. As such, it is crucial to develop models that are effective across a diverse range of languages, usages, and styles. In this paper, we present the fundamentals behind the next version of the Perspective API from Google Jigsaw. At the heart of the approach is a single multilingual token-free Charformer model that is applicable across a range of languages, domains, and tasks. We demonstrate that by forgoing static vocabularies, we gain flexibility across a variety of settings. We additionally outline the techniques employed to make such a byte-level model efficient and feasible for productionization. Through extensive experiments on multilingual toxic comment classification benchmarks derived from real API traffic and evaluation on an array of code-switching, covert toxicity, emoji-based hate, human-readable obfuscation, distribution shift, and bias evaluation settings, we show that our proposed approach outperforms strong baselines. Finally, we present our findings from deploying this system in production.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended for Bob, and non-repudiation of receipt evidences destined for Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with a trusted third party (TTP). For the latter ones we discuss the evolution of the TTP's involvement and, among others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters; instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally rather than probabilistically. Because there are no crossover rate and mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than the conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional one and other methods.
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Verification and Validation Methods for Decision-Making and Planning of Automated Vehicles: A Review Verification and validation (V&V) hold a significant position in the research and development of automated vehicles (AVs). Current literature indicates that different V&V techniques have been implemented in the decision-making and planning (DMP) system to improve AVs' safety, comfort, and energy optimization. This paper aims to review a range of different V&V approaches for the DMP system of AVs and divides these approaches into three distinct categories: scenario-based testing, fault injection testing, and formal verification. Further, scenario-based testing is categorized into fundamental and advanced approaches based on the interaction between road users in generated scenarios. In this paper, six criteria are proposed to compare and evaluate the characteristics of V&V approaches, which could help researchers gain insight into the benefits and limitations of the reviewed approaches and assist with approach choices. Next, the DMP system is broken down into a hierarchy of modules, and the functional requirements of each module are deduced. The suitable approaches are matched to verify and validate each module aiming at their different functional requirements. Finally, the current challenges and future research directions are concluded.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended for Bob, and non-repudiation of receipt evidences destined for Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with a trusted third party (TTP). For the latter ones we discuss the evolution of the TTP's involvement and, among others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters; instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally rather than probabilistically. Because there are no crossover rate and mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than the conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional one and other methods.
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal to develop interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which rise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in specific, in this paper, we enfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidate it based on our comprehensive empirical experience with research in kindergarten settings, both laboratory and real-world contexts, and (iii) provide a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A spatio-temporal decomposition based deep neural network for time series forecasting Spatio-temporal problems arise in a broad range of applications, such as climate science and transportation systems. These problems are challenging because of unique spatial, short-term and long-term patterns, as well as the curse of dimensionality. In this paper, we propose a deep learning framework for spatio-temporal forecasting problems. We explicitly design the neural network architecture for capturing various types of spatial and temporal patterns, and the model is robust to missing data. In a preprocessing step, a time series decomposition method is applied to separately feed short-term, long-term and spatial patterns into different components of the neural network. A fuzzy clustering method finds clusters of neighboring time series residuals, as these contain short-term spatial patterns. The first component of the neural network consists of multi-kernel convolutional layers which are designed to extract short-term features from clusters of time series data. Each convolutional kernel receives a single cluster of input time series. The output of convolutional layers is concatenated by trends and followed by convolutional-LSTM layers to capture long-term spatial patterns. To have a robust forecasting model when faced with missing data, a pretrained denoising autoencoder reconstructs the model’s output in a fine-tuning step. In experimental results, we evaluate the performance of the proposed model for the traffic flow prediction. The results show that the proposed model outperforms baseline and state-of-the-art neural network models.
Knowledge harvesting in the big-data era The proliferation of knowledge-sharing communities such as Wikipedia and the progress in scalable information extraction from Web and text sources have enabled the automatic construction of very large knowledge bases. Endeavors of this kind include projects such as DBpedia, Freebase, KnowItAll, ReadTheWeb, and YAGO. These projects provide automatically constructed knowledge bases of facts about named entities, their semantic classes, and their mutual relationships. They contain millions of entities and hundreds of millions of facts about them. Such world knowledge in turn enables cognitive applications and knowledge-centric services like disambiguating natural-language text, semantic search for entities and relations in Web and enterprise data, and entity-oriented analytics over unstructured contents. Prominent examples of how knowledge bases can be harnessed include the Google Knowledge Graph and the IBM Watson question answering system. This tutorial presents state-of-the-art methods, recent advances, research opportunities, and open challenges along this avenue of knowledge harvesting and its applications. Particular emphasis will be on the twofold role of knowledge bases for big-data analytics: using scalable distributed algorithms for harvesting knowledge from Web and text sources, and leveraging entity-centric knowledge for deeper interpretation of and better intelligence with Big Data.
Reservoir computing approaches to recurrent neural network training Echo State Networks and Liquid State Machines introduced a new paradigm in artificial recurrent neural network (RNN) training, where an RNN (the reservoir) is generated randomly and only a readout is trained. The paradigm, becoming known as reservoir computing, greatly facilitated the practical application of RNNs and outperformed classical fully trained RNNs in many tasks. It has lately become a vivid research field with numerous extensions of the basic idea, including reservoir adaptation, thus broadening the initial paradigm to using different methods for training the reservoir and the readout. This review systematically surveys both current ways of generating/adapting the reservoirs and training different types of readouts. It offers a natural conceptual classification of the techniques, which transcends boundaries of the current “brand-names” of reservoir methods, and thus aims to help in unifying the field and providing the reader with a detailed “map” of it.
Comment on "On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes" Comparison of generative and discriminative classifiers is an ever-lasting topic. As an important contribution to this topic, based on their theoretical and empirical comparisons between the naïve Bayes classifier and linear logistic regression, Ng and Jordan (NIPS 841–848, 2001) claimed that there exist two distinct regimes of performance between the generative and discriminative classifiers with regard to the training-set size. In this paper, our empirical and simulation studies, as a complement of their work, however, suggest that the existence of the two distinct regimes may not be so reliable. In addition, for real world datasets, so far there is no theoretically correct, general criterion for choosing between the discriminative and the generative approaches to classification of an observation x into a class y; the choice depends on the relative confidence we have in the correctness of the specification of either p(y|x) or p(x, y) for the data. This can be to some extent a demonstration of why Efron (J Am Stat Assoc 70(352):892–898, 1975) and O'Neill (J Am Stat Assoc 75(369):154–160, 1980) prefer normal-based linear discriminant analysis (LDA) when no model mis-specification occurs but other empirical studies may prefer linear logistic regression instead. Furthermore, we suggest that pairing of either LDA assuming a common diagonal covariance matrix or the naïve Bayes classifier and linear logistic regression may not be perfect, and hence it may not be reliable for any claim that was derived from the comparison between such an LDA or the naïve Bayes classifier and linear logistic regression to be generalised to all generative and discriminative classifiers.
Dest-ResNet: A Deep Spatiotemporal Residual Network for Hotspot Traffic Speed Prediction. With the ever-increasing urbanization process, the traffic jam has become a common problem in the metropolises around the world, making the traffic speed prediction a crucial and fundamental task. This task is difficult due to the dynamic and intrinsic complexity of the traffic environment in urban cities, yet the emergence of crowd map query data sheds new light on it. In general, a burst of crowd map queries for the same destination in a short duration (called "hotspot'') could lead to traffic congestion. For example, queries of the Capital Gym burst on weekend evenings lead to traffic jams around the gym. However, unleashing the power of crowd map queries is challenging due to the innate spatiotemporal characteristics of the crowd queries. To bridge the gap, this paper firstly discovers hotspots underlying crowd map queries. These discovered hotspots address the spatiotemporal variations. Then Dest-ResNet (Deep spatiotemporal Residual Network) is proposed for hotspot traffic speed prediction. Dest-ResNet is a sequence learning framework that jointly deals with two sequences in different modalities, i.e., the traffic speed sequence and the query sequence. The main idea of Dest-ResNet is to learn to explain and amend the errors caused when the unimodal information is applied individually. In this way, Dest-ResNet addresses the temporal causal correlation between queries and the traffic speed. As a result, Dest-ResNet shows a 30% relative boost over the state-of-the-art methods on real-world datasets from Baidu Map.
Impact of Data Loss for Prediction of Traffic Flow on an Urban Road Using Neural Networks The deployment of intelligent transport systems requires efficient means of assessing the traffic situation. This involves gathering real traffic data from the road network and predicting the evolution of traffic parameters, in many cases based on incomplete or false data from vehicle detectors. Traffic flows in the network follow spatiotemporal patterns and this characteristic is used to suppress the impact of missing or erroneous data. The application of multilayer perceptrons and deep learning networks using autoencoders for the prediction task is evaluated. Prediction sensitivity to false data is estimated using traffic data from an urban traffic network.
Short-Term Traffic Prediction Based on DeepCluster in Large-Scale Road Networks Short-term traffic prediction (STTP) is one of the most critical capabilities in Intelligent Transportation Systems (ITS), which can be used to support driving decisions, alleviate traffic congestion and improve transportation efficiency. However, STTP of large-scale road networks remains challenging due to the difficulties of effectively modeling the diverse traffic patterns by high-dimensional time series. Therefore, this paper proposes a framework that involves a deep clustering method for STTP in large-scale road networks. The deep clustering method is employed to supervise the representation learning in a visualized way from the large unlabeled dataset. More specifically, to fully exploit the traffic periodicity, the raw series is first divided into a number of sub-series for triplet generation. The convolutional neural networks (CNNs) with triplet loss are utilized to extract the features of shape by transforming the series into visual images. The shape-based representations are then used to cluster road segments into groups. Thereafter, a model sharing strategy is further proposed to build recurrent NNs-based predictions through group-based models (GBMs). GBM is built for a type of traffic patterns, instead of one road segment exclusively or all road segments uniformly. Our framework can not only significantly reduce the number of prediction models, but also improve their generalization by virtue of being trained on more diverse examples. Furthermore, the proposed framework is evaluated over a selected road network in Beijing. Experiment results show that the deep clustering method can effectively cluster the road segments and GBM can achieve comparable prediction accuracy against the IBM with fewer prediction models.
Discovering spatio-temporal causal interactions in traffic data streams The detection of outliers in spatio-temporal traffic data is an important research problem in the data mining and knowledge discovery community. However to the best of our knowledge, the discovery of relationships, especially causal interactions, among detected traffic outliers has not been investigated before. In this paper we propose algorithms which construct outlier causality trees based on temporal and spatial properties of detected outliers. Frequent substructures of these causality trees reveal not only recurring interactions among spatio-temporal outliers, but potential flaws in the design of existing traffic networks. The effectiveness and strength of our algorithms are validated by experiments on a very large volume of real taxi trajectories in an urban road network.
A new approach for dynamic fuzzy logic parameter tuning in Ant Colony Optimization and its application in fuzzy control of a mobile robot Central idea is to avoid or slow down full convergence through the dynamic variation of parameters. Performance of different ACO variants was observed to choose one as the basis to the proposed approach. Convergence fuzzy controller with the objective of maintaining diversity to avoid premature convergence was created. Ant Colony Optimization is a population-based meta-heuristic that exploits a form of past performance memory that is inspired by the foraging behavior of real ants. The behavior of the Ant Colony Optimization algorithm is highly dependent on the values defined for its parameters. Adaptation and parameter control are recurring themes in the field of bio-inspired optimization algorithms. The present paper explores a new fuzzy approach for diversity control in Ant Colony Optimization. The main idea is to avoid or slow down full convergence through the dynamic variation of a particular parameter. The performance of different variants of the Ant Colony Optimization algorithm is analyzed to choose one as the basis to the proposed approach. A convergence fuzzy logic controller with the objective of maintaining diversity at some level to avoid premature convergence is created. Encouraging results on several traveling salesman problem instances and its application to the design of fuzzy controllers, in particular the optimization of membership functions for a unicycle mobile robot trajectory control, are presented with the proposed method.
Adaptive Navigation Support Adaptive navigation support is a specific group of technologies that support user navigation in hyperspace, by adapting to the goals, preferences and knowledge of the individual user. These technologies, originally developed in the field of adaptive hypermedia, are becoming increasingly important in several adaptive Web applications, ranging from Web-based adaptive hypermedia to adaptive virtual reality. This chapter provides a brief introduction to adaptive navigation support, reviews major adaptive navigation support technologies and mechanisms, and illustrates these with a range of examples.
Learning to Predict Driver Route and Destination Intent For many people, driving is a routine activity where people drive to the same destinations using the same routes on a regular basis. Many drivers, for example, will drive to and from work along a small set of routes, at about the same time every day of the working week. Similarly, although a person may shop on different days or at different times, they will often visit the same grocery store(s). In this paper, we present a novel approach to predicting driver intent that exploits the predictable nature of everyday driving. Our approach predicts a driver's intended route and destination through the use of a probabilistic model learned from observation of their driving habits. We show that by using a low-cost GPS sensor and a map database, it is possible to build a hidden Markov model (HMM) of the routes and destinations used by the driver. Furthermore, we show that this model can be used to make accurate predictions of the driver's destination and route through on-line observation of their GPS position during the trip. We present a thorough evaluation of our approach using a corpus of almost a month of real, everyday driving. Our results demonstrate the effectiveness of the approach, achieving approximately 98% accuracy in most cases. Such high performance suggests that the method can be harnessed for improved safety monitoring, route planning taking into account traffic density, and better trip duration prediction
A Minimal Set Of Coordinates For Describing Humanoid Shoulder Motion The kinematics of the anatomical shoulder are analysed and modelled as a parallel mechanism similar to a Stewart platform. A new method is proposed to describe the shoulder kinematics with minimal coordinates and solve the indeterminacy. The minimal coordinates are defined from bony landmarks and the scapulothoracic kinematic constraints. Independent from one another, they uniquely characterise the shoulder motion. A humanoid mechanism is then proposed with identical kinematic properties. It is then shown how minimal coordinates can be obtained for this mechanism and how the coordinates simplify both the motion-planning task and trajectory-tracking control. Lastly, the coordinates are also shown to have an application in the field of biomechanics where they can be used to model the scapulohumeral rhythm.
Massive MIMO Antenna Selection: Switching Architectures, Capacity Bounds, and Optimal Antenna Selection Algorithms. Antenna selection is a multiple-input multiple-output (MIMO) technology, which uses radio frequency (RF) switches to select a good subset of antennas. Antenna selection can alleviate the requirement on the number of RF transceivers, thus being attractive for massive MIMO systems. In massive MIMO antenna selection systems, RF switching architectures need to be carefully considered. In this paper, w...
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0.028571
0
0
0
0
0
0
A hybrid GSA-GA algorithm for constrained optimization problems. In this paper, a new hybrid GSA-GA algorithm is presented for constrained nonlinear optimization problems with mixed variables. In it, the solution is first tuned with the gravitational search algorithm and then each solution is upgraded with genetic operators such as selection, crossover, and mutation. The performance of the algorithm is tested on several benchmark design problems with different natures of objectives, constraints and decision variables. The results obtained by the proposed approach are compared with the results of several existing approaches and found to be very competitive. Finally, the obtained results are verified with statistical testing.
A study on the use of statistical tests for experimentation with neural networks: Analysis of parametric test conditions and non-parametric tests In this paper, we focus on the experimental analysis on the performance in artificial neural networks with the use of statistical tests on the classification task. Particularly, we have studied whether the sample of results from multiple trials obtained by conventional artificial neural networks and support vector machines checks the necessary conditions for being analyzed through parametrical tests. The study is conducted by considering three possibilities on classification experiments: random variation in the selection of test data, the selection of training data and internal randomness in the learning algorithm. The results obtained state that the fulfillment of these conditions are problem-dependent and indefinite, which justifies the need of using non-parametric statistics in the experimental analysis.
A Multi-Layered Immune System For Graph Planarization Problem This paper presents a new multi-layered artificial immune system architecture using the ideas generated from the biological immune system for solving combinatorial optimization problems. The proposed methodology is composed of five layers. After expressing the problem as a suitable representation in the first layer, the search space and the features of the problem are estimated and extracted in the second and third layers, respectively. Through taking advantage of the minimized search space from estimation and the heuristic information from extraction, the antibodies (or solutions) are evolved in the fourth layer and finally the fittest antibody is exported. In order to demonstrate the efficiency of the proposed system, the graph planarization problem is tested. Simulation results based on several benchmark instances show that the proposed algorithm performs better than traditional algorithms.
From evolutionary computation to the evolution of things Evolution has provided a source of inspiration for algorithm designers since the birth of computers. The resulting field, evolutionary computation, has been successful in solving engineering tasks ranging in outlook from the molecular to the astronomical. Today, the field is entering a new phase as evolutionary algorithms that take place in hardware are developed, opening up new avenues towards autonomous machines that can adapt to their environment. We discuss how evolutionary computation compares with natural evolution and what its benefits are relative to other computing approaches, and we introduce the emerging area of artificial evolution in physical systems.
Improving Dendritic Neuron Model With Dynamic Scale-Free Network-Based Differential Evolution Some recent research reports that a dendritic neuron model (DNM) can achieve better performance than traditional artificial neuron networks (ANNs) on classification, prediction, and other problems when its parameters are well-tuned by a learning algorithm. However, the back-propagation algorithm (BP), as a mostly used learning algorithm, intrinsically suffers from defects of slow convergence and e...
Recent Advances in Evolutionary Computation Evolutionary computation has experienced a tremendous growth in the last decade in both theoretical analyses and industrial applications. Its scope has evolved beyond its original meaning of “biological evolution” toward a wide variety of nature inspired computational algorithms and techniques, including evolutionary, neural, ecological, social and economical computation, etc., in a unified framework. Many research topics in evolutionary computation nowadays are not necessarily “evolutionary”. This paper provides an overview of some recent advances in evolutionary computation that have been made in CERCIA at the University of Birmingham, UK. It covers a wide range of topics in optimization, learning and design using evolutionary approaches and techniques, and theoretical results in the computational time complexity of evolutionary algorithms. Some issues related to future development of evolutionary computation are also discussed.
Evolutionary computation: comments on the history and current state Evolutionary computation has started to receive significant attention during the last decade, although the origins can be traced back to the late 1950's. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) (with links to genetic programming (GP) and classifier systems (CS)), evolution strategies (ES), and evolutionary programming (EP) by analysis and comparison of their most important constituents (i.e. representations, variation operators, reproduction, and selection mechanism). Finally, we give a brief overview on the manifold of application domains, although this necessarily must remain incomplete
Robust Indoor Positioning Provided by Real-Time RSSI Values in Unmodified WLAN Networks The positioning methods based on received signal strength (RSS) measurements, link the RSS values to the position of the mobile station(MS) to be located. Their accuracy depends on the suitability of the propagation models used for the actual propagation conditions. In indoor wireless networks, these propagation conditions are very difficult to predict due to the unwieldy and dynamic nature of the RSS. In this paper, we present a novel method which dynamically estimates the propagation models that best fit the propagation environments, by using only RSS measurements obtained in real time. This method is based on maximizing compatibility of the MS to access points (AP) distance estimates. Once the propagation models are estimated in real time, it is possible to accurately determine the distance between the MS and each AP. By means of these distance estimates, the location of the MS can be obtained by trilateration. The method proposed coupled with simulations and measurements in a real indoor environment, demonstrates its feasibility and suitability, since it outperforms conventional RSS-based indoor location methods without using any radio map information nor a calibration stage.
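A minimal sketch of the general RSS-ranging-plus-trilateration idea, assuming a fixed log-distance path-loss model with made-up parameters rather than the paper's real-time model estimation:

```python
# Illustrative sketch only: convert RSS to distance with a log-distance
# path-loss model, then trilaterate by linearising the circle equations.
import numpy as np

def rss_to_distance(rss, p0=-40.0, n=3.0):
    # p0: RSS at 1 m, n: path-loss exponent (assumed, not calibrated)
    return 10 ** ((p0 - rss) / (10 * n))

def trilaterate(ap_positions, distances):
    ap = np.asarray(ap_positions, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtract the last circle equation from the others -> linear system in x
    A = 2 * (ap[:-1] - ap[-1])
    b = (d[-1] ** 2 - d[:-1] ** 2
         + np.sum(ap[:-1] ** 2, axis=1) - np.sum(ap[-1] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

aps = [(0, 0), (10, 0), (0, 10)]
true_pos = np.array([3.0, 4.0])
true_d = np.array([np.linalg.norm(true_pos - np.array(a)) for a in aps])
rss = -40.0 - 10 * 3.0 * np.log10(true_d)      # simulated measurements
print(trilaterate(aps, rss_to_distance(rss)))  # ~ [3, 4]
```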
Energy-Optimized Partial Computation Offloading in Mobile-Edge Computing With Genetic Simulated-Annealing-Based Particle Swarm Optimization Smart mobile devices (SMDs) can meet users' high expectations by executing computational intensive applications but they only have limited resources, including CPU, memory, battery power, and wireless medium. To tackle this limitation, partial computation offloading can be used as a promising method to schedule some tasks of applications from resource-limited SMDs to high-performance edge servers. However, it brings communication overhead issues caused by limited bandwidth and inevitably increases the latency of tasks offloaded to edge servers. Therefore, it is highly challenging to achieve a balance between high-resource consumption in SMDs and high communication cost for providing energy-efficient and latency-low services to users. This work proposes a partial computation offloading method to minimize the total energy consumed by SMDs and edge servers by jointly optimizing the offloading ratio of tasks, CPU speeds of SMDs, allocated bandwidth of available channels, and transmission power of each SMD in each time slot. It jointly considers the execution time of tasks performed in SMDs and edge servers, and transmission time of data. It also jointly considers latency limits, CPU speeds, transmission power limits, available energy of SMDs, and the maximum number of CPU cycles and memories in edge servers. Considering these factors, a nonlinear constrained optimization problem is formulated and solved by a novel hybrid metaheuristic algorithm named genetic simulated annealing-based particle swarm optimization (GSP) to produce a close-to-optimal solution. GSP achieves joint optimization of computation offloading between a cloud data center and the edge, and resource allocation in the data center. Real-life data-based experimental results prove that it achieves lower energy consumption in less convergence time than its three typical peers.
Computer intrusion detection through EWMA for autocorrelated and uncorrelated data Reliability and quality of service from information systems has been threatened by cyber intrusions. To protect information systems from intrusions and thus assure reliability and quality of service, it is highly desirable to develop techniques that detect intrusions. Many intrusions manifest in anomalous changes in intensity of events occurring in information systems. In this study, we apply, tes...
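A generic EWMA change-detection sketch in the spirit of the approach above; the smoothing weight, control-limit multiplier, and baseline window are assumed values, not the paper's settings:

```python
# Sketch: smooth an event-intensity series with an EWMA and flag points
# whose smoothed value leaves the steady-state control limits.
import numpy as np

def ewma_alarms(x, lam=0.2, L=3.0):
    x = np.asarray(x, dtype=float)
    mu, sigma = x[:50].mean(), x[:50].std()     # baseline from "normal" data
    limit = L * sigma * np.sqrt(lam / (2 - lam))  # steady-state EWMA limit
    z, alarms = mu, []
    for t, xt in enumerate(x):
        z = lam * xt + (1 - lam) * z            # EWMA recursion
        if abs(z - mu) > limit:
            alarms.append(t)
    return alarms

rng = np.random.default_rng(0)
normal = rng.normal(10, 1, 200)
attack = rng.normal(13, 1, 30)                  # shifted intensity
print(ewma_alarms(np.concatenate([normal, attack])))  # alarms near t >= 200
```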
Teaching-Learning-Based Optimization: An optimization method for continuous non-linear large scale problems An efficient optimization method called 'Teaching-Learning-Based Optimization (TLBO)' is proposed in this paper for large scale non-linear optimization problems for finding the global solutions. The proposed method is based on the effect of the influence of a teacher on the output of learners in a class. The basic philosophy of the method is explained in detail. The effectiveness of the method is tested on many benchmark problems with different characteristics and the results are compared with other population based methods.
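A compact sketch of the standard TLBO teacher and learner phases on a toy sphere function; population size, iteration count, and bounds are arbitrary choices, not the paper's benchmark settings:

```python
# TLBO sketch: teacher phase moves learners toward the best solution and
# away from the mean; learner phase lets pairs of learners interact.
import numpy as np

def tlbo(f, dim=5, pop=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    fit = np.array([f(x) for x in X])
    for _ in range(iters):
        # Teacher phase
        teacher = X[fit.argmin()]
        TF = rng.integers(1, 3)                 # teaching factor, 1 or 2
        Xn = np.clip(X + rng.random((pop, dim)) * (teacher - TF * X.mean(0)),
                     lo, hi)
        fn = np.array([f(x) for x in Xn])
        better = fn < fit
        X[better], fit[better] = Xn[better], fn[better]
        # Learner phase
        for i in range(pop):
            j = rng.integers(pop)
            if j == i:
                continue
            step = (X[i] - X[j]) if fit[i] < fit[j] else (X[j] - X[i])
            xi = np.clip(X[i] + rng.random(dim) * step, lo, hi)
            if f(xi) < fit[i]:
                X[i], fit[i] = xi, f(xi)
    return X[fit.argmin()], fit.min()

best_x, best_f = tlbo(lambda x: np.sum(x ** 2))
print(best_f)   # should be close to 0
```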
Understanding Taxi Service Strategies From Taxi GPS Traces Taxi service strategies, as the crowd intelligence of massive taxi drivers, are hidden in their historical time-stamped GPS traces. Mining GPS traces to understand the service strategies of skilled taxi drivers can benefit the drivers themselves, passengers, and city planners in a number of ways. This paper intends to uncover the efficient and inefficient taxi service strategies based on a large-scale GPS historical database of approximately 7600 taxis over one year in a city in China. First, we separate the GPS traces of individual taxi drivers and link them with the revenue generated. Second, we investigate the taxi service strategies from three perspectives, namely, passenger-searching strategies, passenger-delivery strategies, and service-region preference. Finally, we represent the taxi service strategies with a feature matrix and evaluate the correlation between service strategies and revenue, informing which strategies are efficient or inefficient. We predict the revenue of taxi drivers based on their strategies and achieve a prediction residual as low as 2.35 RMB/h, which demonstrates that the extracted taxi service strategies with our proposed approach well characterize the driving behavior and performance of taxi drivers.
Adaptive fuzzy tracking control for switched uncertain strict-feedback nonlinear systems. •Adaptive tracking control for switched strict-feedback nonlinear systems is proposed.•The generalized fuzzy hyperbolic model is used to approximate nonlinear functions.•The designed controller has fewer design parameters comparing with existing methods.
Energy harvesting algorithm considering max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with poor energy always have bad effect on the data rate or max flow. These nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we assume an energy harvesting WSNs environment to investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy harvesting WSNs environments, sensor nodes can obtain energy replenishment by using MCs or collecting energy from nature by themselves. In our research, we use MCs to improve the energy of the sensor nodes by performing multiple rounds of unified scheduling, and finally achieve the purpose of increasing the max flow at sinks. Firstly, we model this problem as a Linear Programming (LP) to search the max flow in a round of charging scheduling and prove that the problem is NP-hard. In order to solve the problem, we propose a heuristic approach: deploying MCs in units of paths with the lowest energy node priority. To reduce the energy consumption of MCs and increase the charging efficiency, we also take the optimization of MCs’ moving distance into our consideration. Finally, we extend the method to multiple rounds of scheduling called BottleNeck. Simulation results show that Bottleneck performs well at increasing max flow.
scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0.033333, 0, 0, 0, 0, 0, 0, 0
Predicting the Driver's Focus of Attention: the DR(eye)VE Project. In this work we aim to predict the driver's focus of attention. The goal is to estimate what a person would pay attention to while driving, and which part of the scene around the vehicle is more critical for the task. To this end we propose a new computer vision model based on a multi-branch deep architecture that integrates three sources of information: raw video, motion and scene semantics. We a...
Driver Gaze Zone Estimation Using Convolutional Neural Networks: A General Framework and Ablative Analysis Driver gaze has been shown to be an excellent surrogate for driver attention in intelligent vehicles. With the recent surge of highly autonomous vehicles, driver gaze can be useful for determining the handoff time to a human driver. While there has been significant improvement in personalized driver gaze zone estimation systems, a generalized system which is invariant to different subjects, perspe...
Analysing user physiological responses for affective video summarisation. Video summarisation techniques aim to abstract the most significant content from a video stream. This is typically achieved by processing low-level image, audio and text features which are still quite disparate from the high-level semantics that end users identify with (the ‘semantic gap’). Physiological responses are potentially rich indicators of memorable or emotionally engaging video content for a given user. Consequently, we investigate whether they may serve as a suitable basis for a video summarisation technique by analysing a range of user physiological response measures, specifically electro-dermal response (EDR), respiration amplitude (RA), respiration rate (RR), blood volume pulse (BVP) and heart rate (HR), in response to a range of video content in a variety of genres including horror, comedy, drama, sci-fi and action. We present an analysis framework for processing the user responses to specific sub-segments within a video stream based on percent rank value normalisation. The application of the analysis framework reveals that users respond significantly to the most entertaining video sub-segments in a range of content domains. Specifically, horror content seems to elicit significant EDR, RA, RR and BVP responses, and comedy content elicits comparatively lower levels of EDR, but does seem to elicit significant RA, RR, BVP and HR responses. Drama content seems to elicit less significant physiological responses in general, and both sci-fi and action content seem to elicit significant EDR responses. We discuss the implications this may have for future affective video summarisation approaches.
Speech emotion recognition approaches in human computer interaction Speech Emotion Recognition (SER) represents one of the emerging fields in human-computer interaction. Quality of the human-computer interface that mimics human speech emotions relies heavily on the types of features used and also on the classifier employed for recognition. The main purpose of this paper is to present a wide range of features employed for speech emotion recognition and the acoustic characteristics of those features. Also in this paper, we analyze the performance in terms of some important parameters such as: precision, recall, F-measure and recognition rate of the features using two of the commonly used emotional speech databases namely Berlin emotional database and Danish emotional database. Emotional speech recognition is being applied in modern human-computer interfaces and the overview of 10 interesting applications is also presented in this paper to illustrate the importance of this technique.
Camera-based drowsiness reference for driver state classification under real driving conditions Experts assume that accidents caused by drowsiness are significantly under-reported in police crash investigations (1-3%). They estimate that about 24-33% of the severe accidents are related to drowsiness. In order to develop warning systems that detect reduced vigilance based on the driving behavior, a reliable and accurate drowsiness reference is needed. Studies have shown that measures of the driver's eyes are capable to detect drowsiness under simulator or experiment conditions. In this study, the performance of the latest eye tracking based in-vehicle fatigue prediction measures are evaluated. These measures are assessed statistically and by a classification method based on a large dataset of 90 hours of real road drives. The results show that eye-tracking drowsiness detection works well for some drivers as long as the blinks detection works properly. Even with some proposed improvements, however, there are still problems with bad light conditions and for persons wearing glasses. As a summary, the camera based sleepiness measures provide a valuable contribution for a drowsiness reference, but are not reliable enough to be the only reference.
Fully Automated Driving: Impact of Trust and Practice on Manual Control Recovery. Objective: An experiment was performed in a driving simulator to investigate the impacts of practice, trust, and interaction on manual control recovery (MCR) when employing fully automated driving (FAD). Background: To increase the use of partially or highly automated driving efficiency and to improve safety, some studies have addressed trust in driving automation and training, but few studies have focused on FAD. FAD is an autonomous system that has full control of a vehicle without any need for intervention by the driver. Method: A total of 69 drivers with a valid license practiced with FAD. They were distributed evenly across two conditions: simple practice and elaborate practice. Results: When examining emergency MCR, a correlation was found between trust and reaction time in the simple practice group (i.e., higher trust meant a longer reaction time), but not in the elaborate practice group. This result indicated that to mitigate the negative impact of overtrust on reaction time, more appropriate practice may be needed. Conclusions: Drivers should be trained in how the automated device works so as to improve MCR performance in case of an emergency. Application: The practice format used in this study could be used for the first interaction with an FAD car when acquiring such a vehicle.
Visual-Manual Distraction Detection Using Driving Performance Indicators With Naturalistic Driving Data. This paper investigates the problem of driver distraction detection using driving performance indicators from onboard kinematic measurements. First, naturalistic driving data from the integrated vehicle-based safety system program are processed, and cabin camera data are manually inspected to determine the driver's state (i.e., distracted or attentive). Second, existing driving performance metrics...
Pre-Training With Asynchronous Supervised Learning For Reinforcement Learning Based Autonomous Driving Rule-based autonomous driving systems may suffer from increased complexity with large-scale intercoupled rules, so many researchers are exploring learning-based approaches. Reinforcement learning (RL) has been applied in designing autonomous driving systems because of its outstanding performance on a wide variety of sequential control problems. However, poor initial performance is a major challenge to the practical implementation of an RL-based autonomous driving system. RL training requires extensive training data before the model achieves reasonable performance, making an RL-based model inapplicable in a real-world setting, particularly when data are expensive. We propose an asynchronous supervised learning (ASL) method for the RL-based end-to-end autonomous driving model to address the problem of poor initial performance before training this RL-based model in real-world settings. Specifically, prior knowledge is introduced in the ASL pre-training stage by asynchronously executing multiple supervised learning processes in parallel, on multiple driving demonstration data sets. After pre-training, the model is deployed on a real vehicle to be further trained by RL to adapt to the real environment and continuously break the performance limit. The presented pre-training method is evaluated on the race car simulator, TORCS (The Open Racing Car Simulator), to verify that it can be sufficiently reliable in improving the initial performance and convergence speed of an end-to-end autonomous driving model in the RL training stage. In addition, a real-vehicle verification system is built to verify the feasibility of the proposed pre-training method in a real-vehicle deployment. Simulations results show that using some demonstrations during a supervised pre-training stage allows significant improvements in initial performance and convergence speed in the RL training stage.
Explanations and Expectations: Trust Building in Automated Vehicles. Trust is a vital determinant of acceptance of automated vehicles (AVs) and expectations and explanations are often at the heart of any trusting relationship. Once expectations have been violated, explanations are needed to mitigate the damage. This study introduces the importance of timing of explanations in promoting trust in AVs. We present the preliminary results of a within-subjects experimental study involving eight participants exposed to four AV driving conditions (i.e. 32 data points). Preliminary results show a pattern that suggests that explanations provided before the AV takes actions promote more trust than explanations provided afterward.
A comprehensive survey on vehicular Ad Hoc network Vehicular ad hoc networks (VANETs) are classified as an application of mobile ad hoc network (MANET) that has the potential in improving road safety and in providing travellers comfort. Recently VANETs have emerged to turn the attention of researchers in the field of wireless and mobile communications, they differ from MANET by their architecture, challenges, characteristics and applications. In this paper we present aspects related to this field to help researchers and developers to understand and distinguish the main features surrounding VANET in one solid document, without the need to go through other relevant papers and articles starting from VANET architecture and ending up with the most appropriate simulation tools to simulate VANET protocols and applications.
Stabilizing a linear system by switching control with dwell time The use of networks in control systems to connect controllers and sensors/actuators has become common practice in many applications. This new technology has also posed a theoretical control problem of how to use the limited data rate of the network effectively. We consider a system where its sensor and actuator are connected by a finite data rate channel. A design method to stabilize a continuous-time, linear plant using a switching controller is proposed. In particular, to prevent the actuator from fast switching, or chattering, which can not only increase the necessary data rate but also damage the system, we employ a dwell-time switching scheme. It is shown that a systematic partition of the state-space enables us to reduce the complexity of the design problem
Ergonomics of exoskeletons: Objective performance metrics In this paper it is shown how variation of the kinematic structure of an exoskeleton and variation of its fixation strength on the human limb influences objective task performance metrics, such as interface load, tracking error and voluntary range of motion in a signal tracking experiment.
Clickers In The Flipped Classroom: Bring Your Own Device (Byod) To Promote Student Learning Flipped classrooms continue to grow in popularity across all levels of education. Following this pedagogical trend, the present study aimed to enhance the face-to-face instruction in flipped classrooms with the use of clickers. A game-like clicker application was implemented through a bring your own device (BYOD) model to gamify classroom dynamics in the spirit of question-and-answer competitions. A series of flipped learning lessons were created for the study, with clickers integrated into question-and-answer activities associated with each of the lessons as formative assessments to assist students in the learning of English as a foreign language. In this quasi-experimental research, the data were gathered using a summative assessment, a perception survey, and individual interviews. The collected data were then analyzed to compare the students' flipped learning experiences, with or without clicker use. The results indicated that the gamified use of clickers had positive influences on student learning, with regard to their performance, perceptions, and preferences. This study thus suggests that the emerging generation of clicker technology allows for a cost-effective BYOD integration model in flipped classrooms, through which it is possible to seamlessly bridge pre-class and in-class activities and to effectively promote student learning.
AoI-Inspired Collaborative Information Collection for AUV-Assisted Internet of Underwater Things In order to better explore the ocean, autonomous underwater vehicles (AUVs) have been widely applied to facilitate the information collection. However, considering the extremely large-scale deployment of sensor nodes in the Internet of Underwater Things (IoUT), a homogeneous AUV-enabled information collection system cannot support timely and reliable information collection considering the time-var...
scores: 1.11, 0.11, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.05, 0.003333, 0, 0, 0, 0
Video Moment Retrieval With Cross-Modal Neural Architecture Search The task of video moment retrieval (VMR) is to retrieve the specific video moment from an untrimmed video, according to a textual query. It is a challenging task that requires effective modeling of complex cross-modal matching relationship. Recent efforts primarily model the cross-modal interactions by hand-crafted network architectures. Despite their effectiveness, they rely heavily on expert experience to select architectures and have numerous hyperparameters that need to be carefully tuned, which significantly limit their applications in real-world scenarios. How to design flexible architectures for modeling cross-modal interactions with less manual effort is crucial for the task of VMR but has received limited attention so far. To address this issue, we present a novel VMR approach that automatically searches for an optimal architecture to learn cross-modal matching relationship. Specifically, we develop a cross-modal architecture searching method. It first searches for repeatable cell network architectures based on a directed acyclic graph, which performs operation sampling over a customized task-specific operation set. Then, we adaptively modulate the edge importance in the graph by a query-aware attention network, which performs edge sampling softly in the searched cell. Different from existing neural architecture search methods, our approach can effectively exploit the query information to reach query-conditioned architectures for modeling cross modal matching. Extensive experiments on three benchmark datasets show that our approach can not only significantly outperform the state-of-the-art methods but also run more efficiently and robustly than manually crafted network architectures.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
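A rough single-sentence, single-reference BLEU sketch showing the clipped n-gram precisions and brevity penalty; production implementations work at corpus level and add smoothing:

```python
# Sketch of sentence-level BLEU: geometric mean of clipped n-gram
# precisions (n = 1..4) scaled by the brevity penalty.
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    c, r = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        c_ngrams = Counter(tuple(c[i:i + n]) for i in range(len(c) - n + 1))
        r_ngrams = Counter(tuple(r[i:i + n]) for i in range(len(r) - n + 1))
        clipped = sum(min(cnt, r_ngrams[g]) for g, cnt in c_ngrams.items())
        total = max(sum(c_ngrams.values()), 1)
        log_precisions.append(math.log(max(clipped, 1e-9) / total))
    bp = 1.0 if len(c) > len(r) else math.exp(1 - len(r) / max(len(c), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

print(bleu("the cat sat on the mat", "the cat is on the mat"))
```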
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
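A tiny numpy sketch of the bidirectional idea only (untrained random weights): one pass over the sequence forward, one backward, hidden states concatenated per time step:

```python
# Sketch of a bidirectional RNN forward pass; weights are random, so this
# only illustrates the structure, not a trained model.
import numpy as np

def rnn_pass(x, Wx, Wh, reverse=False):
    T, _ = x.shape
    h = np.zeros(Wh.shape[0])
    out = np.zeros((T, Wh.shape[0]))
    steps = range(T - 1, -1, -1) if reverse else range(T)
    for t in steps:
        h = np.tanh(x[t] @ Wx + h @ Wh)   # simple tanh recurrence
        out[t] = h
    return out

rng = np.random.default_rng(0)
T, d_in, d_h = 7, 3, 4
x = rng.normal(size=(T, d_in))
fwd = rnn_pass(x, rng.normal(size=(d_in, d_h)), rng.normal(size=(d_h, d_h)))
bwd = rnn_pass(x, rng.normal(size=(d_in, d_h)), rng.normal(size=(d_h, d_h)),
               reverse=True)
h_bi = np.concatenate([fwd, bwd], axis=1)   # shape (T, 2 * d_h)
print(h_bi.shape)
```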
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended to Bob, and non-repudiation of receipt evidences destined to Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with trusted third party (TTP). For the later ones we discuss the evolution of the TTP's involvement and, between others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probability. Because there are no crossover rate and mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than the conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results over the conventional one and other methods.
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Accurate Self-Localization in RFID Tag Information Grids Using FIR Filtering Grid navigation spaces nested with the radio-frequency identification (RFID) tags are promising for industrial and other needs, because each tag can deliver information about a local two-dimensional or three-dimensional surrounding. The approach, however, requires high accuracy in vehicle self-localization. Otherwise, errors may lead to collisions; possibly even fatal. We propose a new extended finite impulse response (EFIR) filtering algorithm and show that it meets this need. The EFIR filter requires an optimal averaging interval, but does not involve the noise statistics which are often not well known to the engineer. It is more accurate than the extended Kalman filter (EKF) under real operation conditions and its iterative algorithm has the Kalman form. Better performance of the proposed EFIR filter is demonstrated based on extensive simulations in a comparison to EKF, which is widely used in RFID tag grids. We also show that errors in noise covariances may provoke divergence in EKF, whereas the EFIR filter remains stable and is thus more robust.
Constrained Kalman filtering for indoor localization of transport vehicles using floor-installed HF RFID transponders Localization of transport vehicles is an important issue for many intralogistics applications. The paper presents an inexpensive solution for indoor localization of vehicles. Global localization is realized by detection of RFID transponders, which are integrated in the floor. The paper presents a novel algorithm for fusing RFID readings with odometry using Constraint Kalman filtering. The paper presents experimental results with a Mecanum based omnidirectional vehicle on a NaviFloor® installation, which includes passive HF RFID transponders. The experiments show that the proposed Constraint Kalman filter provides a similar localization accuracy compared to a Particle filter but with much lower computational expense.
Problem of dynamic change of tags location in anticollision RFID systems Presently the necessity of building anticollision RFID systems with dynamic location change of tags appear more often. Such solutions are used in identification of moving cars, trains (automatic identification of vehicles – AVI processes) as well as moving parts and elements in industry, commerce, science and medicine (internet of things). In the paper there were presented operation stages in the RFID anticollision system necessary to communicate with groups of tags entering and leaving read/write device interrogation zone and communication phases in conditions of dynamic location change of tags. The mentioned aspects influence RFID system reliability, which is characterized by the efficiency coefficient and the identification probability of objects in specific interrogation zone. The communication conditions of correct operation of multiple RFID system are crucial for efficient exchange of data with all tags during their dynamic location changes. Presented problem will be the base to specify new application tag parameters (such as maximum speed of tag motion) and synthesis of interrogation zone required for concrete anticollision RFID applications with dynamic location change of tags.
Robot Localization via Passive UHF-RFID Technology: State-of-the-Art and Challenges This paper presents a state-of-the-art analysis on the current methods for robot localization based on the passive UHF-RFID technology. The state-of-the-art analysis describes the main features and challenges of several localization methods. Then, a first experimental analysis related to a novel phase-based robot localization method is presented. The robot on-board reader collects phase data from a set of passive reference tags during its motion, so resembling to a synthetic array. Then, the phase data are combined with information acquired by low-cost kinematic sensors, through a Sensor Fusion approach. The experimental results show that centimetre order localization errors can be achieved in a typical office indoor scenario by employing a few reference tags.
An RFID-Based Mobile Robot Localization Method Combining Phase Difference and Readability A novel radio frequency identification (RFID)-based mobile robot global localization method combining two kinds of RFID signal information, i.e., phase difference and readability, is proposed. Specifically, a phase difference model and a classification logic strategy based on readability are built and integrated into a particle filter localization algorithm. Compared with existing RFID localizatio...
Image quality assessment: from error visibility to structural similarity. Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
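A single-window ("global") SSIM sketch using the standard formula; the full index is computed over local sliding windows and then averaged:

```python
# Global SSIM between two images, following the usual luminance/contrast/
# structure formulation with the standard stabilising constants.
import numpy as np

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    x, y = np.asarray(x, float).ravel(), np.asarray(y, float).ravel()
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = np.random.default_rng(0).integers(0, 256, (32, 32))
noisy = np.clip(img + np.random.default_rng(1).normal(0, 10, (32, 32)), 0, 255)
print(ssim_global(img, img))     # 1.0 for identical images
print(ssim_global(img, noisy))   # < 1.0
```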
Robust Indoor Positioning Provided by Real-Time RSSI Values in Unmodified WLAN Networks The positioning methods based on received signal strength (RSS) measurements, link the RSS values to the position of the mobile station(MS) to be located. Their accuracy depends on the suitability of the propagation models used for the actual propagation conditions. In indoor wireless networks, these propagation conditions are very difficult to predict due to the unwieldy and dynamic nature of the RSS. In this paper, we present a novel method which dynamically estimates the propagation models that best fit the propagation environments, by using only RSS measurements obtained in real time. This method is based on maximizing compatibility of the MS to access points (AP) distance estimates. Once the propagation models are estimated in real time, it is possible to accurately determine the distance between the MS and each AP. By means of these distance estimates, the location of the MS can be obtained by trilateration. The method proposed coupled with simulations and measurements in a real indoor environment, demonstrates its feasibility and suitability, since it outperforms conventional RSS-based indoor location methods without using any radio map information nor a calibration stage.
Optimization Of Radio And Computational Resources For Energy Efficiency In Latency-Constrained Application Offloading Providing femto access points (FAPs) with computational capabilities will allow (either total or partial) offloading of highly demanding applications from smartphones to the so-called femto-cloud. Such offloading promises to be beneficial in terms of battery savings at the mobile terminal (MT) and/or in latency reduction in the execution of applications. However, for this promise to become a reality, the energy and/or the time required for the communication process must be compensated by the energy and/or the time savings that result from the remote computation at the FAPs. For this problem, we provide in this paper a framework for the joint optimization of the radio and computational resource usage exploiting the tradeoff between energy consumption and latency. Multiple antennas are assumed to be available at the MT and the serving FAP. As a result of the optimization, the optimal communication strategy (e.g., transmission power, rate, and precoder) is obtained, as well as the optimal distribution of the computational load between the handset and the serving FAP. This paper also establishes the conditions under which total or no offloading is optimal, determines which is the minimum affordable latency in the execution of the application, and analyzes, as a particular case, the minimization of the total consumed energy without latency constraints.
Integrating structured biological data by Kernel Maximum Mean Discrepancy Motivation: Many problems in data integration in bioinformatics can be posed as one common question: Are two sets of observations generated by the same distribution? We propose a kernel-based statistical test for this problem, based on the fact that two distributions are different if and only if there exists at least one function having different expectation on the two distributions. Consequently we use the maximum discrepancy between function means as the basis of a test statistic. The Maximum Mean Discrepancy (MMD) can take advantage of the kernel trick, which allows us to apply it not only to vectors, but strings, sequences, graphs, and other common structured data types arising in molecular biology. Results: We study the practical feasibility of an MMD-based test on three central data integration tasks: Testing cross-platform comparability of microarray data, cancer diagnosis, and data-content based schema matching for two different protein function classification schemas. In all of these experiments, including high-dimensional ones, MMD is very accurate in finding samples that were generated from the same distribution, and outperforms its best competitors. Conclusions: We have defined a novel statistical test of whether two samples are from the same distribution, compatible with both multivariate and structured data, that is fast, easy to implement, and works well, as confirmed by our experiments. Availability: Contact: kb@dbs.ifi.lmu.de
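A sketch of the biased MMD estimate with an RBF kernel on vector data; for strings, sequences, or graphs the kernel function below would be replaced by an appropriate structured kernel:

```python
# Biased MMD^2 estimate: mean within-sample kernel values minus twice the
# mean cross-sample kernel value.
import numpy as np

def rbf(a, b, gamma=0.5):
    d2 = np.sum(a ** 2, 1)[:, None] + np.sum(b ** 2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=0.5):
    return rbf(X, X, gamma).mean() + rbf(Y, Y, gamma).mean() \
           - 2 * rbf(X, Y, gamma).mean()

rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
diff = mmd2(rng.normal(0, 1, (200, 2)), rng.normal(1, 1, (200, 2)))
print(same, diff)   # 'diff' should be clearly larger
```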
Noninterference for a Practical DIFC-Based Operating System The Flume system is an implementation of decentralized information flow control (DIFC) at the operating system level. Prior work has shown Flume can be implemented as a practical extension to the Linux operating system, allowing real Web applications to achieve useful security guarantees. However, the question remains if the Flume system is actually secure. This paper compares Flume with other recent DIFC systems like Asbestos, arguing that the latter is inherently susceptible to certain wide-bandwidth covert channels, and proving their absence in Flume by means of a noninterference proof in the communicating sequential processes formalism.
Lower Extremity Exoskeletons and Active Orthoses: Challenges and State-of-the-Art In the nearly six decades since researchers began to explore methods of creating them, exoskeletons have progressed from the stuff of science fiction to nearly commercialized products. While there are still many challenges associated with exoskeleton development that have yet to be perfected, the advances in the field have been enormous. In this paper, we review the history and discuss the state-of-the-art of lower limb exoskeletons and active orthoses. We provide a design overview of hardware, actuation, sensory, and control systems for most of the devices that have been described in the literature, and end with a discussion of the major advances that have been made and hurdles yet to be overcome.
Magnetic, Acceleration Fields and Gyroscope Quaternion (MAGYQ)-based attitude estimation with smartphone sensors for indoor pedestrian navigation. The dependence of proposed pedestrian navigation solutions on a dedicated infrastructure is a limiting factor to the deployment of location based services. Consequently self-contained Pedestrian Dead-Reckoning (PDR) approaches are gaining interest for autonomous navigation. Even if the quality of low cost inertial sensors and magnetometers has strongly improved, processing noisy sensor signals combined with high hand dynamics remains a challenge. Estimating accurate attitude angles for achieving long term positioning accuracy is targeted in this work. A new Magnetic, Acceleration fields and GYroscope Quaternion (MAGYQ)-based attitude angles estimation filter is proposed and demonstrated with handheld sensors. It benefits from a gyroscope signal modelling in the quaternion set and two new opportunistic updates: magnetic angular rate update (MARU) and acceleration gradient update (AGU). MAGYQ filter performances are assessed indoors, outdoors, with dynamic and static motion conditions. The heading error, using only the inertial solution, is found to be less than 10 degrees after 1.5 km walking. The performance is also evaluated in the positioning domain with trajectories computed following a PDR strategy.
Robust Sparse Linear Discriminant Analysis Linear discriminant analysis (LDA) is a very popular supervised feature extraction method and has been extended to different variants. However, classical LDA has the following problems: 1) The obtained discriminant projection does not have good interpretability for features. 2) LDA is sensitive to noise. 3) LDA is sensitive to the selection of number of projection directions. In this paper, a novel feature extraction method called robust sparse linear discriminant analysis (RSLDA) is proposed to solve the above problems. Specifically, RSLDA adaptively selects the most discriminative features for discriminant analysis by introducing the l2,1 norm. An orthogonal matrix and a sparse matrix are also simultaneously introduced to guarantee that the extracted features can hold the main energy of the original data and enhance the robustness to noise, and thus RSLDA has the potential to perform better than other discriminant methods. Extensive experiments on six databases demonstrate that the proposed method achieves the competitive performance compared with other state-of-the-art feature extraction methods. Moreover, the proposed method is robust to the noisy data.
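For reference, the l2,1 norm mentioned above is the sum of the l2 norms of the matrix rows; a small illustration of the general definition (not code from the paper):

```python
# l2,1 norm of a matrix: sum over rows of each row's Euclidean norm.
import numpy as np

def l21_norm(A):
    return np.sqrt((np.asarray(A, float) ** 2).sum(axis=1)).sum()

print(l21_norm([[3, 4], [0, 0], [1, 0]]))   # 5 + 0 + 1 = 6
```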
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies focused on the structure design and assistance force optimization of the soft LLEs, rarely work has been conducted on the hardware circuits design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and introduce its hardware circuits design. A soft LLE for hip flexion assistance and a hardware circuits system with scalability were proposed. To assess the efficacy of the soft LLE, the experimental tests that evaluate the sensor data acquisition, force tracking performance, lower limb muscle activity and metabolic cost were conducted. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in exoskeleton on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and the soft LLE is able to improve walking efficiency of wearers.
scores: 1.11, 0.1, 0.1, 0.1, 0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0
A review of the use of virtual reality head-mounted displays in education and training. In the light of substantial improvements to the quality and availability of virtual reality (VR) hardware seen since 2013, this review seeks to update our knowledge about the use of head-mounted displays (HMDs) in education and training. Following a comprehensive search 21 documents reporting on experimental studies were identified, quality assessed, and analysed. The quality assessment shows that the study quality was below average according to the Medical Education Research Study Quality Instrument, especially for the studies that were designed as user evaluations of educational VR products. The review identified a number of situations where HMDs are useful for skills acquisition. These include cognitive skills related to remembering and understanding spatial and visual information and knowledge; psychomotor skills related to head-movement, such as visual scanning or observational skills; and affective skills related to controlling your emotional response to stressful or difficult situations. Outside of these situations the HMDs had no advantage when compared to less immersive technologies or traditional instruction and in some cases even proved counterproductive because of widespread cybersickness, technological challenges, or because the immersive experience distracted from the learning task.
Exploring video games that invoke curiosity •We present a ranking of video game titles and genres based on players' curiosity.•Genres that balance uncertainty and structure are suited to invoke curiosity.•Most video game suggestions by players were given under the category “Exploration”.•“Social Simulation” games ranked higher than other genres in player curiosity.
Nonlinear system identification using a cuckoo search optimized adaptive Hammerstein model. A novel nonlinear system identification scheme is proposed. A Hammerstein model has been trained using cuckoo search algorithm. The model is a cascade of a FLANN and an adaptive IIR filter. Simulation study shows enhanced modeling capacity of the proposed scheme. The new scheme offers lesser computational time over other methods studied. An attempt has been made in this paper to model a nonlinear system using a Hammerstein model. The Hammerstein model considered in this paper is a functional link artificial neural network (FLANN) in cascade with an adaptive infinite impulse response (IIR) filter. In order to avoid local optima issues caused by conventional gradient descent training strategies, the model has been trained using a cuckoo search algorithm (CSA), which is a recently proposed stochastic algorithm. Modeling accuracy of the proposed scheme has been compared with that obtained using other popular evolutionary computing algorithms for the Hammerstein model. Enhanced modeling capability of the CSA based scheme is evident from the simulation results.
A meta-analysis and systematic literature review of virtual reality rehabilitation programs. A recent advancement in the study of physical rehabilitation is the application of virtual reality rehabilitation (VRR) programs, in which patients perform practice behaviors while interacting with the computer-simulation of an environment that imitates a physical presence in real or imagined worlds. Despite enthusiasm, much remains unknown about VRR programs. Particularly, two important research questions have been left unanswered: Are VRR programs effective? And, if so, why are VRR programs effective? A meta-analysis is performed in the current article to determine the efficacy of VRR programs, in general, as well as their ability to develop four specific rehabilitation outcomes: motor control, balance, gait, and strength. A systematic literature review is also performed to determine the mechanisms that may cause VRR program success or failure. The results demonstrate that VRR programs are more effective than traditional rehabilitation programs for physical outcome development. Further, three mechanisms have been proposed to cause these improved outcomes: excitement, physical fidelity, and cognitive fidelity; however, empirical research has yet to show that these mechanisms actually prompt better rehabilitation outcomes. The implications of these results and possible avenues for future research and practice are discussed. Virtual reality rehabilitation (VRR) programs are growing in popularity. VRR programs are more effective than traditional rehabilitation programs. Excitement, physical fidelity, and cognitive fidelity may cause VRR program success. More research is needed to better understand VRR programs.
Training in VR: A Preliminary Study on Learning Assembly/Disassembly Sequences. This paper presents our ongoing work on operator training exploiting an immersive Mixed Reality system. Users, immersed in a Virtual Environment, can be trained in assembling or disassembling complex mechanical machineries. Taking input from current industry-level procedures, the training consists of guided step-by-step operations in order to teach the operators how to assemble, disassemble and maintain a certain machine. In our system the interaction is performed in a natural way: the user can see his own real hands, by means of a 3D camera placed on the HMD, and use them to grab and move the machine pieces in order to perform the training task. We believe that seeing one's own hands during manipulative tasks presents fundamental advantages over mediated techniques. In this paper we describe the system architecture and present our strategy as well as the results of a pilot test aiming at a preliminary evaluation of the system.
Verification of Information Flow and Access Control Policies with Dependent Types We present Relational Hoare Type Theory (RHTT), a novel language and verification system capable of expressing and verifying rich information flow and access control policies via dependent types. We show that a number of security policies which have been formalized separately in the literature can all be expressed in RHTT using only standard type-theoretic constructions such as monads, higher-order functions, abstract types, abstract predicates, and modules. Example security policies include conditional declassification, information erasure, and state-dependent information flow and access control. RHTT can reason about such policies in the presence of dynamic memory allocation, deallocation, pointer aliasing and arithmetic. The system, theorems and examples have all been formalized in Coq.
A lattice model of secure information flow This paper investigates mechanisms that guarantee secure information flow in a computer system. These mechanisms are examined within a mathematical framework suitable for formulating the requirements of secure information flow among security classes. The central component of the model is a lattice structure derived from the security classes and justified by the semantics of information flow. The lattice properties permit concise formulations of the security requirements of different existing systems and facilitate the construction of mechanisms that enforce security. The model provides a unifying view of all systems that restrict information flow, enables a classification of them according to security objectives, and suggests some new approaches. It also leads to the construction of automatic program certification mechanisms for verifying the secure flow of information through a program.
Tabu Search - Part I
Joint Optimization of Radio and Computational Resources for Multicell Mobile-Edge Computing Migrating computational intensive tasks from mobile devices to more resourceful cloud servers is a promising technique to increase the computational capacity of mobile devices while saving their battery energy. In this paper, we consider a MIMO multicell system where multiple mobile users (MUs) ask for computation offloading to a common cloud server. We formulate the offloading problem as the joint optimization of the radio resources (the transmit precoding matrices of the MUs) and the computational resources (the CPU cycles/second assigned by the cloud to each MU), in order to minimize the overall users’ energy consumption, while meeting latency constraints. The resulting optimization problem is nonconvex (in the objective function and constraints). Nevertheless, in the single-user case, we are able to compute the global optimal solution in closed form. In the more challenging multiuser scenario, we propose an iterative algorithm, based on a novel successive convex approximation technique, converging to a local optimal solution of the original nonconvex problem. We then show that the proposed algorithmic framework naturally leads to a distributed and parallel implementation across the radio access points, requiring only a limited coordination/signaling with the cloud. Numerical results show that the proposed schemes outperform disjoint optimization algorithms.
Space-time modeling of traffic flow. This paper discusses the application of space-time autoregressive integrated moving average (STARIMA) methodology for representing traffic flow patterns. Traffic flow data are in the form of spatial time series and are collected at specific locations at constant intervals of time. Important spatial characteristics of the space-time process are incorporated in the STARIMA model through the use of weighting matrices estimated on the basis of the distances among the various locations where data are collected. These matrices distinguish the space-time approach from the vector autoregressive moving average (VARMA) methodology and enable the model builders to control the number of the parameters that have to be estimated. The proposed models can be used for short-term forecasting of space-time stationary traffic-flow processes and for assessing the impact of traffic-flow changes on other parts of the network. The three-stage iterative space-time model building procedure is illustrated using 7.5-min average traffic flow data for a set of 25 loop-detectors located at roads leading to the centre of the city of Athens, Greece. Data for two months with different traffic-flow characteristics are modelled in order to determine the stability of the parameter estimation.
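The space-time autoregressive idea can be illustrated with a first-order STAR model, a stripped-down relative of STARIMA: each detector's next flow is regressed on its own previous value and on a spatially weighted average of its neighbours' previous values. The weight matrix, data, and detector count below are synthetic stand-ins, not the Athens dataset.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic flows for 5 detectors over 200 intervals (stand-in for loop-detector series)
n_sites, T = 5, 200
X = rng.normal(100, 10, (n_sites, T))

# row-normalised spatial weight matrix based on inverse distance (assumed distances)
D = rng.uniform(1, 5, (n_sites, n_sites))
np.fill_diagonal(D, np.inf)
W = (1 / D) / (1 / D).sum(axis=1, keepdims=True)

# STAR(1,1): x_t = phi10 * x_{t-1} + phi11 * W x_{t-1} + eps_t
# stack regressors and fit (phi10, phi11) by ordinary least squares
own_lag = X[:, :-1].ravel()
nbr_lag = (W @ X[:, :-1]).ravel()
target = X[:, 1:].ravel()
A = np.column_stack([own_lag, nbr_lag])
phi, *_ = np.linalg.lstsq(A, target, rcond=None)
print("phi10, phi11 =", phi)

# one-step-ahead forecast for all detectors
x_prev = X[:, -1]
forecast = phi[0] * x_prev + phi[1] * (W @ x_prev)
print("forecast:", np.round(forecast, 1))
```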
A novel full structure optimization algorithm for radial basis probabilistic neural networks. In this paper, a novel full structure optimization algorithm for radial basis probabilistic neural networks (RBPNN) is proposed. Firstly, a minimum volume covering hyperspheres (MVCH) algorithm is proposed to heuristically select the initial hidden layer centers of the RBPNN, and then the recursive orthogonal least square (ROLS) algorithm combined with the particle swarm optimization (PSO) algorithm is adopted to further optimize the initial structure of the RBPNN. Finally, the effectiveness and efficiency of our proposed algorithm are evaluated through a plant species identification task involving 50 plant species.
Online Prediction of Driver Distraction Based on Brain Activity Patterns This paper presents a new computational framework for early detection of driver distractions (map viewing) using brain activity measured by electroencephalographic (EEG) signals. Compared with most studies in the literature, which are mainly focused on the classification of distracted and nondistracted periods, this study proposes a new framework to prospectively predict the start and end of a distraction period, defined by map viewing. The proposed prediction algorithm was tested on a data set of continuous EEG signals recorded from 24 subjects. During the EEG recordings, the subjects were asked to drive from an initial position to a destination using a city map in a simulated driving environment. The overall accuracy values for the prediction of the start and the end of map viewing were 81% and 70%, respectively. The experimental results demonstrated that the proposed algorithm can predict the start and end of map viewing with relatively high accuracy and can be generalized to individual subjects. The outcome of this study has a high potential to improve the design of future intelligent navigation systems. Prediction of the start of map viewing can be used to provide route information based on a driver's needs and consequently avoid map-viewing activities. Prediction of the end of map viewing can be used to provide warnings for potential long map-viewing durations. Further development of the proposed framework and its applications in driver-distraction predictions are also discussed.
Neural network adaptive tracking control for a class of uncertain switched nonlinear systems. Highlights: we study the tracking control of switched uncertain nonlinear systems under an arbitrary switching signal; a multilayer neural network adaptive controller with multilayer weight-norm adaptive estimation is designed; the adaptive law is extended from estimating only the second-layer weights of the neural network to the weights of both layers; the proposed controller greatly improves the tracking-error performance of the closed-loop system.
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies have focused on the structure design and assistance force optimization of soft LLEs, little work has been conducted on the hardware circuits design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and introduce its hardware circuits design. A soft LLE for hip flexion assistance and a hardware circuits system with scalability were proposed. To assess the efficacy of the soft LLE, experimental tests that evaluate the sensor data acquisition, force tracking performance, lower limb muscle activity and metabolic cost were conducted. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
Scores (score_0 to score_13): 1.01449, 0.014286, 0.01, 0.008571, 0.007143, 0.002857, 0.000286, 0, 0, 0, 0, 0, 0, 0
Triplet-Based Deep Hashing Network for Cross-Modal Retrieval. Given the benefits of its low storage requirements and high retrieval efficiency, hashing has recently received increasing attention. In particular, cross-modal hashing has been widely and successfully used in multimedia similarity search applications. However, almost all existing methods employing cross-modal hashing cannot obtain powerful hash codes due to their ignoring the relative similarity ...
Unsupervised Semantic-Preserving Adversarial Hashing for Image Search. Hashing plays a pivotal role in nearest-neighbor searching for large-scale image retrieval. Recently, deep learning-based hashing methods have achieved promising performance. However, most of these deep methods involve discriminative models, which require large-scale, labeled training datasets, thus hindering their real-world applications. In this paper, we propose a novel strategy to exploit the ...
Space-time super-resolution. We propose a method for constructing a video sequence of high space-time resolution by combining information from multiple low-resolution video sequences of the same dynamic scene. Super-resolution is performed simultaneously in time and in space. By "temporal super-resolution," we mean recovering rapid dynamic events that occur faster than regular frame-rate. Such dynamic events are not visible (or else are observed incorrectly) in any of the input sequences, even if these are played in "slow-motion." The spatial and temporal dimensions are very different in nature, yet are interrelated. This leads to interesting visual trade-offs in time and space and to new video applications. These include: 1) treatment of spatial artifacts (e.g., motion-blur) by increasing the temporal resolution and 2) combination of input sequences of different space-time resolutions (e.g., NTSC, PAL, and even high quality still images) to generate a high quality video sequence. We further analyze and compare characteristics of temporal super-resolution to those of spatial super-resolution. These include: How many video cameras are needed to obtain increased resolution? What is the upper bound on resolution improvement via super-resolution? What is the temporal analogue to the spatial "ringing" effect?
Combining Markov Random Fields And Convolutional Neural Networks For Image Synthesis This paper studies a combination of generative Markov random field (MRF) models and discriminatively trained deep convolutional neural networks (dCNNs) for synthesizing 2D images. The generative MRF acts on higher-levels of a dCNN feature pyramid, controlling the image layout at an abstract level. We apply the method to both photographic and non-photo-realistic (artwork) synthesis tasks. The MRF regularizer prevents over-excitation artifacts and reduces implausible feature mixtures common to previous dCNN inversion approaches, permitting the synthesis of photographic content with increased visual plausibility. Unlike standard MRF-based texture synthesis, the combined system can both match and adapt local features with considerable variability, yielding results far out of reach of classic generative MRF methods.
Style Transfer for Anime Sketches with Enhanced Residual U-net and Auxiliary Classifier GAN Recently, with the revolutionary neural style transferring methods, creditable paintings can be synthesized automatically from content images and style images. However, when it comes to the task of applying a painting's style to an anime sketch, these methods will just randomly colorize sketch lines as outputs and fail in the main task: specific style transfer. In this paper, we integrated residual U-net to apply the style to the gray-scale sketch with auxiliary classifier generative adversarial network (AC-GAN). The whole process is automatic and fast. Generated results are creditable in the quality of art style as well as colorization.
DRIT++: Diverse Image-to-Image Translation via Disentangled Representations Image-to-image translation aims to learn the mapping between two visual domains. There are two main challenges for this task: (1) lack of aligned training pairs and (2) multiple possible outputs from a single input image. In this work, we present an approach based on disentangled representation for generating diverse outputs without paired training images. To synthesize diverse outputs, we propose to embed images onto two spaces: a domain-invariant content space capturing shared information across domains and a domain-specific attribute space. Our model takes the encoded content features extracted from a given input and attribute vectors sampled from the attribute space to synthesize diverse outputs at test time. To handle unpaired training data, we introduce a cross-cycle consistency loss based on disentangled representations. Qualitative results show that our model can generate diverse and realistic images on a wide range of tasks without paired training data. For quantitative evaluations, we measure realism with user study and Fréchet inception distance, and measure diversity with the perceptual distance metric, Jensen–Shannon divergence, and number of statistically-different bins.
NTIRE 2019 Challenge on Video Deblurring and Super-Resolution: Dataset and Study This paper introduces a novel large dataset for video deblurring and video super-resolution, and studies the state-of-the-art as it emerged from the NTIRE 2019 video restoration challenges. The video deblurring and video super-resolution challenges are each the first challenge of its kind, with 4 competitions, hundreds of participants and tens of proposed solutions. Our newly collected REalistic and Diverse Scenes dataset (REDS) was employed by the challenges. In our study, we compare the solutions from the challenges to a set of representative methods from the literature and evaluate them on our proposed REDS dataset. We find that the NTIRE 2019 challenges push the state-of-the-art in video deblurring and super-resolution, reaching compelling performance on our newly proposed REDS dataset.
Hierarchical Cross-Modal Talking Face Generation With Dynamic Pixel-Wise Loss We devise a cascade GAN approach to generate talking face video, which is robust to different face shapes, view angles, facial characteristics,and noisy audio conditions. Instead of learning a direct mapping from audio to video frames, we propose first to transfer audio to high-level structure, i.e., the facial landmarks, and then to generate video frames conditioned on the landmarks. Compared to a direct audio-to-image approach, our cascade approach avoids fitting spurious correlations between audiovisual signals that are irrelevant to the speech content. We, humans, are sensitive to temporal discontinuities and subtle artifacts in video. To avoid those pixel jittering problems and to enforce the network to focus on audiovisual-correlated regions, we propose a novel dynamically adjustable pixel-wise loss with an attention mechanism. Furthermore, to generate a sharper image with well-synchronized facial movements, we propose a novel regression-based discriminator structure, which considers sequence-level information along with frame-level information. Thoughtful experiments on several datasets and real-world samples demonstrate significantly better results obtained by our method than the state-of-the-art methods in both quantitative and qualitative comparisons.
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
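A compact, hedged rendering of the BLEU recipe: clipped (modified) n-gram precisions combined by a geometric mean and scaled by a brevity penalty. This is a simplified single-reference version without the smoothing used in practice.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified BLEU: geometric mean of clipped (modified) n-gram precisions,
    multiplied by a brevity penalty; single reference, no smoothing."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        clipped = sum(min(count, ref[g]) for g, count in cand.items())
        precisions.append(clipped / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    brevity = 1.0 if len(candidate) > len(reference) else math.exp(1 - len(reference) / len(candidate))
    return brevity * geo_mean

candidate = "the cat sat on the mat".split()
reference = "the cat sat on the red mat".split()
print(round(bleu(candidate, reference), 4))   # roughly 0.67 for this toy pair
```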
Triangle: Engineering a 2D Quality Mesh Generator and Delaunay Triangulator This paper discusses many of the key implementation decisions, including the choice of triangulation algorithms and data structures, the steps taken to create and refine a mesh, a number of issues that arise in Ruppert's algorithm, and the use of exact arithmetic.
Dynamic priority protocols for packet voice Since the reconstruction of continuous speech from voice packets is complicated by the variable delays of the packets through the network, a dynamic priority protocol is proposed to minimize the variability of packet delays. The protocol allows the priority of a packet to vary with time. After a discussion of the concept of dynamic priorities, two examples of dynamic priorities are studied through queueing analysis and simulations. Optimal properties of the oldest customer first (OCF) and earliest deadline first (EDF) disciplines are proven, suggesting that they may be theoretically effective in reducing the variability of packet delays. Simulation results of the OCF discipline indicate that the OCF discipline is most effective under conditions of long routes and heavy traffic, i.e., the conditions when delay variability is most likely to be significant. Under OCF, the delays of packets along long routes are improved at the expense of packets along short routes. It is noted that more complex and realistic simulations, including simulations of the EDF discipline, are needed
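A toy illustration of why deadline-driven service order helps: the same four packets served first-come-first-served and then earliest-deadline-first. The arrival times and deadlines are made up, and with all packets already queued EDF degenerates to a static sort; a real link would re-evaluate priorities at each service completion.

```python
# (arrival_time, deadline, packet_id) for a small burst of voice packets (values made up)
packets = [(0.0, 5.0, "A"), (0.1, 2.0, "B"), (0.2, 8.0, "C"), (0.3, 2.5, "D")]
SERVICE_TIME = 1.0   # constant per-packet transmission time

def serve(order_key):
    """Serve every packet in the order induced by order_key and report lateness
    (completion time minus deadline; positive means the deadline was missed)."""
    clock, lateness = 0.0, {}
    for arrival, deadline, pid in sorted(packets, key=order_key):
        clock = max(clock, arrival) + SERVICE_TIME
        lateness[pid] = round(clock - deadline, 2)
    return lateness

# FCFS orders by arrival time; EDF orders by deadline.
print("FCFS lateness:", serve(lambda p: p[0]))
print("EDF  lateness:", serve(lambda p: p[1]))
```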
Adaptive Optimal Control of Unknown Constrained-Input Systems Using Policy Iteration and Neural Networks This paper presents an online policy iteration (PI) algorithm to learn the continuous-time optimal control solution for unknown constrained-input systems. The proposed PI algorithm is implemented on an actor-critic structure where two neural networks (NNs) are tuned online and simultaneously to generate the optimal bounded control policy. The requirement of complete knowledge of the system dynamics is obviated by employing a novel NN identifier in conjunction with the actor and critic NNs. It is shown how the identifier weights estimation error affects the convergence of the critic NN. A novel learning rule is developed to guarantee that the identifier weights converge to small neighborhoods of their ideal values exponentially fast. To provide an easy-to-check persistence of excitation condition, the experience replay technique is used. That is, recorded past experiences are used simultaneously with current data for the adaptation of the identifier weights. Stability of the whole system consisting of the actor, critic, system state, and system identifier is guaranteed while all three networks undergo adaptation. Convergence to a near-optimal control law is also shown. The effectiveness of the proposed method is illustrated with a simulation example.
Real-Time Video Analytics: The Killer App for Edge Computing. Video analytics will drive a wide range of applications with great potential to impact society. A geographically distributed architecture of public clouds and edges that extend down to the cameras is the only feasible approach to meeting the strict real-time requirements of large-scale live video analytics.
Intention-detection strategies for upper limb exosuits: model-based myoelectric vs dynamic-based control The cognitive human-robot interaction between an exosuit and its wearer plays a key role in determining both the biomechanical effects of the device on movements and its perceived effectiveness. There is a lack of evidence, however, on the comparative performance of different control methods, implemented on the same device. Here, we compare two different control approaches on the same robotic suit: a model-based myoelectric control (myoprocessor), which estimates the joint torque from the activation of target muscles, and a dynamic-based control that provides support against gravity using an inverse dynamic model. Tested on a cohort of four healthy participants, assistance from the exosuit results in a marked reduction in the effort of muscles working against gravity with both control approaches (peak reduction of 68.6±18.8% for the dynamic arm model and 62.4±25.1% for the myoprocessor), when compared to an unpowered condition. Neither of the two controllers had an effect on the performance of their users in a joint-angle tracking task (peak errors of 15.4° and 16.4° for the dynamic arm model and myoprocessor, respectively, compared to 13.1° in the unpowered condition). However, our results highlight the remarkable ability of the myoprocessor to seamlessly adapt to changing external dynamics.
Scores (score_0 to score_13): 1.078, 0.076667, 0.066667, 0.066667, 0.066667, 0.066667, 0.066667, 0.043333, 0.000323, 0, 0, 0, 0, 0
Pricing and Routing Mechanisms for Differentiated Services in an Electric Vehicle Public Charging Station Network We consider a Charging Network Operator (CNO) that owns a network of Electric Vehicle (EV) public charging stations and wishes to offer a menu of differentiated service options for access to its stations. This involves designing optimal pricing and routing schemes for the setting where users cannot directly choose which station they use. Instead, they choose their priority level and energy request amount from the differentiated service menu, and then the CNO directly assigns them to a station on their path. This allows higher priority users to experience lower wait times at stations, and allows the CNO to directly manage demand, exerting a higher level of control that can be used to manage the effect of EV on the grid and control station wait times. We consider the scenarios where the CNO is a social welfare-maximizing or a profit-maximizing entity, and in both cases, design pricing-routing policies that ensure users reveal their true parameters to the CNO.
Optimal Pricing to Manage Electric Vehicles in Coupled Power and Transportation Networks. We study the system-level effects of the introduction of large populations of Electric Vehicles (EVs) on the power and transportation networks. We assume that each EV owner solves a decision problem to pick a cost-minimizing charge and travel plan. This individual decision takes into account traffic congestion in the transportation network, affecting travel times, as well as congestion in the powe...
Computational difficulties of bilevel linear programming We show, using small examples, that two algorithms previously published for the Bilevel Linear Programming problem BLP may fail to find the optimal solution and thus must be considered to be heuris...
Electric Vehicle Charging Stations With Renewable Power Generators: A Game Theoretical Analysis In this paper, we study the price competition among electric vehicle charging stations (EVCSs) with renewable power generators (RPGs). As electric vehicles (EVs) become more popular, a competition among EVCSs to attract EVs is inevitable. Thereby, each EVCS sets its electricity price to maximize its revenue by taking into account the competition with neighboring EVCSs. We analyze the competitive interactions between EVCSs using game theory, where relevant physical constraints such as the transmission line capacity, the distance between EV and EVCS, and the number of charging outlets at the EVCSs are taken into account. We show that the game played by EVCSs is a supermodular game and there exists a unique pure Nash equilibrium for best response algorithms with arbitrary initial policy. The electricity price and the revenue of EVCSs are evaluated via simulations, which reveal the benefits of having RPGs at the EVCSs.
Structure Learning in Power Distribution Networks. Traditional power distribution networks suffer from a lack of real-time observability. This complicates development and implementation of new smart-grid technologies, such as those related to demand response, outage detection and management, and improved load monitoring. In this paper, inspired by proliferation of metering technology, we discuss topology estimation problems in structurally loopy b...
Coordinated Planning of Extreme Fast Charging Stations and Power Distribution Networks Considering On-Site Storage The extreme fast charging (XFC) technology helps to reduce refueling time, alleviate mile anxiety, extend driving range and finally promote the popularity of electric vehicles (EVs). However, it would also pose great challenges on the power grid infrastructure especially distribution networks, due to the large-scale and intermittent power demand. This paper proposes a coordinated planning method for power distribution networks and XFC EV charging stations, with the on-site batteries considered. Firstly, considering the traffic flow pattern, the operation of XFC stations is analyzed on both energy and power demand. Secondly, the coordinated planning model is developed to satisfy the time-varying XFC load, with both transportation and electricity constraints considered. In addition, the on-site batteries are introduced to flatten the XFC energy used and supplement its power supply. The case studies have verified the effectiveness of the proposed method. The influence of XFC on the distribution networks and the effects of the on-site storage are also studied.
Competitive on-line scheduling with level of service Motivated by an application in thinwire visualization, we study an abstract on-line scheduling problem where the size of each requested service can be scaled down by the scheduler. Thus, our problem embodies a notion of "Level of Service" that is increasingly important in multimedia applications. We give two schedulers FirstFit and EndFit based on two simple heuristics, and generalize them into a class of greedy schedulers. We show that both FirstFit and EndFit are 2-competitive, and any greedy scheduler is 3-competitive. These bounds are shown to be tight.
Image quality assessment: from error visibility to structural similarity. Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
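A sketch of the SSIM formula evaluated globally over a whole image, using the standard constants; the actual index is computed over local windows (typically Gaussian-weighted) and averaged, which this simplification omits.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Global SSIM over whole images (the full index averages a local,
    windowed version of the same formula)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2) /
            ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))
noisy = np.clip(img + rng.normal(0, 20, img.shape), 0, 255)
print("SSIM(img, img)  =", round(ssim_global(img, img), 4))    # 1.0 by construction
print("SSIM(img, noisy)=", round(ssim_global(img, noisy), 4))  # below 1.0
```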
Mobile Edge Computing Enabled 5G Health Monitoring for Internet of Medical Things: A Decentralized Game Theoretic Approach The prompt evolution of Internet of Medical Things (IoMT) promotes pervasive in-home health monitoring networks. However, excessive requirements of patients result in insufficient spectrum resources and communication overload. Mobile Edge Computing (MEC) enabled 5G health monitoring is conceived as a favorable paradigm to tackle such an obstacle. In this paper, we construct a cost-efficient in-home health monitoring system for IoMT by dividing it into two sub-networks, i.e., intra-Wireless Body Area Networks (WBANs) and beyond-WBANs. Highlighting the characteristics of IoMT, the cost of patients depends on medical criticality, Age of Information (AoI) and energy consumption. For intra-WBANs, a cooperative game is formulated to allocate the wireless channel resources. While for beyond-WBANs, considering the individual rationality and potential selfishness, a decentralized non-cooperative game is proposed to minimize the system-wide cost in IoMT. We prove that the proposed algorithm can reach a Nash equilibrium. In addition, the upper bound of the algorithm time complexity and the number of patients benefiting from MEC is theoretically derived. Performance evaluations demonstrate the effectiveness of our proposed algorithm with respect to the system-wide cost and the number of patients benefiting from MEC.
GSA: A Gravitational Search Algorithm In recent years, various heuristic optimization methods have been developed. Many of these methods are inspired by swarm behaviors in nature. In this paper, a new optimization algorithm based on the law of gravity and mass interactions is introduced. In the proposed algorithm, the searcher agents are a collection of masses which interact with each other based on the Newtonian gravity and the laws of motion. The proposed method has been compared with some well-known heuristic search methods. The obtained results confirm the high performance of the proposed method in solving various nonlinear functions.
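A minimal sketch of the GSA update loop on a toy sphere function: masses are derived from normalised fitness, the gravitational constant decays over iterations, and each agent accelerates toward the others in proportion to their masses. It omits refinements such as the shrinking Kbest set, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return np.sum(x ** 2, axis=1)   # toy objective to minimise

def gsa(n_agents=20, dim=5, iters=200, g0=100.0, alpha=20.0, lo=-5.0, hi=5.0):
    pos = rng.uniform(lo, hi, (n_agents, dim))
    vel = np.zeros((n_agents, dim))
    best_val, best_pos = np.inf, None
    for t in range(iters):
        fit = sphere(pos)
        if fit.min() < best_val:
            best_val, best_pos = float(fit.min()), pos[fit.argmin()].copy()
        # masses from normalised fitness (better agents are heavier)
        worst, best = fit.max(), fit.min()
        m = (worst - fit) / (worst - best + 1e-12)
        M = m / (m.sum() + 1e-12)
        G = g0 * np.exp(-alpha * t / iters)       # decaying gravitational constant
        # total randomly weighted force on each agent from every other agent
        acc = np.zeros_like(pos)
        for i in range(n_agents):
            diff = pos - pos[i]
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            # acceleration a_i = F_i / M_i; since F_ij is proportional to M_i * M_j,
            # only the other agents' masses M_j remain after dividing by M_i
            contrib = G * (M[:, None] * diff / dist[:, None]) * rng.random((n_agents, 1))
            acc[i] = contrib.sum(axis=0)
        vel = rng.random(pos.shape) * vel + acc
        pos = np.clip(pos + vel, lo, hi)
    return best_pos, best_val

pos, val = gsa()
print("best value:", val)
```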
Adaptive Learning in Tracking Control Based on the Dual Critic Network Design. In this paper, we present a new adaptive dynamic programming approach by integrating a reference network that provides an internal goal representation to help the systems learning and optimization. Specifically, we build the reference network on top of the critic network to form a dual critic network design that contains the detailed internal goal representation to help approximate the value funct...
A novel data hiding for color images based on pixel value difference and modulus function This paper proposes a novel data hiding method using pixel-value difference and modulus function for color image with the large embedding capacity(hiding 810757 bits in a 512 512 host image at least) and a high-visual-quality of the cover image. The proposed method has fully taken into account the correlation of the R, G and B plane of a color image. The amount of information embedded the R plane and the B plane determined by the difference of the corresponding pixel value between the G plane and the median of G pixel value in each pixel block. Furthermore, two sophisticated pixel value adjustment processes are provided to maintain the division consistency and to solve underflow and overflow problems. The most importance is that the secret data are completely extracted through the mathematical theoretical proof.
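A simplified, single-channel illustration of modulus-function embedding, assuming a fixed k bits per pixel: each used pixel's remainder modulo 2^k is overwritten with secret bits. This is not the paper's scheme, which adapts the per-pixel capacity from pixel-value differences across the R, G and B planes and includes overflow-adjustment steps.

```python
import numpy as np

def embed(cover, bits, k=2):
    """Replace each used pixel's value mod 2**k with k secret bits (a simplified
    modulus-function embedding; the paper additionally adapts per-pixel capacity
    from inter-channel pixel-value differences)."""
    base = 2 ** k
    stego = cover.astype(np.int32).ravel().copy()
    n_pixels = min(len(bits) // k, stego.size)
    for i in range(n_pixels):
        chunk = int("".join(str(b) for b in bits[i * k:(i + 1) * k]), 2)
        stego[i] = stego[i] - (stego[i] % base) + chunk
    return np.clip(stego, 0, 255).reshape(cover.shape).astype(np.uint8)

def extract(stego, n_bits, k=2):
    base = 2 ** k
    out = []
    for v in stego.ravel()[: n_bits // k]:
        out.extend(int(b) for b in format(int(v) % base, f"0{k}b"))
    return out

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (8, 8), dtype=np.uint8)
secret = [int(b) for b in rng.integers(0, 2, 32)]
stego = embed(cover, secret, k=2)
assert extract(stego, 32, k=2) == secret
print("max pixel change:", int(np.abs(stego.astype(int) - cover.astype(int)).max()))
```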
Adversarial Example Generation with Syntactically Controlled Paraphrase Networks. We propose syntactically controlled paraphrase networks (SCPNs) and use them to generate adversarial examples. Given a sentence and a target syntactic form (e.g., a constituency parse), SCPNs are trained to produce a paraphrase of the sentence with the desired syntax. We show it is possible to create training data for this task by first doing backtranslation at a very large scale, and then using a parser to label the syntactic transformations that naturally occur during this process. Such data allows us to train a neural encoder-decoder model with extra inputs to specify the target syntax. A combination of automated and human evaluations show that SCPNs generate paraphrases that follow their target specifications without decreasing paraphrase quality when compared to baseline (uncontrolled) paraphrase systems. Furthermore, they are more capable of generating syntactically adversarial examples that both (1) fool pretrained models and (2) improve the robustness of these models to syntactic variation when used to augment their training data.
Myoelectric or Force Control? A Comparative Study on a Soft Arm Exosuit The intention-detection strategy used to drive an exosuit is fundamental to evaluate the effectiveness and acceptability of the device. Yet, current literature on wearable soft robotics lacks evidence on the comparative performance of different control approaches for online intention-detection. In the present work, we compare two different and complementary controllers on a wearable robotic suit, previously formulated and tested by our group; a model-based myoelectric control (myoprocessor), which estimates the joint torque from the activation of target muscles, and a force control that estimates human torques using an inverse dynamics model (dynamic arm). We test them on a cohort of healthy participants performing tasks replicating functional activities of daily living involving a wide range of dynamic movements. Our results suggest that both controllers are robust and effective in detecting human–motor interaction, and show comparable performance for augmenting muscular activity. In particular, the biceps brachii activity was reduced by up to 74% under the assistance of the dynamic arm and up to 47% under the myoprocessor, compared to a no-suit condition. However, the myoprocessor outperformed the dynamic arm in promptness and assistance during movements that involve high dynamics. The exosuit work normalized with respect to the overall work was $68.84 \pm 3.81\%$ when it was run by the myoprocessor, compared to $45.29 \pm 7.71\%$ during the dynamic arm condition. The reliability and accuracy of motor intention detection strategies in wearable devices is paramount for both the efficacy and acceptability of this technology. In this article, we offer a detailed analysis of the two most widely used control approaches, trying to highlight their intrinsic structural differences and to discuss their different and complementary performance.
Scores (score_0 to score_13): 1.11, 0.11, 0.1, 0.1, 0.1, 0.1, 0.02, 0, 0, 0, 0, 0, 0, 0
Adaptive Consensus-Based Distributed Target Tracking With Dynamic Cluster in Sensor Networks. This paper is concerned with the target tracking problem over a filtering network with dynamic cluster and data fusion. A novel distributed consensus-based adaptive Kalman estimation is developed to track a linear moving target. Both optimal filtering gain and average disagreement of the estimates are considered in the filter design. In order to estimate the states of the target more precisely, an optimal Kalman gain is obtained by minimizing the mean-squared estimation error. An adaptive consensus factor is employed to adjust the optimal gain as well as to acquire a better filtering performance. In the filter's information exchange, dynamic cluster selection and two-stage hierarchical fusion structure are employed to get more accurate estimation. At the first stage, every sensor collects information from its neighbors and runs the Kalman estimation algorithm to obtain a local estimate of system states. At the second stage, each local sensor sends its estimate to the cluster head to get a fused estimation. Finally, an illustrative example is presented to validate the effectiveness of the proposed scheme.
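A minimal consensus-on-estimates Kalman filter for a scalar-position, constant-velocity target observed by four sensors on a ring topology: each node runs a local Kalman update and then nudges its estimate toward its neighbours' estimates with a fixed consensus gain (the paper instead adapts this gain and optimises the filter gain jointly). All models and noise levels below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# constant-velocity target: state x = [position, velocity]
dt, steps, n_sensors = 1.0, 50, 4
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = 4.0
eps = 0.3                                     # fixed consensus gain (the paper adapts this)
neighbors = {i: [(i - 1) % n_sensors, (i + 1) % n_sensors] for i in range(n_sensors)}

x_true = np.array([0.0, 1.0])
x_hat = [np.zeros(2) for _ in range(n_sensors)]
P = [np.eye(2) for _ in range(n_sensors)]

for _ in range(steps):
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    # local Kalman predict/update at every sensor
    new_hat = []
    for i in range(n_sensors):
        z = H @ x_true + rng.normal(0, np.sqrt(R))        # noisy position measurement
        x_pred = F @ x_hat[i]
        P_pred = F @ P[i] @ F.T + Q
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T / S                               # Kalman gain (scalar innovation)
        new_hat.append(x_pred + (K * (z - H @ x_pred)).ravel())
        P[i] = (np.eye(2) - K @ H) @ P_pred
    # consensus step: pull each estimate toward its neighbourhood average
    x_hat = [new_hat[i] + eps * np.mean([new_hat[j] - new_hat[i] for j in neighbors[i]], axis=0)
             for i in range(n_sensors)]

print("true position:", round(x_true[0], 2))
print("estimates    :", [round(x[0], 2) for x in x_hat])
```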
Survey of NLOS identification and error mitigation problems in UWB-based positioning algorithms for dense environments In this survey, the currently available ultra-wideband-based non-line-of-sight (NLOS) identification and error mitigation methods are presented. They are classified into several categories and their comparison is presented in two tables: one each for NLOS identification and error mitigation. NLOS identification methods are classified based on range estimates, channel statistics, and the actual maps of the building and environment. NLOS error mitigation methods are categorized based on direct path and statistics-based detection.
Reliable Classification of Vehicle Types Based on Cascade Classifier Ensembles Vehicle-type recognition based on images is a challenging task. This paper comparatively studied two feature extraction methods for image description, i.e., the Gabor wavelet transform and the Pyramid Histogram of Oriented Gradients (PHOG). The Gabor transform has been widely adopted to extract image features for various vision tasks. PHOG has the superiority in its description of more discriminating information. A highly reliable classification scheme was proposed by cascade classifier ensembles with reject option to accommodate the situations where no decision should be made if there exists adequate ambiguity. The first ensemble is heterogeneous, consisting of several classifiers, including $k$-nearest neighbors (kNNs), multiple-layer perceptrons (MLPs), support vector machines (SVMs), and random forest. The classification reliability is further enhanced by a second classifier ensemble, which is composed of a set of base MLPs coordinated by an ensemble metalearning method called rotation forest (RF). For both of the ensembles, rejection option is accomplished by relating the consensus degree from majority voting to a confidence measure and by abstaining to classify ambiguous samples if the consensus degree is lower than a threshold. The final class label is assigned by dual majority voting from the two ensembles. Experimental results using more than 600 images from a variety of 21 makes of cars and vans demonstrated the effectiveness of the proposed approach. The cascade ensembles produce consistently reliable results. With a moderate ensemble size of 25 in the second ensemble, the two-stage classification scheme offers 98.65% accuracy with a rejection rate of 2.5%, exhibiting promising potential for real-world applications.
A fusion strategy for reliable vehicle positioning utilizing RFID and in-vehicle sensors. Highlights: RFID is introduced as a virtual sensor for vehicle positioning; an LSSVM algorithm is proposed to obtain the distance between RFID tags and reader; in-vehicle sensors are fused with RFID to achieve vehicle positioning; an LSSVM-MM (multiple models) filter is proposed to realize the global fusion. In recent years, RFID has become a viable solution to provide an object's location information. However, the RFID-based positioning algorithms in the literature have disadvantages such as low accuracy, low output frequency and the lack of speed or attitude information. To overcome these problems, this paper proposes a RFID/in-vehicle sensors fusion strategy for vehicle positioning in completely GPS-denied environments such as tunnels. The low-cost in-vehicle sensors including electronic compass and wheel speed sensors are introduced to be fused with RFID. The strategy adopts a two-step approach, i.e., the calculation of the distances between the RFID tags and the reader, and then the global fusion estimation of vehicle position. First, a Least Square Support Vector Machine (LSSVM) algorithm is developed to obtain the distances. Further, a novel LSSVM Multiple Model (LMM) algorithm is designed to fuse the data obtained from RFID and in-vehicle sensors. Contrarily to other multiple model algorithms, the LMM is more suitable for current driving conditions because the model probabilities can be calculated according to the operating state of the vehicle by using the LSSVM decision model. Finally, the proposed strategy is evaluated through experiments. The results validate the feasibility and effectiveness of the proposed strategy.
Novel EKF-Based Vision/Inertial System Integration for Improved Navigation. With advances in computing power, stereo vision has become an essential part of navigation applications. However, there may be instances wherein insufficient image data precludes the estimation of navigation parameters. Earlier, a novel vision-based velocity estimation method was developed by the authors, which suffered from the aforementioned drawback. In this paper, the vision-based navigation m...
Relative Position Estimation Between Two UWB Devices With IMUs For a team of robots to work collaboratively, it is crucial that each robot have the ability to determine the position of their neighbors, relative to themselves, in order to execute tasks autonomously. This letter presents an algorithm for determining the three-dimensional relative position between two mobile robots, each using nothing more than a single ultra-wideband transceiver, an acceleromet...
Footprints: history-rich tools for information foraging Inspired by Hill and Hollan's original work [7], we have been developing a theory of interaction history and building tools to apply this theory to navigation in a complex information space. We have built a series of tools - map, paths, annotations and signposts - based on a physical-world navigation metaphor. These tools have been in use for over a year. Our user study involved a controlled browse task and showed that users were able to get the same amount of work done with significantly less effort.
Very Deep Convolutional Networks for Large-Scale Image Recognition. In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
Chimp optimization algorithm. Highlights: a novel optimizer called Chimp Optimization Algorithm (ChOA) is proposed; ChOA is inspired by the individual intelligence and sexual motivation of chimps; ChOA alleviates the problems of slow convergence rate and trapping in local optima; the four main steps of chimp hunting are implemented.
Space-time modeling of traffic flow. This paper discusses the application of space-time autoregressive integrated moving average (STARIMA) methodology for representing traffic flow patterns. Traffic flow data are in the form of spatial time series and are collected at specific locations at constant intervals of time. Important spatial characteristics of the space-time process are incorporated in the STARIMA model through the use of weighting matrices estimated on the basis of the distances among the various locations where data are collected. These matrices distinguish the space-time approach from the vector autoregressive moving average (VARMA) methodology and enable the model builders to control the number of the parameters that have to be estimated. The proposed models can be used for short-term forecasting of space-time stationary traffic-flow processes and for assessing the impact of traffic-flow changes on other parts of the network. The three-stage iterative space-time model building procedure is illustrated using 7.5-min average traffic flow data for a set of 25 loop-detectors located at roads leading to the centre of the city of Athens, Greece. Data for two months with different traffic-flow characteristics are modelled in order to determine the stability of the parameter estimation.
A novel full structure optimization algorithm for radial basis probabilistic neural networks. In this paper, a novel full structure optimization algorithm for radial basis probabilistic neural networks (RBPNN) is proposed. Firstly, a minimum volume covering hyperspheres (MVCH) algorithm is proposed to heuristically select the initial hidden layer centers of the RBPNN, and then the recursive orthogonal least square (ROLS) algorithm combined with the particle swarm optimization (PSO) algorithm is adopted to further optimize the initial structure of the RBPNN. Finally, the effectiveness and efficiency of our proposed algorithm are evaluated through a plant species identification task involving 50 plant species.
Understanding Taxi Service Strategies From Taxi GPS Traces Taxi service strategies, as the crowd intelligence of massive taxi drivers, are hidden in their historical time-stamped GPS traces. Mining GPS traces to understand the service strategies of skilled taxi drivers can benefit the drivers themselves, passengers, and city planners in a number of ways. This paper intends to uncover the efficient and inefficient taxi service strategies based on a large-scale GPS historical database of approximately 7600 taxis over one year in a city in China. First, we separate the GPS traces of individual taxi drivers and link them with the revenue generated. Second, we investigate the taxi service strategies from three perspectives, namely, passenger-searching strategies, passenger-delivery strategies, and service-region preference. Finally, we represent the taxi service strategies with a feature matrix and evaluate the correlation between service strategies and revenue, informing which strategies are efficient or inefficient. We predict the revenue of taxi drivers based on their strategies and achieve a prediction residual as low as 2.35 RMB/h, which demonstrates that the extracted taxi service strategies with our proposed approach well characterize the driving behavior and performance of taxi drivers.
Finite-Time Adaptive Fuzzy Tracking Control Design for Nonlinear Systems. This paper addresses the finite-time tracking problem of nonlinear pure-feedback systems. Unlike the literature on traditional finite-time stabilization, in this paper the nonlinear system functions, including the bounding functions, are all totally unknown. Fuzzy logic systems are used to model those unknown functions. To present a finite-time control strategy, a criterion of semiglobal practical...
Myoelectric or Force Control? A Comparative Study on a Soft Arm Exosuit The intention-detection strategy used to drive an exosuit is fundamental to evaluate the effectiveness and acceptability of the device. Yet, current literature on wearable soft robotics lacks evidence on the comparative performance of different control approaches for online intention-detection. In the present work, we compare two different and complementary controllers on a wearable robotic suit, previously formulated and tested by our group; a model-based myoelectric control (myoprocessor), which estimates the joint torque from the activation of target muscles, and a force control that estimates human torques using an inverse dynamics model (dynamic arm). We test them on a cohort of healthy participants performing tasks replicating functional activities of daily living involving a wide range of dynamic movements. Our results suggest that both controllers are robust and effective in detecting human–motor interaction, and show comparable performance for augmenting muscular activity. In particular, the biceps brachii activity was reduced by up to 74% under the assistance of the dynamic arm and up to 47% under the myoprocessor, compared to a no-suit condition. However, the myoprocessor outperformed the dynamic arm in promptness and assistance during movements that involve high dynamics. The exosuit work normalized with respect to the overall work was $68.84 \pm 3.81\%$ when it was run by the myoprocessor, compared to $45.29 \pm 7.71\%$ during the dynamic arm condition. The reliability and accuracy of motor intention detection strategies in wearable devices is paramount for both the efficacy and acceptability of this technology. In this article, we offer a detailed analysis of the two most widely used control approaches, trying to highlight their intrinsic structural differences and to discuss their different and complementary performance.
Scores (score_0 to score_13): 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0, 0, 0, 0, 0, 0, 0, 0
Temporal Multi-Graph Convolutional Network for Traffic Flow Prediction Traffic flow prediction plays an important role in ITS (Intelligent Transportation System). This task is challenging due to the complex spatial and temporal correlations (e.g., the constraints of road network and the law of dynamic change with time). Existing work tried to solve this problem by exploiting a variety of spatiotemporal models. However, we observe that more semantic pair-wise correlat...
Introduction to the special section on intelligent systems for socially aware computing
Traffic-incident detection-algorithm based on nonparametric regression This paper proposes an improved nonparametric regression (INPR) algorithm for forecasting traffic flows and its application in automatic detection of traffic incidents. The INPR algorithm is constructed based on the searching method of nearest neighbors for a traffic-state vector, and its main advantage lies in forecasting through possible trends of traffic flows, instead of just current traffic states, as commonly used in previous forecasting algorithms. Various simulation results have indicated the viability and effectiveness of the proposed new algorithm. Several performance tests have been conducted using actual traffic data sets and results demonstrate that INPR's average absolute forecast errors, average relative forecast errors, and average computing times are the smallest compared with other forecasting algorithms.
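A generic nearest-neighbour (nonparametric regression) forecaster in the same spirit, though not the INPR algorithm itself: the most recent window of flows is matched against historical windows and the values that followed the closest matches are averaged. Data and window sizes are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic flow series with a daily-like cycle plus noise (stand-in data)
t = np.arange(2000)
flow = 500 + 200 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 20, t.size)

def knn_forecast(series, window=6, k=10):
    """Forecast the next value by averaging the values that followed the k
    historical windows most similar to the most recent window."""
    current = series[-window:]
    # all historical (window, following value) pairs, excluding the current window
    states = np.lib.stride_tricks.sliding_window_view(series, window)[:-1]
    nexts = series[window:]
    dists = np.linalg.norm(states - current, axis=1)
    nearest = np.argsort(dists)[:k]
    return nexts[nearest].mean()

# hold out the last observation and forecast it from the rest
prediction = knn_forecast(flow[:-1])
print("forecast:", round(prediction, 1), "  actual:", round(flow[-1], 1))
```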
Mining Road Network Correlation for Traffic Estimation via Compressive Sensing. This paper presents a transport traffic estimation method which leverages road network correlation and sparse traffic sampling via the compressive sensing technique. Through the investigation on a traffic data set of more than 4400 taxis from Shanghai city, China, we observe nontrivial traffic correlations among the traffic conditions of different road segments and derive a mathematical model to c...
An Improved Bayesian Combination Model for Short-Term Traffic Prediction With Deep Learning Short-term traffic volume prediction, which can assist road users in choosing appropriate routes and reducing travel time cost, is a significant topic of intelligent transportation system. To overcome the error magnification phenomena of traditional combination methods and to improve prediction performance, this paper proposes an improved Bayesian combination model with deep learning (IBCM-DL) for traffic flow prediction. First, an IBCM framework is established based on the new BCM framework proposed by Wang. Then, correlation analysis is used to analyze the relevance between the historical traffic flow and the traffic flow within the current interval. Three sub-predictors including the gated recurrent unit neural network (GRUNN), autoregressive integrated moving average (ARIMA), and radial basis function neural network (RBFNN) are incorporated into the IBCM framework to take advantage of each method. The real-world traffic volume data captured by microwave sensors located on the expressways of Beijing was used to validate the proposed model in multiple scenarios. The overall results illustrate that the IBCM-DL model outperforms the other state-of-the-art methods in terms of accuracy and stability.
Deep Learning Architecture for Short-Term Passenger Flow Forecasting in Urban Rail Transit Short-term passenger flow forecasting is an essential component in urban rail transit operation. Emerging deep learning models provide good insight into improving prediction precision. Therefore, we propose a deep learning architecture combining the residual network (ResNet), graph convolutional network (GCN), and long short-term memory (LSTM) (called “ResLSTM”) to forecast short-term passenger fl...
Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting. Timely accurate traffic forecast is crucial for urban traffic control and guidance. Due to the high nonlinearity and complexity of traffic flow, traditional methods cannot satisfy the requirements of mid-and-long term prediction tasks and often neglect spatial and temporal dependencies. In this paper, we propose a novel deep learning framework, Spatio-Temporal Graph Convolutional Networks (STGCN), to tackle the time series prediction problem in traffic domain. Instead of applying regular convolutional and recurrent units, we formulate the problem on graphs and build the model with complete convolutional structures, which enable much faster training speed with fewer parameters. Experiments show that our model STGCN effectively captures comprehensive spatio-temporal correlations through modeling multi-scale traffic networks and consistently outperforms state-of-the-art baselines on various real-world traffic datasets.
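The spatial half of such graph-based traffic models can be summarised by a single graph-convolution step, X' = ReLU(D^{-1/2}(A+I)D^{-1/2} X W), shown below on a toy four-sensor road graph; STGCN itself interleaves these spatial operations with gated temporal convolutions, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A, X, W):
    """One graph-convolution step: X' = ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)                          # degrees of the self-looped graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W)

# toy road graph: 4 sensors, edges where road segments connect
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))     # 3 input features per sensor (e.g. recent flows)
W = rng.normal(size=(3, 8))     # learnable weights, random here
H = gcn_layer(A, X, W)
print(H.shape)                  # (4, 8): 8 hidden features per sensor
```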
Hiding Traces of Resampling in Digital Images Resampling detection has become a standard tool for forensic analyses of digital images. This paper presents new variants of image transformation operations which are undetectable by resampling detectors based on periodic variations in the residual signal of local linear predictors in the spatial domain. The effectiveness of the proposed method is supported with evidence from experiments on a large image database for various parameter settings. We benchmark detectability as well as the resulting image quality against conventional linear and bicubic interpolation and interpolation with a sinc kernel. These early findings on "counter-forensic" techniques put into question the reliability of known forensic tools against smart counterfeiters in general, and might serve as benchmarks and motivation for the development of much improved forensic techniques.
Federated Learning Over Wireless Networks: Convergence Analysis and Resource Allocation There is an increasing interest in a fast-growing machine learning technique called Federated Learning (FL), in which the model training is distributed over mobile user equipment (UEs), exploiting UEs' local computation and training data. Despite its advantages such as preserving data privacy, FL still has challenges of heterogeneity across UEs' data and physical resources. To address these challenges, we first propose FEDL, a FL algorithm which can handle heterogeneous UE data without further assumptions except strongly convex and smooth loss functions. We provide a convergence rate characterizing the trade-off between local computation rounds of each UE to update its local model and global communication rounds to update the FL global model. We then employ FEDL in wireless networks as a resource allocation optimization problem that captures the trade-off between FEDL convergence wall clock time and energy consumption of UEs with heterogeneous computing and power resources. Even though the wireless resource allocation problem of FEDL is non-convex, we exploit this problem's structure to decompose it into three sub-problems and analyze their closed-form solutions as well as insights into problem design. Finally, we empirically evaluate the convergence of FEDL with PyTorch experiments, and provide extensive numerical results for the wireless resource allocation sub-problems. Experimental results show that FEDL outperforms the vanilla FedAvg algorithm in terms of convergence rate and test accuracy in various settings.
Efficient algorithms for Web services selection with end-to-end QoS constraints Service-Oriented Architecture (SOA) provides a flexible framework for service composition. Using standard-based protocols (such as SOAP and WSDL), composite services can be constructed by integrating atomic services developed independently. Algorithms are needed to select service components with various QoS levels according to some application-dependent performance requirements. We design a broker-based architecture to facilitate the selection of QoS-based services. The objective of service selection is to maximize an application-specific utility function under the end-to-end QoS constraints. The problem is modeled in two ways: the combinatorial model and the graph model. The combinatorial model defines the problem as a multidimension multichoice 0-1 knapsack problem (MMKP). The graph model defines the problem as a multiconstraint optimal path (MCOP) problem. Efficient heuristic algorithms for service processes of different composition structures are presented in this article and their performances are studied by simulations. We also compare the pros and cons between the two models.
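A sketch of the combinatorial (MMKP) view described above, in notation of my own choosing: x_{ij} selects candidate service j for abstract task i, u_{ij} is its utility, q^k_{ij} its k-th QoS attribute, and Q^k the end-to-end bound on attribute k.

```latex
\max \; \sum_{i}\sum_{j \in S_i} u_{ij}\, x_{ij}
\quad \text{s.t.} \quad
\sum_{i}\sum_{j \in S_i} q^{k}_{ij}\, x_{ij} \le Q^{k} \;\; \forall k, \qquad
\sum_{j \in S_i} x_{ij} = 1 \;\; \forall i, \qquad
x_{ij} \in \{0,1\}
```

Multiplicative attributes such as availability can be folded into this additive form by taking logarithms.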
Development of a UAV-LiDAR System with Application to Forest Inventory We present the development of a low-cost Unmanned Aerial Vehicle-Light Detecting and Ranging (UAV-LiDAR) system and an accompanying workflow to produce 3D point clouds. UAV systems provide an unrivalled combination of high temporal and spatial resolution datasets. The TerraLuma UAV-LiDAR system has been developed to take advantage of these properties and in doing so overcome some of the current limitations of the use of this technology within the forestry industry. A modified processing workflow including a novel trajectory determination algorithm fusing observations from a GPS receiver, an Inertial Measurement Unit (IMU) and a High Definition (HD) video camera is presented. The advantages of this workflow are demonstrated using a rigorous assessment of the spatial accuracy of the final point clouds. It is shown that due to the inclusion of video the horizontal accuracy of the final point cloud improves from 0.61 m to 0.34 m (RMS error assessed against ground control). The effect of the very high density point clouds (up to 62 points per m²) produced by the UAV-LiDAR system on the measurement of tree location, height and crown width are also assessed by performing repeat surveys over individual isolated trees. The standard deviation of tree height is shown to reduce from 0.26 m, when using data with a density of 8 points per m², to 0.15 m when the higher density data was used. Improvements in the uncertainty of the measurement of tree location, 0.80 m to 0.53 m, and crown width, 0.69 m to 0.61 m are also shown.
A review on interval type-2 fuzzy logic applications in intelligent control. A review of the applications of interval type-2 fuzzy logic in intelligent control has been considered in this paper. The fundamental focus of the paper is based on the basic reasons for using type-2 fuzzy controllers for different areas of application. Recently, bio-inspired methods have emerged as powerful optimization algorithms for solving complex problems. In the case of designing type-2 fuzzy controllers for particular applications, the use of bio-inspired optimization methods have helped in the complex task of finding the appropriate parameter values and structure of the fuzzy systems. In this review, we consider the application of genetic algorithms, particle swarm optimization and ant colony optimization as three different paradigms that help in the design of optimal type-2 fuzzy controllers. We also mention alternative approaches to designing type-2 fuzzy controllers without optimization techniques.
Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods. We introduce a new benchmark, WinoBias, for coreference resolution focused on gender bias. Our corpus contains Winograd-schema style sentences with entities corresponding to people referred by their occupation (e.g. the nurse, the doctor, the carpenter). We demonstrate that a rule-based, a feature-rich, and a neural coreference system all link gendered pronouns to pro-stereotypical entities with higher accuracy than anti-stereotypical entities, by an average difference of 21.1 in F1 score. Finally, we demonstrate a data-augmentation approach that, in combination with existing word-embedding debiasing techniques, removes the bias demonstrated by these systems in WinoBias without significantly affecting their performance on existing coreference benchmark datasets. Our dataset and code are available at this http URL
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
1.04
0.04
0.04
0.04
0.04
0.026667
0.007059
0
0
0
0
0
0
0
Inferring Latent Traffic Demand Offered To An Overloaded Link With Modeling QoS-Degradation Effect In this paper, we propose a CTRIL (Common Trend and Regression with Independent Loss) model to infer latent traffic demand in overloaded links as well as how much it is reduced due to QoS (Quality of Service) degradation. To appropriately provision link bandwidth for such overloaded links, we need to infer how much traffic would increase without QoS degradation. Because the original latent traffic demand cannot be observed, we propose a method that compares the traffic time series of an underloaded link, assuming that the latent traffic demands of the overloaded and underloaded links follow a common pattern and that the actualized traffic demand in the overloaded link is decreased from this common pattern by the effect of QoS degradation. To realize the method, we developed the CTRIL model on the basis of a state-space model where observed traffic is generated from a latent trend but is decreased by the QoS degradation. By applying the CTRIL model to actual HTTP (Hypertext Transfer Protocol) traffic and QoS time series data, we reveal that 1% packet loss decreases traffic demand by 12.3%, and the estimated latent traffic demand is larger than the observed one by 23.0%.
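One plausible way to write such a common-trend state-space structure (my own notation and functional form, not necessarily the exact CTRIL specification): a shared latent trend mu_t drives both links, and the overloaded link's observed traffic is pulled below the trend in proportion to the QoS degradation q_t (e.g., packet loss).

```latex
\mu_t = \mu_{t-1} + \nu_t, \qquad
y^{\text{under}}_t = \alpha\,\mu_t + \varepsilon^{u}_t, \qquad
y^{\text{over}}_t = \mu_t - \beta\, q_t + \varepsilon^{o}_t
```

Under such a reading, the coefficient on q_t is what quantifies how much demand a given level of packet loss suppresses.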
Forecasting holiday daily tourist flow based on seasonal support vector regression with adaptive genetic algorithm. •The model of support vector regression with adaptive genetic algorithm and the seasonal mechanism is proposed. •Parameters selection and seasonal adjustment should be carefully selected. •We focus on latest and representative holiday daily data in China. •Two experiments are used to prove the effect of the model. •The AGASSVR is superior to AGA-SVR and BPNN.
Regression conformal prediction with random forests Regression conformal prediction produces prediction intervals that are valid, i.e., the probability of excluding the correct target value is bounded by a predefined confidence level. The most important criterion when comparing conformal regressors is efficiency; the prediction intervals should be as tight (informative) as possible. In this study, the use of random forests as the underlying model for regression conformal prediction is investigated and compared to existing state-of-the-art techniques, which are based on neural networks and k-nearest neighbors. In addition to their robust predictive performance, random forests allow for determining the size of the prediction intervals by using out-of-bag estimates instead of requiring a separate calibration set. An extensive empirical investigation, using 33 publicly available data sets, was undertaken to compare the use of random forests to existing state-of-the-art conformal predictors. The results show that the suggested approach, on almost all confidence levels and using both standard and normalized nonconformity functions, produced significantly more efficient conformal predictors than the existing alternatives.
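A minimal scikit-learn sketch of the approach the abstract describes, where out-of-bag predictions replace a separate calibration set; the simple empirical quantile below ignores the usual finite-sample correction, and all parameter values are illustrative.

```python
# Regression conformal prediction with a random forest: nonconformity scores
# come from out-of-bag predictions, so no separate calibration set is needed.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def conformal_rf_intervals(X_train, y_train, X_test, confidence=0.95):
    rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
    rf.fit(X_train, y_train)
    # Absolute out-of-bag residuals serve as nonconformity scores
    alphas = np.abs(y_train - rf.oob_prediction_)
    q = np.quantile(alphas, confidence)   # empirical quantile at the confidence level
    preds = rf.predict(X_test)
    return preds - q, preds + q           # lower and upper interval bounds
```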
Learning to Predict Bus Arrival Time From Heterogeneous Measurements via Recurrent Neural Network Bus arrival time prediction intends to improve the level of the services provided by transportation agencies. Intuitively, many stochastic factors affect the predictability of the arrival time, e.g., weather and local events. Moreover, the arrival time prediction for a current station is closely correlated with that of multiple passed stations. Motivated by the observations above, this paper propo...
Hybrid Spatio-Temporal Graph Convolutional Network: Improving Traffic Prediction with Navigation Data Traffic forecasting has recently attracted increasing interest due to the popularity of online navigation services, ridesharing and smart city projects. Owing to the non-stationary nature of road traffic, forecasting accuracy is fundamentally limited by the lack of contextual information. To address this issue, we propose the Hybrid Spatio-Temporal Graph Convolutional Network (H-STGCN), which is able to "deduce" future travel time by exploiting the data of upcoming traffic volume. Specifically, we propose an algorithm to acquire the upcoming traffic volume from an online navigation engine. Taking advantage of the piecewise-linear flow-density relationship, a novel transformer structure converts the upcoming volume into its equivalent in travel time. We combine this signal with the commonly-utilized travel-time signal, and then apply graph convolution to capture the spatial dependency. Particularly, we construct a compound adjacency matrix which reflects the innate traffic proximity. We conduct extensive experiments on real-world datasets. The results show that H-STGCN remarkably outperforms state-of-the-art methods in various metrics, especially for the prediction of non-recurring congestion.
Long-Term Traffic Speed Prediction Based on Multiscale Spatio-Temporal Feature Learning Network Speed plays a significant role in evaluating the evolution of traffic status, and predicting speed is one of the fundamental tasks for the intelligent transportation system. There exists a large number of works on speed forecast; however, the problem of long-term prediction for the next day is still not well addressed. In this paper, we propose a multiscale spatio-temporal feature learning network (MSTFLN) as the model to handle the challenging task of long-term traffic speed prediction for elevated highways. Raw traffic speed data collected from loop detectors every 5 min are transformed into spatial-temporal matrices; each matrix represents the one-day speed information, rows of the matrix indicate the numbers of loop detectors, and time intervals are denoted by columns. To predict the traffic speed of a certain day, nine speed matrices of three historical days with three different time scales are served as the input of MSTFLN. The proposed MSTFLN model consists of convolutional long short-term memories and convolutional neural networks. Experiments are evaluated using the data of three main elevated highways in Shanghai, China. The presented results demonstrate that our approach outperforms the state-of-the-art work and it can effectively predict the long-term speed information.
Transfer Knowledge between Cities The rapid urbanization has motivated extensive research on urban computing. It is critical for urban computing tasks to unlock the power of the diversity of data modalities generated by different sources in urban spaces, such as vehicles and humans. However, we are more likely to encounter the label scarcity problem and the data insufficiency problem when solving an urban computing task in a city where services and infrastructures are not ready or just built. In this paper, we propose a FLexible multimOdal tRAnsfer Learning (FLORAL) method to transfer knowledge from a city where there exist sufficient multimodal data and labels, to this kind of cities to fully alleviate the two problems. FLORAL learns semantically related dictionaries for multiple modalities from a source domain, and simultaneously transfers the dictionaries and labelled instances from the source into a target domain. We evaluate the proposed method with a case study of air quality prediction.
Space-time modeling of traffic flow. This paper discusses the application of space-time autoregressive integrated moving average (STARIMA) methodology for representing traffic flow patterns. Traffic flow data are in the form of spatial time series and are collected at specific locations at constant intervals of time. Important spatial characteristics of the space-time process are incorporated in the STARIMA model through the use of weighting matrices estimated on the basis of the distances among the various locations where data are collected. These matrices distinguish the space-time approach from the vector autoregressive moving average (VARMA) methodology and enable the model builders to control the number of the parameters that have to be estimated. The proposed models can be used for short-term forecasting of space-time stationary traffic-flow processes and for assessing the impact of traffic-flow changes on other parts of the network. The three-stage iterative space-time model building procedure is illustrated using 7.5min average traffic flow data for a set of 25 loop-detectors located at roads that direct to the centre of the city of Athens, Greece. Data for two months with different traffic-flow characteristics are modelled in order to determine the stability of the parameter estimation.
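For reference, the STARMA model class fitted in such studies can be written in standard notation, with W^(0) = I and W^(l) the l-th order spatial weight matrix, applied after any differencing that supplies the "I" part:

```latex
z_t = \sum_{k=1}^{p} \sum_{l=0}^{\lambda_k} \phi_{kl}\, W^{(l)} z_{t-k}
    - \sum_{k=1}^{q} \sum_{l=0}^{m_k} \theta_{kl}\, W^{(l)} \varepsilon_{t-k}
    + \varepsilon_t
```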
Model-based periodic event-triggered control for linear systems Periodic event-triggered control (PETC) is a control strategy that combines ideas from conventional periodic sampled-data control and event-triggered control. By communicating periodically sampled sensor and controller data only when needed to guarantee stability or performance properties, PETC is capable of reducing the number of transmissions significantly, while still retaining a satisfactory closed-loop behavior. In this paper, we will study observer-based controllers for linear systems and propose advanced event-triggering mechanisms (ETMs) that will reduce communication in both the sensor-to-controller channels and the controller-to-actuator channels. By exploiting model-based computations, the new classes of ETMs will outperform existing ETMs in the literature. To model and analyze the proposed classes of ETMs, we present two frameworks based on perturbed linear and piecewise linear systems, leading to conditions for global exponential stability and L2-gain performance of the resulting closed-loop systems in terms of linear matrix inequalities. The proposed analysis frameworks can be used to make tradeoffs between the network utilization on the one hand and the performance in terms of L2-gains on the other. In addition, we will show that the closed-loop performance realized by an observer-based controller, implemented in a conventional periodic time-triggered fashion, can be recovered arbitrarily closely by a PETC implementation. This provides a justification for emulation-based design. Next to centralized model-based ETMs, we will also provide a decentralized setup suitable for large-scale systems, where sensors and actuators are physically distributed over a wide area. The improvements realized by the proposed model-based ETMs will be demonstrated using numerical examples.
Affective social robots For human-robot interaction to proceed in a smooth, natural manner, robots must adhere to human social norms. One such human convention is the use of expressive moods and emotions as an integral part of social interaction. Such expressions are used to convey messages such as ''I'm happy to see you'' or ''I want to be comforted,'' and people's long-term relationships depend heavily on shared emotional experiences. Thus, we have developed an affective model for social robots. This generative model attempts to create natural, human-like affect and includes distinctions between immediate emotional responses, the overall mood of the robot, and long-term attitudes toward each visitor to the robot, with a focus on developing long-term human-robot relationships. This paper presents the general affect model as well as particular details of our implementation of the model on one robot, the Roboceptionist. In addition, we present findings from two studies that demonstrate the model's potential.
Rich Models for Steganalysis of Digital Images We describe a novel general strategy for building steganography detectors for digital images. The process starts with assembling a rich model of the noise component as a union of many diverse submodels formed by joint distributions of neighboring samples from quantized image noise residuals obtained using linear and nonlinear high-pass filters. In contrast to previous approaches, we make the model assembly a part of the training process driven by samples drawn from the corresponding cover- and stego-sources. Ensemble classifiers are used to assemble the model as well as the final steganalyzer due to their low computational complexity and ability to efficiently work with high-dimensional feature spaces and large training sets. We demonstrate the proposed framework on three steganographic algorithms designed to hide messages in images represented in the spatial domain: HUGO, edge-adaptive algorithm by Luo, and optimally coded ternary ±1 embedding. For each algorithm, we apply a simple submodel-selection technique to increase the detection accuracy per model dimensionality and show how the detection saturates with increasing complexity of the rich model. By observing the differences between how different submodels engage in detection, an interesting interplay between the embedding and detection is revealed. Steganalysis built around rich image models combined with ensemble classifiers is a promising direction towards automatizing steganalysis for a wide spectrum of steganographic schemes.
Heterogeneous ensemble for feature drifts in data streams The nature of data streams requires classification algorithms to be real-time, efficient, and able to cope with high-dimensional data that are continuously arriving. It is a known fact that in high-dimensional datasets, not all features are critical for training a classifier. To improve the performance of data stream classification, we propose an algorithm called HEFT-Stream (Heterogeneous Ensemble with Feature drifT for Data Streams) that incorporates feature selection into a heterogeneous ensemble to adapt to different types of concept drifts. As an example of the proposed framework, we first modify the FCBF [13] algorithm so that it dynamically updates the relevant feature subsets for data streams. Next, a heterogeneous ensemble is constructed based on different online classifiers, including Online Naive Bayes and CVFDT [5]. Empirical results show that our ensemble classifier outperforms state-of-the-art ensemble classifiers (AWE [15] and OnlineBagging [21]) in terms of accuracy, speed, and scalability. The success of HEFT-Stream opens new research directions in understanding the relationship between feature selection techniques and ensemble learning to achieve better classification performance.
Orientation-aware RFID tracking with centimeter-level accuracy. RFID tracking has attracted a lot of research effort in recent years. Most of the existing approaches, however, adopt an orientation-oblivious model. When tracking a target whose orientation changes, those approaches suffer from serious accuracy degradation. In order to achieve target tracking with pervasive applicability in various scenarios, we in this paper propose OmniTrack, an orientation-aware RFID tracking approach. Our study discovers the linear relationship between the tag orientation and the phase change of the backscattered signals. Based on this finding, we propose an orientation-aware phase model to explicitly quantify the respective impact of the reader-tag distance and the tag's orientation. OmniTrack addresses practical challenges in tracking the location and orientation of a mobile tag. Our experimental results demonstrate that OmniTrack achieves centimeter-level location accuracy and has significant advantages in tracking targets with varying orientations, compared to the state-of-the-art approaches.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.22
0.22
0.22
0.22
0.22
0.22
0.073333
0.003542
0
0
0
0
0
0
Traffic Speed Prediction: An Attention-Based Method. Short-term traffic speed prediction has become one of the most important parts of intelligent transportation systems (ITSs). In recent years, deep learning methods have demonstrated their superiority both in accuracy and efficiency. However, most of them only consider the temporal information, overlooking the spatial or some environmental factors, especially the different correlations between the target road and the surrounding roads. This paper proposes a traffic speed prediction approach based on temporal clustering and hierarchical attention (TCHA) to address the above issues. We apply temporal clustering to the target road to distinguish the traffic environment. Traffic data in each cluster have a similar distribution, which can help improve the prediction accuracy. A hierarchical attention-based mechanism is then used to extract the features at each time step. The encoder measures the importance of spatial features, and the decoder measures the temporal ones. The proposed method is evaluated over the data of a certain area in Hangzhou, and experiments have shown that this method can outperform the state of the art for traffic speed prediction.
Knowledge harvesting in the big-data era The proliferation of knowledge-sharing communities such as Wikipedia and the progress in scalable information extraction from Web and text sources have enabled the automatic construction of very large knowledge bases. Endeavors of this kind include projects such as DBpedia, Freebase, KnowItAll, ReadTheWeb, and YAGO. These projects provide automatically constructed knowledge bases of facts about named entities, their semantic classes, and their mutual relationships. They contain millions of entities and hundreds of millions of facts about them. Such world knowledge in turn enables cognitive applications and knowledge-centric services like disambiguating natural-language text, semantic search for entities and relations in Web and enterprise data, and entity-oriented analytics over unstructured contents. Prominent examples of how knowledge bases can be harnessed include the Google Knowledge Graph and the IBM Watson question answering system. This tutorial presents state-of-the-art methods, recent advances, research opportunities, and open challenges along this avenue of knowledge harvesting and its applications. Particular emphasis will be on the twofold role of knowledge bases for big-data analytics: using scalable distributed algorithms for harvesting knowledge from Web and text sources, and leveraging entity-centric knowledge for deeper interpretation of and better intelligence with Big Data.
Reservoir computing approaches to recurrent neural network training Echo State Networks and Liquid State Machines introduced a new paradigm in artificial recurrent neural network (RNN) training, where an RNN (the reservoir) is generated randomly and only a readout is trained. The paradigm, becoming known as reservoir computing, greatly facilitated the practical application of RNNs and outperformed classical fully trained RNNs in many tasks. It has lately become a vivid research field with numerous extensions of the basic idea, including reservoir adaptation, thus broadening the initial paradigm to using different methods for training the reservoir and the readout. This review systematically surveys both current ways of generating/adapting the reservoirs and training different types of readouts. It offers a natural conceptual classification of the techniques, which transcends boundaries of the current “brand-names” of reservoir methods, and thus aims to help in unifying the field and providing the reader with a detailed “map” of it.
Comment on "On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes" Comparison of generative and discriminative classifiers is an ever-lasting topic. As an important contribution to this topic, based on their theoretical and empirical comparisons between the naïve Bayes classifier and linear logistic regression, Ng and Jordan (NIPS 841---848, 2001) claimed that there exist two distinct regimes of performance between the generative and discriminative classifiers with regard to the training-set size. In this paper, our empirical and simulation studies, as a complement of their work, however, suggest that the existence of the two distinct regimes may not be so reliable. In addition, for real world datasets, so far there is no theoretically correct, general criterion for choosing between the discriminative and the generative approaches to classification of an observation x into a class y; the choice depends on the relative confidence we have in the correctness of the specification of either p(y|x) or p(x, y) for the data. This can be to some extent a demonstration of why Efron (J Am Stat Assoc 70(352):892---898, 1975) and O'Neill (J Am Stat Assoc 75(369):154---160, 1980) prefer normal-based linear discriminant analysis (LDA) when no model mis-specification occurs but other empirical studies may prefer linear logistic regression instead. Furthermore, we suggest that pairing of either LDA assuming a common diagonal covariance matrix (LDA-驴) or the naïve Bayes classifier and linear logistic regression may not be perfect, and hence it may not be reliable for any claim that was derived from the comparison between LDA-驴 or the naïve Bayes classifier and linear logistic regression to be generalised to all generative and discriminative classifiers.
A Survey of the Usages of Deep Learning for Natural Language Processing Over the last several years, the field of natural language processing has been propelled forward by an explosion in the use of deep learning models. This article provides a brief introduction to the field and a quick overview of deep learning architectures and methods. It then sifts through the plethora of recent studies and summarizes a large assortment of relevant contributions. Analyzed research areas include several core linguistic processing issues in addition to many applications of computational linguistics. A discussion of the current state of the art is then provided along with recommendations for future research in the field.
Estimation of missing values in heterogeneous traffic data: Application of multimodal deep learning model With the development of sensing technology, a large amount of heterogeneous traffic data can be collected. However, the raw data often contain corrupted or missing values, which need to be imputed to aid traffic condition monitoring and the assessment of the system performance. Several existing studies have reported imputation models used to impute the missing values, and most of these models aimed to capture the spatial or temporal dependencies. However, the dependencies of the heterogeneous data were ignored. To this end, we propose a multimodal deep learning model to enable heterogeneous traffic data imputation. The model involves the use of two parallel stacked autoencoders that can simultaneously consider the spatial and temporal dependencies. In addition, a latent feature fusion layer is developed to capture the dependencies of the heterogeneous traffic data. To train the proposed imputation model, a hierarchical training method is introduced. Using a real world dataset, the performance of the proposed model is evaluated and compared with that of several widely used temporal imputation models, spatial imputation models, and spatial–temporal imputation models. The experimental and evaluation results indicate that the values of the evaluation criteria of the proposed model are smaller, indicating a better performance. The results also show that the proposed model can accurately impute the continuously missing data. Furthermore, the sensitivity of the parameters used in the proposed multimodal deep learning model is investigated. This study clearly demonstrates the effectiveness of deep learning for heterogeneous traffic data synthesis and missing data imputation. The dependencies of the heterogeneous traffic data should be considered in future studies to improve the performance of the imputation model.
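A rough Keras sketch of the parallel-autoencoder-with-fusion idea described above; the two input branches stand in for the spatial and temporal views, and every size, layer count, and training detail here is an assumption rather than the paper's configuration.

```python
# Two parallel encoders (spatial and temporal views of a detector), a shared
# fusion layer, and decoders that reconstruct both inputs; corrupted entries
# can then be replaced by the reconstruction.
import tensorflow as tf

D_SPATIAL, D_TEMPORAL, D_LATENT = 64, 48, 32

x_s = tf.keras.Input(shape=(D_SPATIAL,))      # spatial context of a detector
x_t = tf.keras.Input(shape=(D_TEMPORAL,))     # its recent temporal profile
h_s = tf.keras.layers.Dense(D_LATENT, activation="relu")(x_s)
h_t = tf.keras.layers.Dense(D_LATENT, activation="relu")(x_t)
fused = tf.keras.layers.Dense(D_LATENT, activation="relu")(
    tf.keras.layers.Concatenate()([h_s, h_t]))          # latent feature fusion
out_s = tf.keras.layers.Dense(D_SPATIAL)(fused)
out_t = tf.keras.layers.Dense(D_TEMPORAL)(fused)

model = tf.keras.Model(inputs=[x_s, x_t], outputs=[out_s, out_t])
model.compile(optimizer="adam", loss="mse")
# model.fit([X_spatial, X_temporal], [X_spatial, X_temporal], ...) trains it as
# a denoising-style imputer on the observed (non-missing) entries.
```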
Short-Term Traffic Prediction Using Long Short-Term Memory Neural Networks Short-term traffic prediction allows Intelligent Transport Systems to proactively respond to events before they happen. With the rapid increase in the amount, quality, and detail of traffic data, new techniques are required that can exploit the information in the data in order to provide better results while being able to scale and cope with increasing amounts of data and growing cities. We propose and compare three models for short-term road traffic density prediction based on Long Short-Term Memory (LSTM) neural networks. We have trained the models using real traffic data collected by Motorway Control System in Stockholm that monitors highways and collects flow and speed data per lane every minute from radar sensors. In order to deal with the challenge of scale and to improve prediction accuracy, we propose to partition the road network into road stretches and junctions, and to model each of the partitions with one or more LSTM neural networks. Our evaluation results show that partitioning of roads improves the prediction accuracy by reducing the root mean square error by the factor of 5. We show that we can reduce the complexity of LSTM network by limiting the number of input sensors, on average to 35% of the original number, without compromising the prediction accuracy.
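As a concrete (and deliberately simplified) illustration of one per-partition predictor of the kind described above, here is a small Keras LSTM that maps a sliding window of per-minute sensor readings to the next density value; the window length, sensor count, and layer sizes are placeholders, not the paper's settings.

```python
# One road-stretch model: a sliding window of recent sensor readings in,
# the next traffic density value out.
import numpy as np
import tensorflow as tf

WINDOW = 30          # minutes of history per sample
N_SENSORS = 12       # sensors feeding this partition's model

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_SENSORS)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),    # predicted density at the next time step
])
model.compile(optimizer="adam", loss="mse")

# X: (samples, WINDOW, N_SENSORS) sliding windows; y: next-step densities
X = np.random.rand(256, WINDOW, N_SENSORS).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```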
Flow Prediction in Spatio-Temporal Networks Based on Multitask Deep Learning. Predicting flows (e.g., the traffic of vehicles, crowds, and bikes), consisting of the in-out traffic at a node and transitions between different nodes, in a spatio-temporal network plays an important role in transportation systems. However, this is a very challenging problem, affected by multiple complex factors, such as the spatial correlation between different locations, temporal correlation am...
A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm Swarm intelligence is a research branch that models the population of interacting agents or swarms that are able to self-organize. An ant colony, a flock of birds or an immune system is a typical example of a swarm system. Bees' swarming around their hive is another example of swarm intelligence. Artificial Bee Colony (ABC) Algorithm is an optimization algorithm based on the intelligent behaviour of honey bee swarm. In this work, ABC algorithm is used for optimizing multivariable functions and the results produced by ABC, Genetic Algorithm (GA), Particle Swarm Algorithm (PSO) and Particle Swarm Inspired Evolutionary Algorithm (PS-EA) have been compared. The results showed that ABC outperforms the other algorithms.
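A compact Python sketch of the core ABC loop for function minimization; the onlooker phase is folded away for brevity and all control parameters are illustrative, so this is a reading of the general algorithm rather than the exact procedure benchmarked in the paper.

```python
# Employed bees perturb food sources along a random dimension towards/away from
# a random neighbour; sources that fail to improve too often are abandoned by scouts.
import numpy as np

def abc_minimize(f, bounds, n_food=20, limit=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    X = rng.uniform(lo, hi, size=(n_food, dim))      # candidate food sources
    fx = np.apply_along_axis(f, 1, X)                # their objective values
    trials = np.zeros(n_food, dtype=int)
    for _ in range(iters):
        for i in range(n_food):                      # employed-bee phase
            k = rng.choice([j for j in range(n_food) if j != i])
            j = int(rng.integers(dim))
            v = X[i].copy()
            v[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
            v = np.clip(v, lo, hi)
            fv = f(v)
            if fv < fx[i]:                           # greedy replacement
                X[i], fx[i], trials[i] = v, fv, 0
            else:
                trials[i] += 1
        for i in np.where(trials > limit)[0]:        # scout phase: reinitialize
            X[i] = rng.uniform(lo, hi)
            fx[i], trials[i] = f(X[i]), 0
    best = int(np.argmin(fx))
    return X[best], fx[best]

# Example: minimize the 5-dimensional sphere function
best_x, best_val = abc_minimize(lambda x: float(np.sum(x ** 2)),
                                ([-5.0] * 5, [5.0] * 5))
print(best_val)
```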
Local control strategies for groups of mobile autonomous agents The problem of achieving a specified formation among a group of mobile autonomous agents by distributed control is studied. If convergence to a point is feasible, then more general formations are achievable too, so the focus is on convergence to a point (the agreement problem). Three formation strategies are studied and convergence is proved under certain conditions. Also, motivated by the question of whether collisions occur, formation evolution is studied.
Redundancy, Efficiency and Robustness in Multi-Robot Coverage Area coverage is an important task for mobile robots, with many real-world applications. Motivated by potential efficiency and robustness improvements, there is growing interest in the use of multiple robots in coverage. Previous investigations of multi-robot coverage focus on completeness and eliminating redundancy, but do not formally address robustness, nor examine the impact of the initial positions of robots on the coverage time. Indeed, a common assumption is that non-redundancy leads to improved coverage time. We address robustness and efficiency in a family of multi-robot coverage algorithms, based on spanning-tree coverage of approximate cell decomposition. We analytically show that the algorithms are robust, in that as long as a single robot is able to move, the coverage will be completed. We also show that non-redundant (non-backtracking) versions of the algorithms have a worst-case coverage time virtually identical to that of a single robot; thus no performance gain is guaranteed in non-redundant coverage. Moreover, this worst case is in fact common in real-world applications. Surprisingly, however, redundant coverage algorithms lead to guaranteed performance which halves the coverage time even in the worst case. A coverage algorithm produces a path that completely covers the work-area. We want multi-robot algorithms to be not only complete, but also efficient (in that they minimize the time it takes to cover the area), non-backtracking (in that any portion of the work area is covered only once), and robust (in that they can handle catastrophic robot failures). Previous investigations that examine the use of multiple robots in coverage mostly focus on completeness and non-backtracking. However, much of previous work does not formally consider robustness. Moreover, while completeness and non-backtracking properties are sufficient to show that a single-robot coverage algorithm is also efficient (in coverage time), it turns out that this is not true in the general case. Surprisingly, in multi-robot coverage, non-backtracking and efficiency are independent optimization criteria: non-backtracking algorithms may be inefficient, and efficient algorithms may be backtracking. Finally, the initial position of robots in the work-area significantly affects the completion time of the coverage, both in backtracking and non-backtracking algorithms. Yet no bounds are known for the coverage completion time, as a function of the number of robots and their initial placement. This paper examines robustness and efficiency in multi-robot coverage. We focus on coverage using a map of the work-area (known as off-line coverage (1)). We assume the tool to be a square of size D. The work-area is then approximately decomposed into cells, where each cell is a square of size 4D, i.e., a square of four tool-size sub-cells. As with other approximate cell-decomposition approaches ((1)), cells that are partially covered, by obstacles or the bounds of the work-area, are discarded from consideration. We use an algorithm based on a spanning-tree to extract a path that visits all sub-cells. Previous work on generating such a path (called STC for Spanning-Tree Coverage) has shown it to be complete and non-backtracking (3). We present a family of novel algorithms, called MSTC (Multirobot Spanning-Tree Coverage), that address robustness and efficiency. First, we construct a non-backtracking MSTC algorithm that is guaranteed to be robust: it guarantees that the work-area will be completely covered in finite time, as long as at least a single robot is functioning correctly. We
Online Coordinated Charging Decision Algorithm for Electric Vehicles Without Future Information The large-scale integration of plug-in electric vehicles (PEVs) to the power grid spurs the need for efficient charging coordination mechanisms. It can be shown that the optimal charging schedule smooths out the energy consumption over time so as to minimize the total energy cost. In practice, however, it is hard to smooth out the energy consumption perfectly, because the future PEV charging demand is unknown at the moment when the charging rate of an existing PEV needs to be determined. In this paper, we propose an online coordinated charging decision (ORCHARD) algorithm, which minimizes the energy cost without knowing the future information. Through rigorous proof, we show that ORCHARD is strictly feasible in the sense that it guarantees to fulfill all charging demands before due time. Meanwhile, it achieves the best known competitive ratio of 2.39. By exploiting the problem structure, we propose a novel reduced-complexity algorithm to replace the standard convex optimization techniques used in ORCHARD. Through extensive simulations, we show that the average performance gap between ORCHARD and the offline optimal solution, which utilizes the complete future information, is as small as 6.5%. By setting a proper speeding factor, the average performance gap can be further reduced to 5%.
SmartVeh: Secure and Efficient Message Access Control and Authentication for Vehicular Cloud Computing. With the growing number of vehicles and popularity of various services in vehicular cloud computing (VCC), message exchanging among vehicles under traffic conditions and in emergency situations is one of the most pressing demands, and has attracted significant attention. However, it is an important challenge to authenticate the legitimate sources of broadcast messages and achieve fine-grained message access control. In this work, we propose SmartVeh, a secure and efficient message access control and authentication scheme in VCC. A hierarchical, attribute-based encryption technique is utilized to achieve fine-grained and flexible message sharing, which ensures that vehicles whose persistent or dynamic attributes satisfy the access policies can access the broadcast message with equipped on-board units (OBUs). Message authentication is enforced by integrating an attribute-based signature, which achieves message authentication and maintains the anonymity of the vehicles. In order to reduce the computations of the OBUs in the vehicles, we outsource the heavy computations of encryption, decryption and signing to a cloud server and road-side units. The theoretical analysis and simulation results reveal that our secure and efficient scheme is suitable for VCC.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.1
0.1
0.1
0.1
0.1
0.1
0.1
0.025
0
0
0
0
0
0
Flight Test of the Novel Fixed-Wing Multireference Multiscale LN Guidance Logic for Complex Path Following This paper presents the flight test verification and validation of a novel multi-reference longitudinal and lateral-directional guidance logic across multiple autonomous aircraft with distinctly different dynamics. LN guidance logic takes advantage of tracking a span of multiple references across the desired track, similar to an insect's compound eyes, allowing for a much higher temporal resolution and more predictive and anticipatory guidance logic. The stability of LN guidance algorithms is investigated using global Lyapunov stability theorems and its stability is proved. Validation and verification flight tests of LN lateral-directional guidance logic demonstrate the superior tracking performance and adaptability of this novel method compared to other methods such as L1 or L2+. The LN longitudinal guidance is newly developed in this paper, shows excellent altitude tracking capabilities, and is similarly scalable. Statistical analysis of multiple flight tests involving varying environmental states with three unique aircraft and different advanced flight controllers is shown to prove the capabilities, robustness, and consistency of LN guidance.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
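A minimal Keras sketch of the bidirectional idea: two recurrent passes over the sequence, one in each time direction, with their outputs combined before per-frame classification; the sizes and the phoneme-style output are illustrative assumptions.

```python
# Per-frame sequence classifier with forward and backward recurrent passes
# whose outputs are concatenated at every time step.
import tensorflow as tf

T, F, N_CLASSES = 50, 13, 40     # frames, features per frame, phoneme classes
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, F)),
    tf.keras.layers.Bidirectional(
        tf.keras.layers.SimpleRNN(64, return_sequences=True)),
    tf.keras.layers.TimeDistributed(
        tf.keras.layers.Dense(N_CLASSES, activation="softmax")),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
```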
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended to Bob, and non-repudiation of receipt evidences destined to Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with trusted third party (TTP). For the later ones we discuss the evolution of the TTP's involvement and, between others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probability. Because there are no crossover rate and mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than the conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results over the conventional one and other methods.
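A hedged sketch of a GA in which crossover and mutation are triggered by conditions rather than fixed rates, applied to a toy set-covering instance; the particular conditions used below (cross over only when parents differ, otherwise copy and force a mutation) are stand-ins of my own, since the abstract does not state the paper's exact rules.

```python
# Conditional genetic operators on a toy set-covering instance: no crossover
# or mutation rates are tuned; the operators fire when their condition holds.
import numpy as np

def cost(ind, cover, penalty=10.0):
    """Number of selected sets plus a penalty for every uncovered element."""
    covered = cover[ind.astype(bool)].any(axis=0)
    return ind.sum() + penalty * np.count_nonzero(~covered)

def conditional_ga(cover, pop_size=40, gens=300, seed=0):
    rng = np.random.default_rng(seed)
    n_sets = cover.shape[0]
    pop = rng.integers(0, 2, size=(pop_size, n_sets))
    for _ in range(gens):
        fits = np.array([cost(p, cover) for p in pop])
        pop = pop[np.argsort(fits)]                    # best individuals first
        nxt = [pop[0].copy()]                          # keep the elite unchanged
        while len(nxt) < pop_size:
            a = pop[rng.integers(pop_size // 2)]       # parents from the better half
            b = pop[rng.integers(pop_size // 2)]
            if not np.array_equal(a, b):               # condition: parents differ -> crossover
                cut = int(rng.integers(1, n_sets))
                child = np.concatenate([a[:cut], b[cut:]])
            else:                                      # condition: identical parents -> copy and mutate
                child = a.copy()
                child[rng.integers(n_sets)] ^= 1
            nxt.append(child)
        pop = np.array(nxt)
    best = min(pop, key=lambda p: cost(p, cover))
    return best, cost(best, cover)

# Toy instance: 6 candidate sets over 8 elements (rows cover columns)
cover = np.random.default_rng(1).random((6, 8)) > 0.6
best, best_cost = conditional_ga(cover)
```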
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal to develop interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which rise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in specific, in this paper, we enfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidate it based on our comprehensive empirical experience with research in kindergarten settings, both laboratory and real-world contexts, and (iii) provide a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Fast Charging Scheduling under the Nonlinear Superposition Model with Adjustable Phases. Wireless energy transfer has been widely studied in recent decades, with existing works mainly focused on maximizing network lifetime, optimizing charging efficiency, and optimizing charging quality. All these works use a charging model with the linear superposition, which may not be the most accurate. We apply a nonlinear superposition model, and we consider the Fast Charging Scheduling problem (FCS): Given multiple chargers and a group of sensors, how can the chargers be optimally scheduled over the time dimension so that the total charging time is minimized and each sensor has at least energy E? We prove that FCS is NP-complete and propose a 2-approximation algorithm to solve it on a one-dimensional (1D) line. In a 2D plane, we first consider a special case of FCS, where the initial phases of all chargers are the same, and propose an algorithm to solve it, which has a bound. Then we propose an algorithm to solve FCS in a general 2D plane. Unlike other algorithms, our algorithm does not need to calculate the combined energy of every possible combination of chargers in advance, which greatly reduces the complexity. Extensive simulations demonstrate that our algorithm performs almost as well as the optimal algorithm.
RFID-based techniques for human-activity detection The iBracelet and the Wireless Identification and Sensing Platform promise the ability to infer human activity directly from sensor readings.
Providing DoS resistance for signature-based broadcast authentication in sensor networks Recent studies have demonstrated that it is feasible to perform public key cryptographic operations on resource-constrained sensor platforms. However, the significant energy consumption introduced by public key operations makes any public key-based protocol an easy target of Denial-of-Service (DoS) attacks. For example, if digital signature schemes such as ECDSA are used directly for broadcast authentication without further protection, an attacker can simply broadcast fake messages and force the receiving nodes to perform a huge number of unnecessary signature verifications, eventually exhausting their battery power. This paper shows how to mitigate such DoS attacks when digital signatures are used for broadcast authentication in sensor networks. Specifically, this paper first presents two filtering techniques, the group-based filter and the key chain-based filter, to handle the DoS attacks against signature verification. Both methods can significantly reduce the number of unnecessary signature verifications when a sensor node is under DoS attacks. This paper then combines these two filters and proposes a hybrid solution to further improve the performance.
On Resilience and Connectivity of Secure Wireless Sensor Networks Under Node Capture Attacks. Despite much research on probabilistic key predistribution schemes for wireless sensor networks over the past decade, few formal analyses exist that define schemes’ resilience to node-capture attacks precisely and under realistic conditions. In this paper, we analyze the resilience of the $q$ -composite key predistribution scheme, which mitigates the node capture vulnerability of the Eschenauer-Gligor scheme in the neighbor discovery phase. We derive scheme parameters to have a desired level of resiliency, and obtain optimal parameters that defend against different adversaries as much as possible. We also show that this scheme can be easily enhanced to achieve the same “perfect resilience” property as in the random pairwise key predistribution for attacks launched after neighbor discovery. Despite considerable attention to this scheme, much prior work explicitly or implicitly uses an incorrect computation for the probability of link compromise under node-capture attacks and ignores the real-world transmission constraints of sensor nodes. Moreover, we derive the critical network parameters to ensure connectivity in both the absence and presence of node-capture attacks. We also investigate node replication attacks by analyzing the adversary’s optimal strategy.
Radiation Constrained Wireless Charger Placement Wireless Power Transfer has become a commercially viable technology to charge devices because of the convenience of no power wiring and the reliability of continuous power supply. This paper concerns the fundamental issue of wireless charger placement with electromagnetic radiation (EMR) safety. Although a few wireless charging schemes consider EMR safety, none of them addresses the charger placement issue. In this paper, we propose PESA, a wireless charger Placement scheme that guarantees EMR SAfety for every location on the plane. First, we discretize the whole charging area and formulate the problem as the Multidimensional 0/1 Knapsack (MDK) problem. Second, we propose a fast approximation algorithm for the MDK problem. Third, we propose a near-optimal scheme to improve speed by double partitioning the area. We prove that the output of our algorithm is better than $(1-\epsilon)$ of the optimal solution to PESA with a smaller EMR threshold $(1-\epsilon/2)R_{t}$ and a larger EMR coverage radius $(1+\epsilon/2)D$. We conducted both simulations and field experiments to evaluate the performance of our scheme. Our experimental results show that in terms of charging utility, our algorithm outperforms the comparison algorithms.
CoDoC: A Novel Attack for Wireless Rechargeable Sensor Networks through Denial of Charge Wireless rechargeable sensor networks (WRSNs), benefiting from recent breakthroughs in wireless power transfer (WPT) technology, have emerged as very promising for network lifetime extension. Traditional methods focus on scheduling algorithms and system optimization, while the issue of charging security/threats is ignored, leaving them vulnerable to attacks. In this paper, we develop a novel attack on WRSNs through Denial of Charge (DoC) that aims at maximizing destructiveness. At first, we form a generalized on-demand charging model, which provides a fundamental basis for designing charging attacks. Then a request prediction method (RPM) is introduced for predicting the emergence of charging requests. Afterwards, a Collaborative DoC attacking algorithm (CoDoC) is developed, which tampers with or modifies charging requests and generates fake ones, leaving normal nodes exhausted. Finally, to demonstrate the effectiveness of CoDoC, extensive simulations and test-bed experiments are conducted. The results show that CoDoC outperforms the baselines in exhausting sensor nodes and causing missed events.
Fuzzy logic in control systems: fuzzy logic controller. I.
An introduction to ROC analysis Receiver operating characteristics (ROC) graphs are useful for organizing classifiers and visualizing their performance. ROC graphs are commonly used in medical decision making, and in recent years have been used increasingly in machine learning and data mining research. Although ROC graphs are apparently simple, there are some common misconceptions and pitfalls when using them in practice. The purpose of this article is to serve as an introduction to ROC graphs and as a guide for using them in research.
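The following is a minimal, self-contained sketch of the kind of computation the article introduces: building ROC points and an AUC value from classifier scores and binary labels by sweeping the decision threshold. The function names and the toy data are illustrative, and ties between scores are not treated specially.

```python
# Minimal sketch: ROC points and AUC from scores and binary labels (illustrative only).
import numpy as np

def roc_curve_points(scores, labels):
    """Return (FPR, TPR) arrays obtained by sweeping the decision threshold."""
    order = np.argsort(-np.asarray(scores))        # sort candidates by descending score
    labels = np.asarray(labels)[order]
    P = labels.sum()                               # number of positives
    N = len(labels) - P                            # number of negatives
    tpr = np.cumsum(labels) / P                    # true positive rate at each cut-off
    fpr = np.cumsum(1 - labels) / N                # false positive rate at each cut-off
    return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

def auc(fpr, tpr):
    """Area under the ROC curve via the trapezoidal rule."""
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3]
labels = [1, 1, 0, 1, 0, 0, 0]
f, t = roc_curve_points(scores, labels)
print(auc(f, t))
```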
A Comprehensive Survey on Internet of Things (IoT) Toward 5G Wireless Systems Recently, wireless technologies have been growing actively all around the world. In the context of wireless technology, fifth-generation (5G) technology has become one of the most challenging and interesting topics in wireless research. This article provides an overview of the Internet of Things (IoT) in 5G wireless systems. IoT in the 5G system will be a game changer for the future generation. It will open the door to new wireless architectures and smart services. The current LTE (4G) cellular network will not be sufficient or efficient to meet the demands of massive device connectivity, high data rates, more bandwidth, low-latency quality of service (QoS), and low interference. To address these challenges, we consider 5G as the most promising technology. We provide a detailed overview of the challenges and the vision of various communication industries for 5G IoT systems. The different layers in 5G IoT systems are discussed in detail. This article provides a comprehensive review on emerging and enabling technologies related to the 5G system that enables IoT. We consider the technology drivers for 5G wireless technology, such as 5G new radio (NR), multiple-input multiple-output (MIMO) antennas with beamforming technology, mm-wave communication technology, heterogeneous networks (HetNets), and the role of augmented reality (AR) in IoT, which are discussed in detail. We also provide a review on low-power wide-area networks (LPWANs), security challenges, and their countermeasures in the 5G IoT scenario. This article introduces the role of AR in the 5G IoT scenario. This article also discusses the research gaps and future directions. The focus is also on application areas of IoT in 5G systems. We, therefore, outline some of the important research directions in 5G IoT.
A communication robot in a shopping mall This paper reports our development of a communication robot for use in a shopping mall to provide shopping information, offer route guidance, and build rapport. In the development, the major difficulties included sensing human behaviors, conversation in a noisy daily environment, and the need for unexpected miscellaneous knowledge in the conversation. We chose a network-robot system approach, where a single robot's poor sensing capability and knowledge are supplemented by ubiquitous sensors and a human operator. The developed robot system detects a person with floor sensors to initiate interaction, identifies individuals with radio-frequency identification (RFID) tags, gives shopping information while chatting, and provides route guidance with deictic gestures. The robot was partially teleoperated to avoid the difficulty of speech recognition as well as to furnish a new kind of knowledge that only humans can flexibly provide. The information supplied by a human operator was later used to increase the robot's autonomy. For 25 days in a shopping mall, we conducted a field trial and gathered 2642 interactions. A total of 235 participants signed up to use RFID tags and, later, provided questionnaire responses. The questionnaire results are promising in terms of the visitors' perceived acceptability as well as the encouragement of their shopping activities. The results of the teleoperation analysis revealed that the amount of teleoperation gradually decreased, which is also promising.
Minimum acceleration criterion with constraints implies bang-bang control as an underlying principle for optimal trajectories of arm reaching movements. Rapid arm-reaching movements serve as an excellent test bed for any theory about trajectory formation. How are these movements planned? A minimum acceleration criterion has been examined in the past, and the solution obtained, based on the Euler-Poisson equation, failed to predict that the hand would begin and end the movement at rest (i.e., with zero acceleration). Therefore, this criterion was rejected in favor of the minimum jerk, which was proved to be successful in describing many features of human movements. This letter follows an alternative approach and solves the minimum acceleration problem with constraints using Pontryagin's minimum principle. We use the minimum principle to obtain minimum acceleration trajectories and use the jerk as a control signal. In order to find a solution that does not include nonphysiological impulse functions, constraints on the maximum and minimum jerk values are assumed. The analytical solution provides a three-phase piecewise constant jerk signal (bang-bang control) where the magnitude of the jerk and the two switching times depend on the magnitude of the maximum and minimum available jerk values. This result fits the observed trajectories of reaching movements and takes into account both the extrinsic coordinates and the muscle limitations in a single framework. The minimum acceleration with constraints principle is discussed as a unifying approach for many observations about the neural control of movements.
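As a rough, purely illustrative companion to the abstract above (not the paper's derivation), the sketch below numerically integrates a three-phase piecewise-constant (bang-bang) jerk profile; the jerk bound J, duration T, and switching times t1, t2 are assumed values chosen so that acceleration and velocity return to approximately zero at the end of the movement.

```python
# Illustrative sketch: integrate a three-phase piecewise-constant (bang-bang) jerk profile.
# J, T, t1, t2 are assumed values, not the paper's optimal solution.
import numpy as np

J, T = 5.0, 1.0                 # jerk bound and movement duration (assumed)
t1, t2 = 0.25, 0.75             # switching times of the +J / -J / +J phases (assumed)
dt = 1e-3
t = np.arange(0.0, T, dt)

jerk = np.where(t < t1, +J, np.where(t < t2, -J, +J))   # bang-bang control signal
acc = np.cumsum(jerk) * dt                              # acceleration = integral of jerk
vel = np.cumsum(acc) * dt                               # velocity = integral of acceleration
pos = np.cumsum(vel) * dt                               # position = integral of velocity

# With the symmetric switching times above, acceleration and velocity return to ~0 at the
# end of the movement while a net displacement remains, as expected for a reaching movement.
print(acc[-1], vel[-1], pos[-1])
```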
Online Prediction of Driver Distraction Based on Brain Activity Patterns This paper presents a new computational framework for early detection of driver distractions (map viewing) using brain activity measured by electroencephalographic (EEG) signals. Compared with most studies in the literature, which are mainly focused on the classification of distracted and nondistracted periods, this study proposes a new framework to prospectively predict the start and end of a distraction period, defined by map viewing. The proposed prediction algorithm was tested on a data set of continuous EEG signals recorded from 24 subjects. During the EEG recordings, the subjects were asked to drive from an initial position to a destination using a city map in a simulated driving environment. The overall accuracy values for the prediction of the start and the end of map viewing were 81% and 70%, respectively. The experimental results demonstrated that the proposed algorithm can predict the start and end of map viewing with relatively high accuracy and can be generalized to individual subjects. The outcome of this study has a high potential to improve the design of future intelligent navigation systems. Prediction of the start of map viewing can be used to provide route information based on a driver's needs and consequently avoid map-viewing activities. Prediction of the end of map viewing can be used to provide warnings for potential long map-viewing durations. Further development of the proposed framework and its applications in driver-distraction predictions are also discussed.
A robust medical image watermarking against salt and pepper noise for brain MRI images. The ever-growing numbers of medical digital images and the need to share them among specialists and hospitals for better and more accurate diagnosis require that patients' privacy be protected. During the transmission of medical images between hospitals or specialists through the network, the main priority is to protect a patient's documents against any act of tampering by unauthorised individuals. Because of this, there is a need for a medical image authentication scheme to enable proper diagnosis of patients. In addition, medical images are also susceptible to salt and pepper impulse noise during transmission over communication channels. This noise may also be intentionally used by attackers to corrupt the embedded watermarks inside the medical images. A common drawback of existing watermarking methods is their weakness against salt and pepper noise. The research carried out in this work addresses the issue of designing a new watermarking method that can withstand a high density of salt and pepper noise for brain MRI images. For this purpose, a combination of a spatial-domain watermarking method, channel coding, and noise filtering schemes is used. The region of non-interest (RONI) of MRI images from five different databases is used as the embedding area, and the electronic patient record (EPR) is considered as the embedded data. The quality of the watermarked image is evaluated using Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), and the accuracy of the extracted watermark is assessed in terms of Bit Error Rate (BER).
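As a small illustration of the evaluation metrics mentioned at the end of the abstract, the sketch below computes PSNR between an original and a distorted image and the bit error rate between embedded and extracted watermark bits; the helper names and toy data are assumptions, not the paper's implementation.

```python
# Minimal sketch (assumed helper names): PSNR for 8-bit images and BER for watermark bits.
import numpy as np

def psnr(original, distorted, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two 8-bit images."""
    mse = np.mean((original.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def bit_error_rate(embedded_bits, extracted_bits):
    """Fraction of watermark bits that were recovered incorrectly."""
    return float(np.mean(np.asarray(embedded_bits) != np.asarray(extracted_bits)))

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
noisy = img.copy()
noisy[::7, ::7] = 255                      # crude stand-in for salt noise
print(psnr(img, noisy), bit_error_rate([1, 0, 1, 1], [1, 0, 0, 1]))
```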
A Hierarchical Architecture Using Biased Min-Consensus for USV Path Planning This paper proposes a hierarchical architecture using the biased min-consensus (BMC) method to solve the path planning problem of an unmanned surface vessel (USV). We take the fixed-point monitoring mission as an example, where a series of intermediate monitoring points should be visited once by the USV. The whole framework incorporates the low-level layer planning the standard path between any two intermediate points, and the high-level layer determining their visiting sequence. First, the optimal standard path in terms of voyage time and risk measure is planned by the BMC protocol, given that the corresponding graph is constructed with node state and edge weight. The USV will avoid obstacles or keep a certain distance safely, and arrive at the target point quickly. It is proven theoretically that the state of the graph will converge to be stable after a finite number of iterations, i.e., the optimal solution can be found by BMC with low calculation complexity. Second, by incorporating the constraint of intermediate points, their visiting sequence is optimized by BMC again with the reconstruction of a new virtual graph based on the former planned results. The extensive simulation results in various scenarios also validate the feasibility and effectiveness of our method for autonomous navigation.
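One way to read a biased min-consensus protocol of this kind is as an iterative min-plus update: every node repeatedly takes the minimum over its neighbours of (neighbour state + edge weight) while the target node is biased (pinned) to zero, and the node states converge to shortest-path distances. The sketch below illustrates this reading on a made-up graph and is not the paper's USV planner.

```python
# Illustrative biased min-consensus style shortest-path iteration on a made-up graph.
import math

# adjacency list: node -> {neighbour: edge weight}
graph = {
    0: {1: 1.0, 2: 4.0},
    1: {0: 1.0, 2: 2.0, 3: 6.0},
    2: {0: 4.0, 1: 2.0, 3: 3.0},
    3: {1: 6.0, 2: 3.0},
}
target = 3

state = {v: math.inf for v in graph}
state[target] = 0.0                               # bias: the target node is pinned to zero

for _ in range(len(graph)):                       # enough sweeps for this small graph
    for v in graph:
        if v == target:
            continue
        # min-plus update over neighbours
        state[v] = min(state[u] + w for u, w in graph[v].items())

print(state)   # state[v] converges to the shortest distance from v to the target
```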
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
Combining Global and Local Surrogate Models to Accelerate Evolutionary Optimization In this paper, we present a novel surrogate-assisted evolutionary optimization framework for solving computationally expensive problems. The proposed framework uses computationally cheap hierarchical surrogate models constructed through online learning to replace the exact computationally expensive objective functions during evolutionary search. At the first level, the framework employs a data-parallel Gaussian process based global surrogate model to filter the evolutionary algorithm (EA) population of promising individuals. Subsequently, these potential individuals undergo a memetic search in the form of Lamarckian learning at the second level. The Lamarckian evolution involves a trust-region enabled gradient-based search strategy that employs radial basis function local surrogate models to accelerate convergence. Numerical results are presented on a series of benchmark test functions and on an aerodynamic shape design problem. The results obtained suggest that the proposed optimization framework converges to good designs on a limited computational budget. Furthermore, it is shown that the new algorithm gives significant savings in computational cost when compared to the traditional evolutionary algorithm and other surrogate assisted optimization frameworks
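As a loose illustration of the pre-screening idea (not the paper's two-level framework with Lamarckian learning), the sketch below trains a Gaussian process surrogate on an archive of exactly evaluated points and uses it to filter offspring so that only the most promising candidates receive expensive evaluations; the names, the toy objective, and all parameter values are assumptions.

```python
# Illustrative GP-based pre-screening step in a surrogate-assisted evolutionary loop.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_objective(x):                 # stand-in for the costly simulation
    return np.sum((x - 0.3) ** 2)

rng = np.random.default_rng(0)
dim, archive_size, n_offspring, n_keep = 5, 30, 40, 5

# archive of already (exactly) evaluated points used to train the global surrogate
X = rng.uniform(0, 1, (archive_size, dim))
y = np.array([expensive_objective(x) for x in X])

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
surrogate.fit(X, y)

offspring = rng.uniform(0, 1, (n_offspring, dim))           # candidates from variation
pred_mean = surrogate.predict(offspring)                     # cheap surrogate ranking
promising = offspring[np.argsort(pred_mean)[:n_keep]]        # keep best-predicted only

exact = np.array([expensive_objective(x) for x in promising])  # expensive calls only here
print(exact.min())
```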
Multiobjective Optimization Models for Locating Vehicle Inspection Stations Subject to Stochastic Demand, Varying Velocity and Regional Constraints Deciding the optimal location of a transportation facility or automotive service enterprise is an interesting and important issue in the area of facility location allocation (FLA). In practice, some factors, i.e., customer demands, allocations, and the locations of customers and facilities, are changing, and thus the problem involves uncertainty. To account for this uncertainty, some researchers have addressed the stochastic time and cost issues of FLA. A new FLA research issue arises when decision makers want to minimize the transportation time of customers and their transportation cost while ensuring that customers arrive at their desired destination within some specific time and cost. By taking the vehicle inspection station as a typical automotive service enterprise example, this paper presents a novel stochastic multiobjective optimization approach to address it. This work builds two practical stochastic multiobjective programs subject to stochastic demand, varying velocity, and regional constraints. A hybrid intelligent algorithm integrating stochastic simulation and a multiobjective teaching-learning-based optimization algorithm is proposed to solve the proposed programs. This approach is applied to a real-world location problem of a vehicle inspection station in Fushun, China. The results show that the approach is able to produce satisfactory Pareto solutions for an actual vehicle inspection station location problem.
Intrinsic dimension estimation: Advances and open problems. •The paper reviews state-of-the-art of the methods of Intrinsic Dimension (ID) Estimation.•The paper defines the properties that an ideal ID estimator should have.•The paper reviews, under the above mentioned framework, the major ID estimation methods underlining their advances and the open problems.
Alignment-Supervised Bidimensional Attention-Based Recursive Autoencoders for Bilingual Phrase Representation. Exploiting semantic interactions between the source and target linguistic items at different levels of granularity is crucial for generating compact vector representations for bilingual phrases. To achieve this, we propose alignment-supervised bidimensional attention-based recursive autoencoders (ABattRAE) in this paper. ABattRAE first individually employs two recursive autoencoders to recover hierarchical tree structures of bilingual phrase, and treats the subphrase covered by each node on the tree as a linguistic item. Unlike previous methods, ABattRAE introduces a bidimensional attention network to measure the semantic matching degree between linguistic items of different languages, which enables our model to integrate information from all nodes by dynamically assigning varying weights to their corresponding embeddings. To ensure the accuracy of the generated attention weights in the attention network, ABattRAE incorporates word alignments as supervision signals to guide the learning procedure. Using the general stochastic gradient descent algorithm, we train our model in an end-to-end fashion, where the semantic similarity of translation equivalents is maximized while the semantic similarity of nontranslation pairs is minimized. Finally, we incorporate a semantic feature based on the learned bilingual phrase representations into a machine translation system for better translation selection. Experimental results on NIST Chinese–English and WMT English–German test sets show that our model achieves substantial improvements of up to 2.86 and 1.09 BLEU points over the baseline, respectively. Extensive in-depth analyses demonstrate the superiority of our model in learning bilingual phrase embeddings.
Surrogate-Assisted Evolutionary Framework for Data-Driven Dynamic Optimization Recently, dynamic optimization has received much attention from the swarm and evolutionary computation community. However, few studies have investigated data-driven evolutionary dynamic optimization, and most algorithms for evolutionary dynamic optimization are based on analytical mathematical functions. In this paper, we investigate data-driven evolutionary dynamic optimization. First, we develop a surrogate-assisted evolutionary framework for solving data-driven dynamic optimization problems (DD-DOPs). Second, we employ a benchmark based on the typical dynamic optimization problems set in order to verify the performance of the proposed framework. The experimental results demonstrate that the proposed framework is effective for solving DD-DOPs.
Biobjective Task Scheduling for Distributed Green Data Centers The industry of data centers is the fifth largest energy consumer in the world. Distributed green data centers (DGDCs) consume 300 billion kWh per year to provide different types of heterogeneous services to global users. Users around the world bring revenue to DGDC providers according to actual quality of service (QoS) of their tasks. Their tasks are delivered to DGDCs through multiple Internet service providers (ISPs) with different bandwidth capacities and unit bandwidth price. In addition, prices of power grid, wind, and solar energy in different GDCs vary with their geographical locations. Therefore, it is highly challenging to schedule tasks among DGDCs in a high-profit and high-QoS way. This work designs a multiobjective optimization method for DGDCs to maximize the profit of DGDC providers and minimize the average task loss possibility of all applications by jointly determining the split of tasks among multiple ISPs and task service rates of each GDC. A problem is formulated and solved with a simulated-annealing-based biobjective differential evolution (SBDE) algorithm to obtain an approximate Pareto-optimal set. The method of minimum Manhattan distance is adopted to select a knee solution that specifies the Pareto-optimal task service rates and task split among ISPs for DGDCs in each time slot. Real-life data-based experiments demonstrate that the proposed method achieves lower task loss of all applications and larger profit than several existing scheduling algorithms. Note to Practitioners-This work aims to maximize the profit and minimize the task loss for DGDCs powered by renewable energy and smart grid by jointly determining the split of tasks among multiple ISPs. Existing task scheduling algorithms fail to jointly consider and optimize the profit of DGDC providers and QoS of tasks. Therefore, they fail to intelligently schedule tasks of heterogeneous applications and allocate infrastructure resources within their response time bounds. In this work, a new method that tackles drawbacks of existing algorithms is proposed. It is achieved by adopting the proposed SBDE algorithm that solves a multiobjective optimization problem. Simulation experiments demonstrate that compared with three typical task scheduling approaches, it increases profit and decreases task loss. It can be readily and easily integrated and implemented in real-life industrial DGDCs. The future work needs to investigate the real-time green energy prediction with historical data and further combine prediction and task scheduling together to achieve greener and even net-zero-energy data centers.
Neural Architecture Transfer Neural architecture search (NAS) has emerged as a promising avenue for automatically designing task-specific neural networks. Existing NAS approaches require one complete search for each deployment specification of hardware or objective. This is a computationally impractical endeavor given the potentially large number of application scenarios. In this paper, we propose Neural Architecture ...
Surrogate-Assisted Evolutionary Deep Learning Using an End-to-End Random Forest-Based Performance Predictor. Convolutional neural networks (CNNs) have shown remarkable performance in various real-world applications. Unfortunately, the promising performance of CNNs can be achieved only when their architectures are optimally constructed. The architectures of state-of-the-art CNNs are typically handcrafted with extensive expertise in both CNNs and the investigated data, which consequently hampers the widesp...
A review on interval type-2 fuzzy logic applications in intelligent control. A review of the applications of interval type-2 fuzzy logic in intelligent control has been considered in this paper. The fundamental focus of the paper is based on the basic reasons for using type-2 fuzzy controllers for different areas of application. Recently, bio-inspired methods have emerged as powerful optimization algorithms for solving complex problems. In the case of designing type-2 fuzzy controllers for particular applications, the use of bio-inspired optimization methods have helped in the complex task of finding the appropriate parameter values and structure of the fuzzy systems. In this review, we consider the application of genetic algorithms, particle swarm optimization and ant colony optimization as three different paradigms that help in the design of optimal type-2 fuzzy controllers. We also mention alternative approaches to designing type-2 fuzzy controllers without optimization techniques.
Dyme: Dynamic Microservice Scheduling in Edge Computing Enabled IoT In recent years, the rapid development of mobile edge computing (MEC) has provided an efficient execution platform at the edge for Internet-of-Things (IoT) applications. Although MEC provides optimal resources to different microservices, underlying network conditions and infrastructures inherently affect the execution process in MEC. Therefore, in the presence of varying network conditions, it is necessary to optimally execute the tasks of end users while maximizing the energy efficiency of the edge platform and providing fair Quality of Service (QoS). On the other hand, it is necessary to schedule the microservices dynamically to minimize the total network delay and network price. Thus, in this article, unlike most of the existing works, we propose a dynamic microservice scheduling scheme for MEC. We design the microservice scheduling framework mathematically and also discuss the computational complexity of the scheduling algorithm. Extensive simulation results show that the microservice scheduling framework significantly improves the performance metrics in terms of total network delay, average price, satisfaction level, energy consumption rate (ECR), failure rate, and network throughput over other existing baselines.
Dynamic surface control for a class of nonlinear systems A method is proposed for designing controllers with arbitrarily small tracking error for uncertain, mismatched nonlinear systems in the strict feedback form. This method is another "synthetic input technique," similar to backstepping and multiple surface control methods, but with an important addition: τ − 1 low-pass filters are included in the design, where τ is the relative degree of the output to be controlled. It is shown that these low-pass filters allow a design where the model is not differentiated, thus ending the complexity arising due to the "explosion of terms" that has made other methods difficult to implement in practice. The backstepping approach, while suffering from the problem of "explosion of terms," guarantees boundedness of tracking errors globally; however, the proposed approach, while being simpler to implement, can only guarantee boundedness of tracking error semiglobally, when the nonlinearities in the system are non-Lipschitz.
Resolving The Redundancy Of A Seven Dof Wearable Robotic System Based On Kinematic And Dynamic Constraint According to the seven degrees of freedom (DOFs) human arm model composed of the shoulder, elbow, and wrist joints, positioning of the wrist in space and orientating the palm is a task requiring only six DOFs. Due to this redundancy, a given task can be completed by multiple arm configurations, and there is no unique mathematical solution to the inverse kinematics. The redundancy of a wearable robotic system (exoskeleton) that interacts with the human is expected to be resolved in the same way as that of the human arm. A unique solution to the system's redundancy was introduced by combining both kinematic and dynamic criteria. The redundancy of the arm is expressed mathematically by defining the swivel angle: the rotation angle of the plane including the upper and lower arm around a virtual axis connecting the shoulder and wrist joints which are fixed in space. Two different swivel angles were generated based on kinematic and dynamic constraints. The kinematic criterion is to maximize the projection of the longest principal axis of the manipulability ellipsoid for the human arm on the vector connecting the wrist and the virtual target on the head region. The dynamic criterion is to minimize the mechanical work done in the joint space for each two consecutive points along the task space trajectory. These two criteria were then combined linearly with different weight factors for estimating the swivel angle. Post-processing of experimental data collected with a motion capturing system indicated that by using the proposed synthesis of redundancy resolution criteria, the error between the predicted swivel angle and the actual swivel angle adopted by the motor control system was less than five degrees. This result outperformed the prediction based on a single criterion.
Distributed parallel cooperative coevolutionary multi-objective large-scale immune algorithm for deployment of wireless sensor networks. The use of immune algorithms is generally a time-intensive process—especially for problems with numerous variables. In the present paper, we propose a distributed parallel cooperative coevolutionary multi-objective large-scale immune algorithm parallelized utilizing the message passing interface (MPI). The proposed algorithm comprises three layers: objective, group and individual layers. First, to tackle each objective in a multi-objective problem, a subpopulation is used for optimization, and an archive population is used to optimize all the objectives simultaneously. Second, the numerous variables are divided into several groups. Finally, individual evaluations are allocated across many core processing units, and calculations are performed in parallel. Consequently, the computation time is greatly reduced. The proposed algorithm integrates the idea of immune algorithms, exploring sparse areas in the objective space, and uses simulated binary crossover for mutation. The proposed algorithm is employed to optimize the 3D terrain deployment of a wireless sensor network, which is a self-organization network. In our experiments, through comparisons with several state-of-the-art multi-objective evolutionary algorithms—the cooperative coevolutionary generalized differential evolution 3, the cooperative multi-objective differential evolution, the multi-objective evolutionary algorithm based on decision variable analyses and the nondominated sorting genetic algorithm III—the proposed algorithm addresses the deployment optimization problem efficiently and effectively.
Convert Harm Into Benefit: A Coordination-Learning Based Dynamic Spectrum Anti-Jamming Approach This paper mainly investigates the multi-user anti-jamming spectrum access problem. Using the idea of “converting harm into benefit,” the malicious jamming signals projected by the enemy are utilized by the users as the coordination signals to guide spectrum coordination. An “internal coordination-external confrontation” multi-user anti-jamming access game model is constructed, and the existence of Nash equilibrium (NE) as well as correlated equilibrium (CE) is demonstrated. A coordination-learning based anti-jamming spectrum access algorithm (CLASA) is designed to achieve the CE of the game. Simulation results show the convergence, and effectiveness of the proposed CLASA algorithm, and indicate that our approach can help users confront the malicious jammer, and coordinate internal spectrum access simultaneously without information exchange. Last but not least, the fairness of the proposed approach under different jamming attack patterns is analyzed, which illustrates that this approach provides fair anti-jamming spectrum access opportunities under complicated jamming pattern.
1.025935
0.022222
0.022222
0.022222
0.022222
0.022222
0.014861
0.010714
0.000006
0
0
0
0
0
Convolutional Two-Stream Network Fusion For Video Action Recognition Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters; (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy; finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results.
Attention-based LSTM for Aspect-level Sentiment Classification.
What-and-Where to Match: Deep Spatially Multiplicative Integration Networks for Person Re-identification. •A novel deep architecture to emphasize common local patterns is proposed to learn flexible joint representations for person re-identification.•The proposed method introduces a multiplicative integration gating function to embed two convolutional features to their joint representations, which are effective in discriminating positive pairs from negative pairs.•Spatial dependencies are incorporated into feature learning to address the cross-view misalignment.•Extensive experiments and empirical analysis are provided in experimental part.
A novel algorithm for the automatic detection of sleep apnea from single-lead ECG Goal: This paper presents a methodology for the automatic detection of sleep apnea from single-lead ECG. Methods: It uses two novel features derived from the ECG, and two well-known features in heart rate variability analysis, namely the standard deviation and the serial correlation coefficients of the RR interval time series. The first novel feature uses the principal components of the QRS complexes, and it describes changes in their morphology caused by an increased sympathetic activity during apnea. The second novel feature extracts the information shared between respiration and heart rate using orthogonal subspace projections. Respiratory information is derived from the ECG by means of three state-of-the-art algorithms, which are implemented and compared here. All features are used as input to a least-squares support vector machines (LS-SVM) classifier, using an RBF kernel. In total, 80 ECG recordings were included in the study. Results: Accuracies of about 85% are achieved on a minute-by-minute basis, for two independent datasets including both hypopneas and apneas together. Separation between apnea and normal recordings is achieved with 100% accuracy. In addition to apnea classification, the proposed methodology determines the contamination level of each ECG minute. Conclusion: The performances achieved are comparable with those reported in the literature for fully automated algorithms. Significance: These results indicate that the use of only ECG sensors can achieve good accuracies in the detection of sleep apnea. Moreover, the contamination level of each ECG segment can be used to automatically detect artefacts, and to highlight segments that require further visual inspection.
Automated Sleep Apnea Detection in Raw Respiratory Signals using Long Short-Term Memory Neural Networks. Sleep apnea is one of the most common sleep disorders and the consequences of undiagnosed sleep apnea can be very severe, ranging from increased blood pressure to heart failure. However, many people are often unaware of their condition. The gold standard for diagnosing sleep apnea is an overnight polysomnography in a dedicated sleep laboratory. Yet, these tests are expensive and beds are limited as trained staff needs to analyze the entire recording. An automated detection method would allow a faster diagnosis and more patients to be analyzed. Most algorithms for automated sleep apnea detection use a set of human-engineered features, potentially missing important sleep apnea markers. In this work, we present an algorithm based on state-of-the-art deep learning models for automatically extracting features and detecting sleep apnea events in respiratory signals. The algorithm is evaluated on the Sleep-Heart-Health-Study-1 dataset and provides per-epoch sensitivity and specificity scores comparable to the state-of-the-art. Furthermore, when these predictions are mapped to the apnea-hypopnea-index, a considerable improvement in per-patient scoring is achieved over conventional methods. This work presents a powerful aid for trained staff to quickly diagnose sleep apnea.
Image registration by local approximation methods Image registration is approached as an approximation problem. Two locally sensitive transformation functions are proposed for image registration. These transformation functions are obtained by the weighted least-squares method and the local weighted mean method. The former is a global method and uses information about all control points to establish correspondence between local areas in the images; nearby control points are, however, given higher weights to make the process locally sensitive. The latter is a local method and uses information about local control points only to register local areas in the images.
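A minimal sketch of the locally sensitive weighted least-squares idea is given below: each query point is mapped with its own affine fit to the control points, where nearby control points receive larger (inverse-distance) weights. This only illustrates the concept and is not the paper's exact transformation functions.

```python
# Illustrative locally weighted least-squares mapping from control point correspondences.
import numpy as np

def local_wls_affine(query, src_pts, dst_pts, eps=1e-6):
    """Map one query point with an affine transform fitted under inverse-distance weights."""
    d = np.linalg.norm(src_pts - query, axis=1)
    w = 1.0 / (d + eps) ** 2                               # nearby control points dominate
    A = np.hstack([src_pts, np.ones((len(src_pts), 1))])   # [x, y, 1] design matrix
    sw = np.sqrt(w)[:, None]                                # sqrt-weights for least squares
    P, *_ = np.linalg.lstsq(sw * A, sw * dst_pts, rcond=None)
    return np.array([query[0], query[1], 1.0]) @ P

src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], dtype=float)
dst = src * 2.0 + np.array([3.0, -1.0])                    # a known transform for checking
print(local_wls_affine(np.array([0.25, 0.75]), src, dst))  # approximately [3.5, 0.5]
```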
The Relationship Between the Traffic Flow and the Look-Ahead Cruise Control. There is a relationship between the traffic flow and the look-ahead control; they strongly interact with each other. Thus, this paper develops a design method for the look-ahead control, in which the influences of the traffic flow are considered. A sensitivity analysis of the parameter variation in the look-ahead control is also performed. If the traffic information is also considered in the look-ahead control, an undesirable side effect on the traffic flow may occur. An optimization method is also developed in order to calculate the optimum speed, which handles the individual vehicle energy optimization and its impact on the traffic flow. The method is illustrated through a complex simulation example based on the CarSim software.
Data-Driven Estimation of Driver Attention Using Calibration-Free Eye Gaze and Scene Features Driver attention estimation is one of the key technologies for intelligent vehicles. The existing related methods only focus on the scene image or the driver's gaze or head pose. The purpose of this article is to propose a more reasonable and feasible method based on a dual-view scene with calibration-free gaze direction. According to human visual mechanisms, the low-level features, static visual ...
Artificial intelligence test: a case study of intelligent vehicles. To meet the urgent requirement of reliable artificial intelligence applications, we discuss the tight link between artificial intelligence and intelligence test in this paper. We highlight the role of tasks in intelligence test for all kinds of artificial intelligence. We explain the necessity and difficulty of describing tasks for intelligence test, checking all the tasks that may encounter in intelligence test, designing simulation-based test, and setting appropriate test performance evaluation indices. As an example, we present how to design reliable intelligence test for intelligent vehicles. Finally, we discuss the future research directions of intelligence test.
Integrating structured biological data by Kernel Maximum Mean Discrepancy Motivation: Many problems in data integration in bioinformatics can be posed as one common question: Are two sets of observations generated by the same distribution? We propose a kernel-based statistical test for this problem, based on the fact that two distributions are different if and only if there exists at least one function having different expectation on the two distributions. Consequently we use the maximum discrepancy between function means as the basis of a test statistic. The Maximum Mean Discrepancy (MMD) can take advantage of the kernel trick, which allows us to apply it not only to vectors, but strings, sequences, graphs, and other common structured data types arising in molecular biology. Results: We study the practical feasibility of an MMD-based test on three central data integration tasks: Testing cross-platform comparability of microarray data, cancer diagnosis, and data-content based schema matching for two different protein function classification schemas. In all of these experiments, including high-dimensional ones, MMD is very accurate in finding samples that were generated from the same distribution, and outperforms its best competitors. Conclusions: We have defined a novel statistical test of whether two samples are from the same distribution, compatible with both multivariate and structured data, that is fast, easy to implement, and works well, as confirmed by our experiments. Availability: Contact: kb@dbs.ifi.lmu.de
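The quantity itself is easy to sketch: a biased (V-statistic) estimate of squared MMD with an RBF kernel between two samples, which should be near zero when both samples come from the same distribution and larger otherwise. The bandwidth and toy data below are illustrative assumptions.

```python
# Minimal sketch: biased (V-statistic) estimate of squared MMD with an RBF kernel.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between samples X and Y."""
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2 * rbf_kernel(X, Y, sigma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (200, 3))       # sample from distribution P
Y = rng.normal(0.5, 1.0, (200, 3))       # sample from a shifted distribution Q
# small value for two halves of the same sample, larger value for P vs. Q
print(mmd2(X[:100], X[100:]), mmd2(X, Y))
```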
Development and Control of a ‘Soft-Actuated’ Exoskeleton for Use in Physiotherapy and Training Full or partial loss of function in the upper limb is increasingly common due to sports injuries, occupational injuries, spinal cord injuries, and strokes. Typically, treatment for these conditions relies on manipulative physiotherapy procedures which are extremely labour intensive. Although mechanical assistive devices exist for limbs, this is rare for the upper body. In this paper we describe the construction and testing of a seven-degree-of-motion prototype upper arm training/rehabilitation (exoskeleton) system. The total weight of the uncompensated orthosis is less than 2 kg. This low mass is primarily due to the use of a new range of pneumatic Muscle Actuators (pMA) as the power source for the system. This type of actuator, which also has an excellent power/weight ratio, meets the need for safety, simplicity and lightness. The work presented shows how the system takes advantage of the inherent controllable compliance to produce a unit that is extremely powerful, providing a wide range of functionality (motion and forces over an extended range) in a manner that has high safety integrity for the patient. A training control scheme is introduced which is used to control the orthosis when used as an exercise facility. Results demonstrate the potential of the device as an upper limb training, rehabilitation and power assist (exoskeleton) system.
The role of KL divergence in anomaly detection We study the role of Kullback-Leibler divergence in the framework of anomaly detection, where its abilities as a statistic underlying detection have never been investigated in depth. We give an in-principle analysis of network attack detection, showing explicitly that attacks may be masked at minimal cost through 'camouflage'. We illustrate this on both synthetic distributions and ones taken from real traffic.
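For concreteness, a minimal sketch of the statistic in question is given below: the KL divergence between two empirical discrete distributions, e.g. feature histograms from a baseline traffic window and an observed window. The histograms and the smoothing constant are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch: KL divergence D(P || Q) between two normalized histograms.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D(P || Q) = sum_i p_i log(p_i / q_i); eps avoids division by zero for empty bins."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

baseline = [50, 30, 15, 5]       # e.g. packet-size histogram from a clean window
observed = [48, 31, 16, 5]       # a similar window -> small divergence
attacked = [10, 10, 20, 60]      # a very different window -> large divergence
print(kl_divergence(observed, baseline), kl_divergence(attacked, baseline))
```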
Explanations and Expectations: Trust Building in Automated Vehicles. Trust is a vital determinant of acceptance of automated vehicles (AVs) and expectations and explanations are often at the heart of any trusting relationship. Once expectations have been violated, explanations are needed to mitigate the damage. This study introduces the importance of timing of explanations in promoting trust in AVs. We present the preliminary results of a within-subjects experimental study involving eight participants exposed to four AV driving conditions (i.e. 32 data points). Preliminary results show a pattern that suggests that explanations provided before the AV takes actions promote more trust than explanations provided afterward.
Design and Validation of a Cable-Driven Asymmetric Back Exosuit Lumbar spine injuries caused by repetitive lifting rank as the most prevalent workplace injury in the United States. While these injuries are caused by both symmetric and asymmetric lifting, asymmetric is often more damaging. Many back devices do not address asymmetry, so we present a new system called the Asymmetric Back Exosuit (ABX). The ABX addresses this important gap through unique design geometry and active cable-driven actuation. The suit allows the user to move in a wide range of lumbar trajectories while the “X” pattern cable routing allows variable assistance application for these trajectories. We also conducted a biomechanical analysis in OpenSim to map assistive cable force to effective lumbar torque assistance for a given trajectory, allowing for intuitive controller design in the lumbar joint space over the complex kinematic chain for varying lifting techniques. Human subject experiments illustrated that the ABX reduced lumbar erector spinae muscle activation during symmetric and asymmetric lifting by an average of 37.8% and 16.0%, respectively, compared to lifting without the exosuit. This result indicates the potential for our device to reduce lumbar injury risk.
1.088889
0.08
0.08
0.04
0.02
0.013333
0.013333
0.013333
0.002222
0
0
0
0
0
An Improved Intrusion Detection Algorithm Based on GA and SVM. In the era of big data, with the increasing number of audit data features, the performance of human-centered smart intrusion detection systems is degrading in terms of training time and classification accuracy, and many support vector machine (SVM)-based intrusion detection algorithms have been widely used to identify an intrusion quickly and accurately. This paper proposes the FWP-SVM-genetic algorithm (GA) (feature selection, weight, and parameter optimization of support vector machine based on the genetic algorithm) based on the characteristics of the GA and the SVM algorithm. The algorithm first optimizes the crossover probability and mutation probability of the GA according to the population's evolution generation and fitness value; then it uses a feature selection method based on the genetic algorithm with an innovation in the fitness function that decreases the SVM error rate and increases the true positive rate. Finally, according to the optimal feature subset, the feature weights and parameters of the SVM are simultaneously optimized. The simulation results show that the algorithm accelerates convergence, increases the true positive rate, decreases the error rate, and shortens the classification time. Compared with other SVM-based intrusion detection algorithms, the detection rate is higher and the false positive and false negative rates are lower.
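As a generic illustration of combining a GA with an SVM for feature selection (not the FWP-SVM-GA algorithm or its specific fitness function), the sketch below evolves binary feature masks, scores each mask by cross-validated SVM accuracy with a small penalty on the number of selected features, and keeps the best mask; the dataset, population size, and rates are illustrative.

```python
# Illustrative GA-based feature selection wrapped around an SVM classifier.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
n_features, pop_size, generations, mut_rate = X.shape[1], 12, 10, 0.05

def fitness(mask):
    """Cross-validated SVM accuracy minus a small penalty per selected feature."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(SVC(kernel="rbf", C=1.0), X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.001 * mask.sum()

pop = rng.integers(0, 2, (pop_size, n_features))           # random binary feature masks
for _ in range(generations):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(-scores)[: pop_size // 2]]     # truncation selection
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_features)                   # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_features) < mut_rate            # bit-flip mutation
        child[flip] = 1 - child[flip]
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", int(best.sum()), "of", n_features)
```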
GMDH-based networks for intelligent intrusion detection. Network intrusion detection has been an area of rapid advancement in recent times. Similar advances in the field of intelligent computing have led to the introduction of several classification techniques for accurately identifying and differentiating network traffic into normal and anomalous. Group Method for Data Handling (GMDH) is one such supervised inductive learning approach for the synthesis of neural network models. Through this paper, we propose a GMDH-based technique for classifying network traffic into normal and anomalous. Two variants of the technique, namely, Monolithic and Ensemble-based, were tested on the KDD-99 dataset. The dataset was preprocessed and all features were ranked based on three feature ranking techniques, namely, Information Gain, Gain Ratio, and GMDH by itself. The results obtained proved that the proposed intrusion detection scheme yields high attack detection rates, nearly 98%, when compared with other intelligent classification techniques for network intrusion detection.
Mining network data for intrusion detection through combining SVMs with ant colony networks. In this paper, we introduce a new machine-learning-based data classification algorithm that is applied to network intrusion detection. The basic task is to classify network activities (in the network log as connection records) as normal or abnormal while minimizing misclassification. Although different classification models have been developed for network intrusion detection, each of them has its strengths and weaknesses, including the most commonly applied Support Vector Machine (SVM) method and the Clustering based on Self-Organized Ant Colony Network (CSOACN). Our new approach combines the SVM method with CSOACNs to take the advantages of both while avoiding their weaknesses. Our algorithm is implemented and evaluated using a standard benchmark KDD99 data set. Experiments show that CSVAC (Combining Support Vectors with Ant Colony) outperforms SVM alone or CSOACN alone in terms of both classification rate and run-time efficiency.
A two-level hybrid approach for intrusion detection. To exploit the strengths of misuse detection and anomaly detection, an intensive focus on intrusion detection combines the two. From a novel perspective, in this paper, we propose a hybrid approach toward achieving a high detection rate with a low false positive rate. The approach is a two-level hybrid solution consisting of two anomaly detection components and a misuse detection component. In stage 1, an anomaly detection method with low computing complexity is developed and employed to build the detection component. The k-nearest neighbors algorithm becomes crucial in building the two detection components for stage 2. In this hybrid approach, all of the detection components are well-coordinated. The detection component of stage 1 becomes involved in the course of building the two detection components of stage 2 that reduce the false positives and false negatives generated by the detection component of stage 1. Experimental results on the KDD'99 dataset and the Kyoto University Benchmark dataset confirm that the proposed hybrid approach can effectively detect network anomalies with a low false positive rate. Highlights: •A novel two-level hybrid intrusion detection approach is proposed. •A novel anomaly detection method based on change of cluster centres is proposed. •Detection components in the two stages of the hybrid approach work well together. •Experimental results show that our approach performs well in terms of false positive rate.
A Blockchain Based Truthful Incentive Mechanism for Distributed P2P Applications. In distributed peer-to-peer (P2P) applications, peers self-organize and cooperate to effectively complete certain tasks such as forwarding files, delivering messages, or uploading data. Nevertheless, users are selfish in nature and they may refuse to cooperate due to their concerns about energy and bandwidth consumption. Thus each user should receive a satisfying reward to compensate for its resource consumption for cooperation. However, suitable incentive mechanisms that can meet the diverse requirements of users in dynamic and distributed P2P environments are still missing. On the other hand, we observe that Blockchain is a decentralized, secure digital ledger of economic transactions that can be programmed to record more than just financial transactions, and Blockchain-based cryptocurrencies are gaining more and more market capitalization. Therefore, in this paper, we propose a Blockchain-based truthful incentive mechanism for distributed P2P applications that applies a cryptocurrency such as Bitcoin to incentivize users for cooperation. In this mechanism, users who help with a successful delivery get rewarded. As users and miners in the Blockchain P2P system may exhibit selfish actions or collude with each other, we propose a secure validation method and a pricing strategy, and integrate them into our incentive mechanism. Through a game theoretical analysis and evaluation study, we demonstrate the effectiveness and security strength of our proposed incentive mechanism.
A Closer Look at Intrusion Detection System for Web Applications Intrusion Detection System (IDS) acts as a defensive tool to detect the security attacks on the web. IDS is a known methodology for detecting network-based attacks but is still immature in monitoring and identifying web-based application attacks. The objective of this research paper is to present a design methodology for efficient IDS with respect to web applications. In this paper, we present several specific aspects which make it challenging for an IDS to monitor and detect web attacks. The article also provides a comprehensive overview of the existing detection systems exclusively designed to observe web traffic. Furthermore, we identify various dimensions for comparing the IDS from different perspectives based on their design and functionalities. We also propose a conceptual framework of a web IDS with a prevention mechanism to offer systematic guidance for the implementation of the system. We compare its features with five existing detection systems, namely, AppSensor, PHPIDS, ModSecurity, Shadow Daemon, and AQTRONIX Web Knight. This paper will highly facilitate the interest groups with the cutting-edge information to understand the stronger and weaker sections of the domain and provide a firm foundation for developing an intelligent and efficient system.
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Adam: A Method for Stochastic Optimization. We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.
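The update rule summarized in the abstract is short enough to sketch directly: exponential moving averages of the gradient and its square, bias correction, and a rescaled step. The quadratic test function and hyperparameters below are the commonly cited defaults, used purely for illustration.

```python
# Minimal sketch of the Adam update rule applied to a simple quadratic objective.
import numpy as np

def adam_minimize(grad, x0, steps=500, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    x = np.array(x0, dtype=float)
    m = np.zeros_like(x)                    # first moment (mean of gradients)
    v = np.zeros_like(x)                    # second moment (uncentered variance)
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)        # bias-corrected first moment
        v_hat = v / (1 - beta2 ** t)        # bias-corrected second moment
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# minimize f(x) = ||x - target||^2, whose gradient is 2 (x - target)
target = np.array([1.0, -2.0, 0.5])
print(adam_minimize(lambda x: 2 * (x - target), np.zeros(3)))   # approaches target
```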
Blockchain Meets IoT: An Architecture for Scalable Access Management in IoT. The Internet of Things (IoT) is stepping out of its infancy into full maturity and establishing itself as a part of the future Internet. One of the technical challenges of having billions of devices deployed worldwide is the ability to manage them. Although access management technologies exist in IoT, they are based on centralized models which introduce a new variety of technical limitations to ma...
Multivariate Short-Term Traffic Flow Forecasting Using Time-Series Analysis Existing time-series models that are used for short-term traffic condition forecasting are mostly univariate in nature. Generally, the extension of existing univariate time-series models to a multivariate regime involves huge computational complexities. A different class of time-series models called structural time-series model (STM) (in its multivariate form) has been introduced in this paper to develop a parsimonious and computationally simple multivariate short-term traffic condition forecasting algorithm. The different components of a time-series data set such as trend, seasonal, cyclical, and calendar variations can separately be modeled in STM methodology. A case study at the Dublin, Ireland, city center with serious traffic congestion is performed to illustrate the forecasting strategy. The results indicate that the proposed forecasting algorithm is an effective approach in predicting real-time traffic flow at multiple junctions within an urban transport network.
State resetting for bumpless switching in supervisory control In this paper the realization and implementation of a multi-controller scheme made of a finite set of linear single-input single-output controllers, possibly having different state dimensions, is studied. The supervisory control framework is considered; namely, a minimal parameter-dependent realization of the set of controllers, such that all controllers share the same state space, is used. A specific state resetting strategy based on the behavioral approach to system theory is developed in order to master the transient upon controller switching.
Adaptive dynamic programming and optimal control of nonlinear nonaffine systems. In this paper, a novel optimal control design scheme is proposed for continuous-time nonaffine nonlinear dynamic systems with unknown dynamics by adaptive dynamic programming (ADP). The proposed methodology iteratively updates the control policy online by using the state and input information without identifying the system dynamics. An ADP algorithm is developed, and can be applied to a general class of nonlinear control design problems. The convergence analysis for the designed control scheme is presented, along with rigorous stability analysis for the closed-loop system. The effectiveness of this new algorithm is illustrated by two simulation examples.
Finite-Time Adaptive Fuzzy Tracking Control Design for Nonlinear Systems. This paper addresses the finite-time tracking problem of nonlinear pure-feedback systems. Unlike the literature on traditional finite-time stabilization, in this paper the nonlinear system functions, including the bounding functions, are all totally unknown. Fuzzy logic systems are used to model those unknown functions. To present a finite-time control strategy, a criterion of semiglobal practical...
Energy harvesting algorithm considering max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with poor energy always have a bad effect on the data rate or max flow. These nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we assume an energy harvesting WSN environment to investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy harvesting WSN environments, sensor nodes can obtain energy replenishment by using MCs or by collecting energy from nature themselves. In our research, we use MCs to improve the energy of the sensor nodes by performing multiple rounds of unified scheduling, and finally achieve the purpose of increasing the max flow at the sinks. Firstly, we model this problem as a Linear Program (LP) to find the max flow in a round of charging scheduling and prove that the problem is NP-hard. In order to solve the problem, we propose a heuristic approach: deploying MCs in units of paths with the lowest-energy node given priority. To reduce the energy consumption of MCs and increase the charging efficiency, we also take the optimization of the MCs’ moving distance into consideration. Finally, we extend the method to multiple rounds of scheduling, called BottleNeck. Simulation results show that BottleNeck performs well at increasing the max flow.
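The scheduling described above ultimately relies on a max-flow computation toward the sinks. As a minimal illustration of that underlying subproblem only (the paper's LP with mobile-charger scheduling is not reproduced here), the sketch below computes a max flow on a toy graph with hypothetical capacities using networkx.

```python
import networkx as nx

# Toy directed graph: edge capacities stand in for the data rates that the
# surviving node energy would allow; all values are purely illustrative.
G = nx.DiGraph()
G.add_edge("source", "a", capacity=3.0)
G.add_edge("source", "b", capacity=2.0)
G.add_edge("a", "sink", capacity=2.5)   # "a" acts as a bottleneck-limited node
G.add_edge("b", "sink", capacity=2.0)
G.add_edge("a", "b", capacity=1.0)

flow_value, flow_dict = nx.maximum_flow(G, "source", "sink")
print(flow_value)  # 4.5 -- raising capacity at a bottleneck node raises this value
```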
1.2
0.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
A General Equilibrium Model for Industries with Price and Service Competition This paper develops a stochastic general equilibrium inventory model for an oligopoly, in which all inventory constraint parameters are endogenously determined. We propose several systems of demand processes whose distributions are functions of all retailers' prices and all retailers' service levels. We proceed with the investigation of the equilibrium behavior of infinite-horizon models for industries facing this type of generalized competition, under demand uncertainty. We systematically consider the following three competition scenarios. (1) Price competition only: Here, we assume that the firms' service levels are exogenously chosen, but characterize how the price and inventory strategy equilibrium vary with the chosen service levels. (2) Simultaneous price and service-level competition: Here, each of the firms simultaneously chooses a service level and a combined price and inventory strategy. (3) Two-stage competition: The firms make their competitive choices sequentially. In a first stage, all firms simultaneously choose a service level; in a second stage, the firms simultaneously choose a combined pricing and inventory strategy with full knowledge of the service levels selected by all competitors. We show that in all of the above settings a Nash equilibrium of infinite-horizon stationary strategies exists and that it is of a simple structure, provided a Nash equilibrium exists in a so-called reduced game. We pay particular attention to the question of whether a firm can choose its service level on the basis of its own (input) characteristics (i.e., its cost parameters and demand function) only. We also investigate under which of the demand models a firm, under simultaneous competition, responds to a change in the exogenously specified characteristics of the various competitors by either: (i) adjusting its service level and price in the same direction, thereby compensating for price increases (decreases) by offering improved (inferior) service, or (ii) adjusting them in opposite directions, thereby simultaneously offering better or worse prices and service.
Competition in Service Industries We analyze a general market for an industry of competing service facilities. Firms differentiate themselves by their price levels and the waiting time their customers experience, as well as different attributes not determined directly through competition. Our model therefore assumes that the expected demand experienced by a given firm may depend on all of the industry's price levels as well as a (steady-state) waiting-time standard, which each of the firms announces and commits itself to by proper adjustment of its capacity level. We focus primarily on a separable specification, which in addition is linear in the prices. (Alternative nonseparable or nonlinear specifications are discussed in the concluding section.) We define a firm's service level as the difference between an upper-bound benchmark for the waiting-time standard and the firm's actual waiting-time standard. Different types of competition and the resulting equilibrium behavior may arise, depending on the industry dynamics through which the firms select their strategic choices. In one case, firms may initially select their waiting-time standards, followed by a selection of their prices in a second stage (service-level first). Alternatively, the sequence of strategic choices may be reversed (price first) or, as a third alternative, the firms may make their choices simultaneously (simultaneous competition). We model each of the service facilities as a single-server M/M/1 queueing facility, which receives a given firm-specific price for each customer served. Each firm incurs a given cost per customer served as well as a cost per unit of time proportional to its adopted capacity level.
Computational difficulties of bilevel linear programming We show, using small examples, that two algorithms previously published for the Bilevel Linear Programming problem BLP may fail to find the optimal solution and thus must be considered to be heuris...
Electric Vehicle Charging Stations With Renewable Power Generators: A Game Theoretical Analysis In this paper, we study the price competition among electric vehicle charging stations (EVCSs) with renewable power generators (RPGs). As electric vehicles (EVs) become more popular, a competition among EVCSs to attract EVs is inevitable. Thereby, each EVCS sets its electricity price to maximize its revenue by taking into account the competition with neighboring EVCSs. We analyze the competitive interactions between EVCSs using game theory, where relevant physical constraints such as the transmission line capacity, the distance between EV and EVCS, and the number of charging outlets at the EVCSs are taken into account. We show that the game played by EVCSs is a supermodular game and there exists a unique pure Nash equilibrium for best response algorithms with arbitrary initial policy. The electricity price and the revenue of EVCSs are evaluated via simulations, which reveal the benefits of having RPGs at the EVCSs.
Structure Learning in Power Distribution Networks. Traditional power distribution networks suffer from a lack of real-time observability. This complicates development and implementation of new smart-grid technologies, such as those related to demand response, outage detection and management, and improved load monitoring. In this paper, inspired by proliferation of metering technology, we discuss topology estimation problems in structurally loopy b...
Coordinated Planning of Extreme Fast Charging Stations and Power Distribution Networks Considering On-Site Storage The extreme fast charging (XFC) technology helps to reduce refueling time, alleviate mile anxiety, extend driving range and finally promote the popularity of electric vehicles (EVs). However, it would also pose great challenges on the power grid infrastructure especially distribution networks, due to the large-scale and intermittent power demand. This paper proposes a coordinated planning method for power distribution networks and XFC EV charging stations, with the on-site batteries considered. Firstly, considering the traffic flow pattern, the operation of XFC stations is analyzed on both energy and power demand. Secondly, the coordinated planning model is developed to satisfy the time-varying XFC load, with both transportation and electricity constraints considered. In addition, the on-site batteries are introduced to flatten the XFC energy used and supplement its power supply. The case studies have verified the effectiveness of the proposed method. The influence of XFC on the distribution networks and the effects of the on-site storage are also studied.
Footprints: history-rich tools for information foraging Inspired by Hill and Hollan's original work [7], we have been developing a theory of interaction history and building tools to apply this theory to navigation in a complex information space. We have built a series of tools - map, paths, annotations and signposts - based on a physical-world navigation metaphor. These tools have been in use for over a year. Our user study involved a controlled browse task and showed that users were able to get the same amount of work done with significantly less effort.
Very Deep Convolutional Networks for Large-Scale Image Recognition. In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
Chimp optimization algorithm. A novel optimizer called the Chimp Optimization Algorithm (ChOA) is proposed. ChOA is inspired by the individual intelligence and sexual motivation of chimps. ChOA alleviates the problems of slow convergence rate and trapping in local optima. The four main steps of chimp hunting are implemented.
A bayesian network approach to traffic flow forecasting A new approach based on Bayesian networks for traffic flow forecasting is proposed. In this paper, traffic flows among adjacent road links in a transportation network are modeled as a Bayesian network. The joint probability distribution between the cause nodes (data utilized for forecasting) and the effect node (data to be forecasted) in a constructed Bayesian network is described as a Gaussian mixture model (GMM) whose parameters are estimated via the competitive expectation maximization (CEM) algorithm. Finally, traffic flow forecasting is performed under the criterion of minimum mean square error (mmse). The approach departs from many existing traffic flow forecasting models in that it explicitly includes information from adjacent road links to analyze the trends of the current link statistically. Furthermore, it also encompasses the issue of traffic flow forecasting when incomplete data exist. Comprehensive experiments on urban vehicular traffic flow data of Beijing and comparisons with several other methods show that the Bayesian network is a very promising and effective approach for traffic flow modeling and forecasting, both for complete data and incomplete data
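For reference, the MMSE forecast under a joint Gaussian mixture model is the conditional mean of the effect node given the cause nodes. A standard way to write it (a generic mixture-conditioning identity, not a formula quoted from the paper) is:

$$
\hat{y}_{\mathrm{MMSE}} = \mathbb{E}[y \mid x] = \sum_{k=1}^{K} \tilde{w}_k(x)\left(\mu_{y,k} + \Sigma_{yx,k}\,\Sigma_{xx,k}^{-1}\,(x - \mu_{x,k})\right),
\qquad
\tilde{w}_k(x) = \frac{w_k\,\mathcal{N}(x;\,\mu_{x,k},\,\Sigma_{xx,k})}{\sum_{j=1}^{K} w_j\,\mathcal{N}(x;\,\mu_{x,j},\,\Sigma_{xx,j})}.
$$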
A novel full structure optimization algorithm for radial basis probabilistic neural networks. In this paper, a novel full structure optimization algorithm for radial basis probabilistic neural networks (RBPNN) is proposed. Firstly, a minimum volume covering hyperspheres (MVCH) algorithm is proposed to heuristically select the initial hidden layer centers of the RBPNN, and then the recursive orthogonal least square (ROLS) algorithm combined with the particle swarm optimization (PSO) algorithm is adopted to further optimize the initial structure of the RBPNN. Finally, the effectiveness and efficiency of our proposed algorithm are evaluated through a plant species identification task involving 50 plant species.
Adaptive dynamic programming and optimal control of nonlinear nonaffine systems. In this paper, a novel optimal control design scheme is proposed for continuous-time nonaffine nonlinear dynamic systems with unknown dynamics by adaptive dynamic programming (ADP). The proposed methodology iteratively updates the control policy online by using the state and input information without identifying the system dynamics. An ADP algorithm is developed, and can be applied to a general class of nonlinear control design problems. The convergence analysis for the designed control scheme is presented, along with rigorous stability analysis for the closed-loop system. The effectiveness of this new algorithm is illustrated by two simulation examples.
Adaptive Fuzzy Control With Prescribed Performance for Block-Triangular-Structured Nonlinear Systems. In this paper, an adaptive fuzzy control method with prescribed performance is proposed for multi-input and multioutput block-triangular-structured nonlinear systems with immeasurable states. Fuzzy logic systems are adopted to identify the unknown nonlinear system functions. Adaptive fuzzy state observers are designed to solve the problem of unmeasured states, and a new observer-based output-feedb...
Intention-detection strategies for upper limb exosuits: model-based myoelectric vs dynamic-based control The cognitive human-robot interaction between an exosuit and its wearer plays a key role in determining both the biomechanical effects of the device on movements and its perceived effectiveness. There is a lack of evidence, however, on the comparative performance of different control methods, implemented on the same device. Here, we compare two different control approaches on the same robotic suit: a model-based myoelectric control (myoprocessor), which estimates the joint torque from the activation of target muscles, and a dynamic-based control that provides support against gravity using an inverse dynamic model. Tested on a cohort of four healthy participants, assistance from the exosuit results in a marked reduction in the effort of muscles working against gravity with both control approaches (peak reduction of 68.6±18.8% for the dynamic arm model and 62.4±25.1% for the myoprocessor), when compared to an unpowered condition. Neither of the two controllers had an effect on the performance of their users in a joint-angle tracking task (peak errors of 15.4° and 16.4° for the dynamic arm model and myoprocessor, respectively, compared to 13.1° in the unpowered condition). However, our results highlight the remarkable ability of the myoprocessor to seamlessly adapt to changing external dynamics.
1.105262
0.105262
0.1
0.1
0.1
0.1
0
0
0
0
0
0
0
0
An Adaptive on-Demand Multipath Routing Protocol With QoS Support for High-Speed MANET. The mobility and resource limitation of nodes are the critical factors that affect the performance of a Mobile Ad hoc Network (MANET). The mobility of nodes will affect the stability of links, and the limitation of node resources will lead to congestion, so it is very difficult to design a routing protocol that supports quality of service (QoS) in MANET. Especially in the scenario of high-speed node movement, frequent link interruption will damage QoS performance, so it is necessary to design a MANET routing protocol that can adapt to network topology changes to support QoS. In this paper, we propose a Topological change Adaptive Ad hoc On-demand Multipath Distance Vector (TA-AOMDV) routing protocol, which can adapt to high-speed node movement to support QoS. In this protocol, a stable path selection algorithm is designed, which not only takes node resources (residual energy, available bandwidth and queue length) as the path selection parameters, but also considers the link stability probability between nodes. Furthermore, in order to adapt to the rapid change of topology, a link interruption prediction mechanism is integrated into the protocol, which updates the routing strategy based on periodic probabilistic estimates of link stability. Different scenarios with node speed in the range of 10-50 m/s, data rate in the range of 4-40 kbps and number of nodes in the range of 10-100 are simulated on the NS2 platform. Our results show that the QoS metrics (packet delivery rate, end-to-end delay, and throughput) of the proposed protocol are significantly improved when the node speed is higher than 30 m/s, and slightly improved when the node speed is lower than 30 m/s. Our on-demand multipath routing protocol demonstrates high potential to support QoS for high-speed MANET.
Multiple QoS Parameters-Based Routing for Civil Aeronautical Ad Hoc Networks. Aeronautical ad hoc network (AANET) can be applied as in-flight communication systems to allow aircraft to communicate with the ground, in complement to other existing communication systems to support Internet of Things. However, the unique features of civil AANETs present a great challenge to provide efficient and reliable data delivery in such environments. In this paper, we propose a multiple q...
Performance Improvement of Cluster-Based Routing Protocol in VANET. Vehicular ad-hoc NETworks (VANETs) have received considerable attention in recent years due to their unique characteristics, which are different from those of mobile ad-hoc networks, such as rapid topology change, frequent link failure, and high vehicle mobility. The main drawback of VANETs is network instability, which reduces network efficiency. In this paper, we propose three algorithms: a cluster-based lifetime routing (CBLTR) protocol, an Intersection dynamic VANET routing (IDVR) protocol, and a control overhead reduction algorithm (CORA). The CBLTR protocol aims to increase the route stability and average throughput in a bidirectional segment scenario. The cluster heads (CHs) are selected based on the maximum lifetime among all vehicles that are located within each cluster. The IDVR protocol aims to increase the route stability and average throughput, and to reduce end-to-end delay in a grid topology. The elected intersection CH receives a set of candidate shortest routes (SCSR) close to the desired destination from the software-defined network. The IDVR protocol selects the optimal route based on its current location, destination location, and the maximum of the minimum average throughput of the SCSR. Finally, the CORA algorithm aims to reduce the control overhead messages in the clusters by developing a new mechanism to calculate the optimal number of control overhead messages between the cluster members and the CH. We used the SUMO traffic generator and MATLAB to evaluate the performance of our proposed protocols. These protocols significantly outperform many protocols mentioned in the literature, in terms of many parameters.
SCOTRES: Secure Routing for IoT and CPS. Wireless ad-hoc networks are becoming popular due to the emergence of the Internet of Things and cyber-physical systems (CPSs). Due to the open wireless medium, secure routing functionality becomes important. However, the current solutions focus on a constrain set of network vulnerabilities and do not provide protection against newer attacks. In this paper, we propose SCOTRES-a trust-based system ...
AQ-Routing: mobility-, stability-aware adaptive routing protocol for data routing in MANET–IoT systems The Internet of Things is an innovative technology which allows the connection of physical things with the digital world through the use of heterogeneous networks and communication technologies. In an IoT system, a major role is played by the wireless sensor network, as its components comprise sensing, data acquisition, heterogeneous connectivity and data processing. Mobile ad-hoc networks (MANETs) are highly self-reconfiguring networks of mobile nodes which communicate through wireless links. In such a network, each node acts as both a router and a host at the same time. The interaction between MANETs and the Internet of Things opens new ways for service provision in smart environments and raises challenging issues in its networking aspects. One of the main issues in MANET–IoT systems is the mobility of the network nodes: the routing protocol must react effectively to topological changes in its algorithm design. We describe the design and implementation of AQ-Routing and analyze its performance using both simulations and measurements based on our implementation. In general, the networking of such a system is very challenging with regard to routing, as it is affected by system mobility and limited sensor resources. Building upon this observation, this article presents an adaptive routing protocol (AQ-Routing) based on Reinforcement Learning (RL) techniques, which has the ability to detect the level of mobility at different points in time so that each individual node can update its routing metric accordingly. The proposed protocol introduces: (i) a new model, developed via the Q-learning technique, to detect the level of mobility at each node in the network; (ii) a new metric, called Q_metric, which accounts for static and dynamic routing metrics and is updated to match changing network topologies. The protocol can efficiently handle network mobility by preemptively adapting its behaviour thanks to the mobility detection model. The presented simulation results show that the approach effectively improves the stability of links in both static and mobile scenarios and, hence, increases the packet delivery ratio in the global MANET–IoT system.
A Network Lifetime Extension-Aware Cooperative MAC Protocol for MANETs With Optimized Power Control. In this paper, a cooperative medium access control (CMAC) protocol, termed network lifetime extension-aware CMAC (LEA-CMAC), for mobile ad-hoc networks (MANETs) is proposed. The main feature of the LEA-CMAC protocol is to enhance network performance through cooperative transmission to achieve a multi-objective target orientation. The unpredictable nature of wireless communication links results in the degradation of network performance in terms of throughput, end-to-end delay, energy efficiency, and network lifetime of MANETs. Through cooperative transmission, the network performance of MANETs can be improved, provided beneficial cooperation is satisfied and design parameters are carefully selected at the MAC layer. To achieve a multi-objective target-oriented CMAC protocol, we formulated an optimization problem to extend the network lifetime of MANETs. The optimization solution led to the investigation of symmetric and asymmetric transmit power policies. We then proposed a distributed relay selection process to select the best retransmitting node among the qualified relays, with consideration of transmit power, sufficient residual energy after cooperation, and high cooperative gain. The simulation results show that the LEA-CMAC protocol can achieve a multi-objective target orientation by exploiting an asymmetric transmit power policy to improve the network performance.
Distinctive Image Features from Scale-Invariant Keypoints This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
ImageNet Large Scale Visual Recognition Challenge. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
A Comprehensive Survey on Internet of Things (IoT) Toward 5G Wireless Systems Recently, wireless technologies have been growing actively all around the world. In the context of wireless technology, fifth-generation (5G) technology has become one of the most challenging and interesting topics in wireless research. This article provides an overview of the Internet of Things (IoT) in 5G wireless systems. IoT in the 5G system will be a game changer in the future generation. It will open a door for new wireless architectures and smart services. The current cellular network, LTE (4G), will not be sufficient or efficient to meet the demands of multiple-device connectivity, high data rates, more bandwidth, low-latency quality of service (QoS), and low interference. To address these challenges, we consider 5G as the most promising technology. We provide a detailed overview of the challenges and visions of various communication industries in 5G IoT systems. The different layers in 5G IoT systems are discussed in detail. This article provides a comprehensive review of emerging and enabling technologies related to the 5G system that enables IoT. We consider the technology drivers for 5G wireless technology, such as 5G new radio (NR), multiple-input multiple-output antennas with beamforming technology, mm-wave communication technology, heterogeneous networks (HetNets), and the role of augmented reality (AR) in IoT, which are discussed in detail. We also provide a review of low-power wide-area networks (LPWANs), security challenges, and their control measures in the 5G IoT scenario. This article introduces the role of AR in the 5G IoT scenario. This article also discusses the research gaps and future directions. The focus is also on application areas of IoT in 5G systems. We, therefore, outline some of the important research directions in 5G IoT.
A communication robot in a shopping mall This paper reports our development of a communication robot for use in a shopping mall to provide shopping information, offer route guidance, and build rapport. In the development, the major difficulties included sensing human behaviors, conversation in a noisy daily environment, and the needs of unexpected miscellaneous knowledge in the conversation. We chose a network robot system approach, where a single robot's poor sensing capability and knowledge are supplemented by ubiquitous sensors and a human operator. The developed robot system detects a person with floor sensors to initiate interaction, identifies individuals with radio-frequency identification (RFID) tags, gives shopping information while chatting, and provides route guidance with deictic gestures. The robot was partially teleoperated to avoid the difficulty of speech recognition as well as to furnish a new kind of knowledge that only humans can flexibly provide. The information supplied by a human operator was later used to increase the robot's autonomy. For 25 days in a shopping mall, we conducted a field trial and gathered 2642 interactions. A total of 235 participants signed up to use RFID tags and, later, provided questionnaire responses. The questionnaire results are promising in terms of the visitors' perceived acceptability as well as the encouragement of their shopping activities. The results of the teleoperation analysis revealed that the amount of teleoperation gradually decreased, which is also promising.
Comment on "On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes" Comparison of generative and discriminative classifiers is an ever-lasting topic. As an important contribution to this topic, based on their theoretical and empirical comparisons between the naïve Bayes classifier and linear logistic regression, Ng and Jordan (NIPS 841---848, 2001) claimed that there exist two distinct regimes of performance between the generative and discriminative classifiers with regard to the training-set size. In this paper, our empirical and simulation studies, as a complement of their work, however, suggest that the existence of the two distinct regimes may not be so reliable. In addition, for real world datasets, so far there is no theoretically correct, general criterion for choosing between the discriminative and the generative approaches to classification of an observation x into a class y; the choice depends on the relative confidence we have in the correctness of the specification of either p(y|x) or p(x, y) for the data. This can be to some extent a demonstration of why Efron (J Am Stat Assoc 70(352):892---898, 1975) and O'Neill (J Am Stat Assoc 75(369):154---160, 1980) prefer normal-based linear discriminant analysis (LDA) when no model mis-specification occurs but other empirical studies may prefer linear logistic regression instead. Furthermore, we suggest that pairing of either LDA assuming a common diagonal covariance matrix (LDA-驴) or the naïve Bayes classifier and linear logistic regression may not be perfect, and hence it may not be reliable for any claim that was derived from the comparison between LDA-驴 or the naïve Bayes classifier and linear logistic regression to be generalised to all generative and discriminative classifiers.
Adaptive dynamic programming and optimal control of nonlinear nonaffine systems. In this paper, a novel optimal control design scheme is proposed for continuous-time nonaffine nonlinear dynamic systems with unknown dynamics by adaptive dynamic programming (ADP). The proposed methodology iteratively updates the control policy online by using the state and input information without identifying the system dynamics. An ADP algorithm is developed, and can be applied to a general class of nonlinear control design problems. The convergence analysis for the designed control scheme is presented, along with rigorous stability analysis for the closed-loop system. The effectiveness of this new algorithm is illustrated by two simulation examples.
Adaptive Fuzzy Control With Prescribed Performance for Block-Triangular-Structured Nonlinear Systems. In this paper, an adaptive fuzzy control method with prescribed performance is proposed for multi-input and multioutput block-triangular-structured nonlinear systems with immeasurable states. Fuzzy logic systems are adopted to identify the unknown nonlinear system functions. Adaptive fuzzy state observers are designed to solve the problem of unmeasured states, and a new observer-based output-feedb...
Intention-detection strategies for upper limb exosuits: model-based myoelectric vs dynamic-based control The cognitive human-robot interaction between an exosuit and its wearer plays a key role in determining both the biomechanical effects of the device on movements and its perceived effectiveness. There is a lack of evidence, however, on the comparative performance of different control methods, implemented on the same device. Here, we compare two different control approaches on the same robotic suit: a model-based myoelectric control (myoprocessor), which estimates the joint torque from the activation of target muscles, and a dynamic-based control that provides support against gravity using an inverse dynamic model. Tested on a cohort of four healthy participants, assistance from the exosuit results in a marked reduction in the effort of muscles working against gravity with both control approaches (peak reduction of 68.6±18.8% for the dynamic arm model and 62.4±25.1% for the myoprocessor), when compared to an unpowered condition. Neither of the two controllers had an effect on the performance of their users in a joint-angle tracking task (peak errors of 15.4° and 16.4° for the dynamic arm model and myoprocessor, respectively, compared to 13.1° in the unpowered condition). However, our results highlight the remarkable ability of the myoprocessor to seamlessly adapt to changing external dynamics.
1.2
0.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
A Design Framework of Nonlinear H∞ PD Observer for One-Sided Lipschitz Singular Systems With Disturbances This brief addresses the nonlinear H∞ proportional derivative (PD) observer design problem of one-sided Lipschitz (OSL) singular systems with disturbances. In general, the studied nonlinearity can be regarded as an extension of the traditional Lipschitz restriction and has inherent merits with respect to conservativeness. A novel sufficient condition for the existence of the present observer is given by integrating the concept of quadratic inner-boundedness, the free-weighting matrix method and the OSL condition. For the purpose of observer synthesis, the achieved condition is further converted into the form of linear matrix inequalities (LMIs), by which the gains of the H∞ PD observer can be solved simultaneously. Finally, the performance of the theoretical results is shown through an application example of a DC motor.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
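To make the mechanics concrete, the following simplified single-reference, sentence-level sketch computes clipped (modified) n-gram precisions, their geometric mean, and a brevity penalty. The official metric is corpus-level and multi-reference and uses aggregated statistics, so this fragment is only an illustration of the idea, not a faithful reimplementation.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified single-reference BLEU: geometric mean of clipped n-gram
    precisions multiplied by a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum(min(count, ref[g]) for g, count in cand.items())  # clipped counts
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0  # the real metric typically applies smoothing instead
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1 - r / max(c, 1))  # penalize short candidates
    return bp * geo_mean

print(bleu("the quick brown fox jumps over the lazy dog".split(),
           "the quick brown fox jumped over the lazy dog".split()))
```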
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers, all of them capable of stabilizing a specific LTI process, in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
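A minimal sketch of the bidirectional structure, assuming a plain tanh (Elman-style) recurrence: one pass runs left-to-right, a second pass runs right-to-left, and the two hidden states are concatenated per time step. The shapes and nonlinearity are generic illustrative choices, not details taken from the paper.

```python
import numpy as np

def rnn_pass(X, W, U, b):
    """Run a simple tanh RNN over the sequence X of shape (T, d_in); returns (T, d_h)."""
    hs, h_t = [], np.zeros(U.shape[0])
    for t in range(X.shape[0]):
        h_t = np.tanh(W @ X[t] + U @ h_t + b)
        hs.append(h_t)
    return np.stack(hs)

def birnn(X, params):
    """Bidirectional pass: forward and backward hidden states concatenated per frame."""
    Wf, Uf, bf, Wb, Ub, bb = params
    h_fwd = rnn_pass(X, Wf, Uf, bf)                # left-to-right context
    h_bwd = rnn_pass(X[::-1], Wb, Ub, bb)[::-1]    # right-to-left context, re-aligned in time
    return np.concatenate([h_fwd, h_bwd], axis=1)  # shape (T, 2 * d_h)

rng = np.random.default_rng(0)
d_in, d_h, T = 5, 8, 12
params = (rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h)), np.zeros(d_h),
          rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h)), np.zeros(d_h))
print(birnn(rng.normal(size=(T, d_in)), params).shape)  # (12, 16)
```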
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended for Bob, and non-repudiation of receipt evidences destined for Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with a trusted third party (TTP). For the latter ones we discuss the evolution of the TTP's involvement and, among others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb for designing the GA operators and selecting GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probabilistically. Because no crossover rate or mutation rate needs to be selected, the proposed improved GA can be more easily applied to a problem than conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional one and other methods.
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal of developing interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which arise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in particular, in this paper, we unfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidating it based on our comprehensive empirical experience with research in kindergarten settings, in both laboratory and real-world contexts, and (iii) providing a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Mobile Unmanned Aerial Vehicles (UAVs) for Energy-Efficient Internet of Things Communications. In this paper, the efficient deployment and mobility of multiple unmanned aerial vehicles (UAVs), used as aerial base stations to collect data from ground Internet of Things (IoT) devices, are investigated. In particular, to enable reliable uplink communications for the IoT devices with a minimum total transmit power, a novel framework is proposed for jointly optimizing the 3D placement and the mo...
Trajectory Design and Power Control for Multi-UAV Assisted Wireless Networks: A Machine Learning Approach. A novel framework is proposed for the trajectory design of multiple unmanned aerial vehicles (UAVs) based on the prediction of users' mobility information. The problem of joint trajectory design and power control is formulated for maximizing the instantaneous sum transmit rate while satisfying the rate requirement of users. In an effort to solve this pertinent problem, a three-step approach is proposed which is based on machine learning techniques to obtain both the position information of users and the trajectory design of UAVs. Firstly, a multi-agent Q-learning based placement algorithm is proposed for determining the optimal positions of the UAVs based on the initial location of the users. Secondly, in an effort to determine the mobility information of users based on a real dataset, their position data is collected from Twitter to describe the anonymous user-trajectories in the physical world. In the meantime, an echo state network (ESN) based prediction algorithm is proposed for predicting the future positions of users based on the real dataset. Thirdly, a multi-agent Q-learning based algorithm is conceived for predicting the position of UAVs in each time slot based on the movement of users. The algorithm is proved to be able to converge to an optimal state. In this algorithm, multiple UAVs act as agents to find optimal actions by interacting with their environment and learn from their mistakes. Numerical results are provided to demonstrate that as the size of the reservoir increases, the proposed ESN approach improves the prediction accuracy. Finally, we demonstrate that throughput gains of about 17% are achieved.
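The placement steps above build on the standard tabular Q-learning update; the fragment below shows that generic update with epsilon-greedy exploration on an abstract state-action table. The multi-agent structure, reward design, and ESN predictor of the paper are not reproduced here, so the toy environment and parameter values are purely illustrative.

```python
from collections import defaultdict
import random

random.seed(0)
Q = defaultdict(float)            # Q[(state, action)] -> estimated return
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def choose_action(state, actions):
    """Epsilon-greedy selection over a finite action set; ties broken at random."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: (Q[(state, a)], random.random()))

def q_update(state, action, reward, next_state, actions):
    """One-step Q-learning: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# toy usage: learn to walk right on a 5-state chain with reward at the last state
actions = [-1, +1]
for episode in range(200):
    s = 0
    while s != 4:
        a = choose_action(s, actions)
        s_next = min(max(s + a, 0), 4)
        r = 1.0 if s_next == 4 else 0.0
        q_update(s, a, r, s_next, actions)
        s = s_next
print(Q[(0, +1)], Q[(0, -1)])  # moving right should carry the higher value
```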
Deep Reinforcement Learning for User Association and Resource Allocation in Heterogeneous Cellular Networks. Heterogeneous cellular networks can offload the mobile traffic and reduce the deployment costs, which have been considered to be a promising technique in the next-generation wireless network. Due to the non-convex and combinatorial characteristics, it is challenging to obtain an optimal strategy for the joint user association and resource allocation issue. In this paper, a reinforcement learning (...
Unmanned Aerial Vehicle-Aided Communications: Joint Transmit Power and Trajectory Optimization. This letter investigates the transmit power and trajectory optimization problem for unmanned aerial vehicle (UAV)-aided networks. Different from majority of the existing studies with fixed communication infrastructure, a dynamic scenario is considered where a flying UAV provides wireless services for multiple ground nodes simultaneously. To fully exploit the controllable channel variations provide...
Reconfigurable Antennas: Design and Applications The advancement in wireless communications requires the integration of multiple radios into a single platform to maximize connectivity. In this paper, the design process of reconfigurable antennas is discussed. Reconfigurable antennas are proposed to cover different wireless services that operate over a wide frequency range. They show significant promise in addressing new system requirements. They exhibit the ability to modify their geometries and behavior to adapt to changes in surrounding conditions. Reconfigurable antennas can deliver the same throughput as a multiantenna system. They use dynamically variable and adaptable single-antenna geometry without increasing the real estate required to accommodate multiple antennas. The optimization of reconfigurable antenna design and operation by removing unnecessary redundant switches to alleviate biasing issues and improve the system's performance is discussed. Controlling the antenna reconfiguration by software, using Field Programmable Gate Arrays (FPGAs) or microcontrollers is introduced herein. The use of Neural Networks and its integration with graph models on programmable platforms and its effect on the operation of reconfigurable antennas is presented. Finally, the applications of reconfigurable antennas for cognitive radio, Multiple Input Multiple Output (MIMO) channels, and space applications are highlighted.
Characterizing Radio Wave Propagation in Urban Street Canyon With Vehicular Blockage at 28 GHz The communications between two driving vehicles along a narrow street may be limited by the presence of a third vehicle blocking the transmission. In this work, we investigate radio wave propagation at 28 GHz in an urban street canyon scenario by conducting channel measurements, where the vehicle(s) occlude(s) the line-of-sight path. We quantify the impact of the car blockage and study the alternative propagation paths, which can be used for establishing a data link. Based on the obtained results, we report that besides the low-loss (3.4 dB) reflection from the wall, a radio link through the blocking car may potentially be established for data sharing. Specifically, the attenuation through clear windows is 2 dB, while the attenuation caused by sun protective film is 15 dB. Diffraction over the car and propagation in foliage reduce the multipath power drastically by 21–24 dB and 16–19 dB, respectively, and cannot be associated with reliable links. Finally, measurement results were compared with the ray-based simulation data, which demonstrate agreement to within ± 4.3 dB of measured losses.
Energy-Efficient Data Collection in UAV Enabled Wireless Sensor Network. In wireless sensor networks, utilizing the unmanned aerial vehicle (UAV) as a mobile data collector for the sensor nodes (SNs) is an energy-efficient technique to prolong the network lifetime. In this letter, considering a general fading channel model for the SN-UAV links, we jointly optimize the SNs' wake-up schedule and UAV's trajectory to minimize the maximum energy consumption of all SNs, whil...
Dynamic Computation Offloading for Mobile-Edge Computing with Energy Harvesting Devices. Mobile-edge computing (MEC) is an emerging paradigm to meet the ever-increasing computation demands from mobile applications. By offloading the computationally intensive workloads to the MEC server, the quality of computation experience, e.g., the execution latency, could be greatly improved. Nevertheless, as the on-device battery capacities are limited, computation would be interrupted when the battery energy runs out. To provide satisfactory computation performance as well as achieving green computing, it is of significant importance to seek renewable energy sources to power mobile devices via energy harvesting (EH) technologies. In this paper, we will investigate a green MEC system with EH devices and develop an effective computation offloading strategy. The execution cost, which addresses both the execution latency and task failure, is adopted as the performance metric. A low-complexity online algorithm is proposed, namely, the Lyapunov optimization-based dynamic computation offloading algorithm, which jointly decides the offloading decision, the CPU-cycle frequencies for mobile execution, and the transmit power for computation offloading. A unique advantage of this algorithm is that the decisions depend only on the current system state without requiring distribution information of the computation task request, wireless channel, and EH processes. The implementation of the algorithm only requires to solve a deterministic problem in each time slot, for which the optimal solution can be obtained either in closed form or by bisection search. Moreover, the proposed algorithm is shown to be asymptotically optimal via rigorous analysis. Sample simulation results shall be presented to corroborate the theoretical analysis as well as validate the effectiveness of the proposed algorithm.
Cell-Free Massive MIMO versus Small Cells. A Cell-Free Massive MIMO (multiple-input multiple-output) system comprises a very large number of distributed access points (APs), which simultaneously serve a much smaller number of users over the same time/frequency resources based on directly measured channel characteristics. The APs and users have only one antenna each. The APs acquire channel state information through time-division duplex operation and the reception of uplink pilot signals transmitted by the users. The APs perform multiplexing/de-multiplexing through conjugate beamforming on the downlink and matched filtering on the uplink. Closed-form expressions for individual user uplink and downlink throughputs lead to max–min power control algorithms. Max–min power control ensures uniformly good service throughout the area of coverage. A pilot assignment algorithm helps to mitigate the effects of pilot contamination, but power control is far more important in that regard. Cell-Free Massive MIMO has considerably improved performance with respect to a conventional small-cell scheme, whereby each user is served by a dedicated AP, in terms of both 95%-likely per-user throughput and immunity to shadow fading spatial correlation. Under uncorrelated shadow fading conditions, the cell-free scheme provides nearly fivefold improvement in 95%-likely per-user throughput over the small-cell scheme, and tenfold improvement when shadow fading is correlated.
Energy Efficiency Resource Allocation for D2D Communication Network Based on Relay Selection. In order to solve the problem of spectrum resource shortage and energy consumption, we put forward a new model that combines D2D communication with energy harvesting technology: an energy harvesting-aided D2D communication network under cognitive radio (EHA-CRD), where the D2D users harvest energy from the base station and the D2D source communicates with the D2D destination via D2D relays. Our goal is to maximize the energy efficiency (EE) of the network by joint time allocation and relay selection, while taking into account the constraints on the signal-to-noise ratio of the D2D links and the rates of the cellular users. During this process, the energy collection time and communication time are randomly allocated. The EE maximization problem can be divided into two sub-problems: (1) a relay selection problem and (2) a time optimization problem. For the first sub-problem, we propose a weighted-sum maximum algorithm to select the best relay. The second sub-problem is non-convex in time; thus, by using fractional programming theory, we transform it into a standard convex optimization problem and propose an iterative optimization algorithm to obtain the optimal solution. Simulation results show that the proposed relay selection and time optimization algorithms achieve significant improvements over existing algorithms.
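The fractional-programming step mentioned above is usually handled with a Dinkelbach-type loop: the ratio objective is replaced by a sequence of parametrized subproblems. The sketch below shows that generic loop on a toy single-link energy-efficiency objective; the objective, the grid solver, and all constants are illustrative and are not the EHA-CRD formulation.

```python
"""Generic Dinkelbach iteration for maximizing a ratio f(x)/g(x) with g(x) > 0."""
import numpy as np

def dinkelbach(f, g, solve_subproblem, tol=1e-8, max_iter=100):
    lam = 0.0
    for _ in range(max_iter):
        x = solve_subproblem(lam)          # argmax_x f(x) - lam * g(x)
        val = f(x) - lam * g(x)
        if abs(val) < tol:
            break
        lam = f(x) / g(x)                  # update the ratio estimate
    return x, lam

# Toy example: energy efficiency of one link, EE(p) = log2(1 + h*p) / (p + pc).
h, pc = 2.0, 0.1
p_grid = np.linspace(1e-6, 10.0, 100_000)
f = lambda p: np.log2(1.0 + h * p)
g = lambda p: p + pc
solve = lambda lam: p_grid[np.argmax(f(p_grid) - lam * g(p_grid))]

p_star, ee_star = dinkelbach(f, g, solve)
print(f"optimal power = {p_star:.4f} W, energy efficiency = {ee_star:.4f} bit/Hz/J")
```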
A multiple traveling salesman problem model for hot rolling scheduling in Shanghai Baoshan Iron & Steel Complex This paper presents the model, solution method, and system developed and implemented for hot rolling production scheduling. The project is part of a large-scale effort to upgrade production and operations management systems of major iron and steel companies in China. Hot rolling production involves sequence-dependent setup costs. Traditionally the production is scheduled using a greedy serial method and the setup cost is very high. In this study we propose a parallel strategy to model the scheduling problem and solve it using a new modified genetic algorithm (MGA). Combining the model with a man–machine interactive method, a scheduling system is developed. The result of one year’s running in Shanghai Baoshan Iron & Steel Complex shows a 20% improvement over the previous manual scheduling system. As the company is one of the largest and most modernized steel companies in China, the successful application of the scheduling system in this company sets an example for other steel companies, which have even more potential for improvement.
Improved Training of Wasserstein GANs. Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.
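The gradient penalty that replaces weight clipping is straightforward to reproduce; a PyTorch sketch follows. Only the penalty term itself follows the WGAN-GP recipe (penalizing deviations of the critic's gradient norm from 1 at points interpolated between real and fake samples); the tiny critic and the penalty weight of 10 are placeholders.

```python
"""Gradient-penalty term in the style of WGAN-GP (PyTorch sketch)."""
import torch
import torch.nn as nn

def gradient_penalty(critic, real, fake, device="cpu"):
    batch = real.size(0)
    eps = torch.rand(batch, *([1] * (real.dim() - 1)), device=device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(x_hat)
    grads = torch.autograd.grad(outputs=scores.sum(), inputs=x_hat,
                                create_graph=True)[0]
    grad_norm = grads.reshape(batch, -1).norm(2, dim=1)
    return ((grad_norm - 1.0) ** 2).mean()

# Stand-in critic and data, just to exercise the penalty computation.
critic = nn.Sequential(nn.Linear(64, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
real = torch.randn(8, 64)
fake = torch.randn(8, 64)
gp = gradient_penalty(critic, real, fake)
loss_d = critic(fake).mean() - critic(real).mean() + 10.0 * gp   # lambda = 10
print(loss_d.item())
```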
Bon Appetit! Robot Persuasion for Food Recommendation. The integration of social robots within service industries requires social robots to be persuasive. We conducted a vignette experiment to investigate the persuasiveness of a human, robot, and an information kiosk when offering consumers a restaurant recommendation. We found that embodiment type significantly affects the persuasiveness of the agent, but only when using a specific recommendation sentence. These preliminary results suggest that human-like features of an agent may serve to boost persuasion in recommendation systems. However, the extent of the effect is determined by the nature of the given recommendation.
Energy harvesting algorithm considering max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with poor energy always have a bad effect on the data rate or max flow. These nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we assume an energy harvesting WSN environment to investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy harvesting WSN environments, sensor nodes can obtain energy replenishment by using MCs or collecting energy from nature by themselves. In our research, we use MCs to improve the energy of the sensor nodes by performing multiple rounds of unified scheduling, and finally achieve the purpose of increasing the max flow at sinks. Firstly, we model this problem as a Linear Program (LP) to search the max flow in a round of charging scheduling and prove that the problem is NP-hard. In order to solve the problem, we propose a heuristic approach: deploying MCs in units of paths with the lowest energy node priority. To reduce the energy consumption of MCs and increase the charging efficiency, we also take the optimization of MCs’ moving distance into our consideration. Finally, we extend the method to multiple rounds of scheduling called BottleNeck. Simulation results show that BottleNeck performs well at increasing max flow.
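The max-flow view with per-node energy budgets can be illustrated with the standard node-splitting trick, where each sensor's residual energy becomes the capacity of an internal edge. The networkx sketch below uses a made-up five-node topology and packet counts; it only shows how recharging a low-energy node raises the achievable flow, not the paper's LP or scheduling heuristic.

```python
"""Max-flow sketch for an energy-limited sensor network (illustrative topology)."""
import networkx as nx

energy_packets = {"a": 5, "b": 2, "c": 8}           # packets each node can still relay
links = [("src", "a"), ("src", "b"), ("a", "c"), ("b", "c"), ("c", "sink")]

G = nx.DiGraph()
for v, cap in energy_packets.items():
    G.add_edge(f"{v}_in", f"{v}_out", capacity=cap)  # node capacity via splitting

def endpoint(v, role):
    if v in ("src", "sink"):
        return v
    return f"{v}_in" if role == "head" else f"{v}_out"

for u, v in links:
    G.add_edge(endpoint(u, "tail"), endpoint(v, "head"), capacity=100)  # ample link capacity

flow_value, _ = nx.maximum_flow(G, "src", "sink")
print("max flow before charging:", flow_value)

# Charging the lowest-energy node (b) raises its internal capacity and the max flow.
G["b_in"]["b_out"]["capacity"] = 10
print("max flow after charging b:", nx.maximum_flow(G, "src", "sink")[0])
```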
score_0–score_13: 1.024548, 0.026667, 0.026667, 0.025318, 0.022222, 0.022222, 0.012661, 0.003806, 0.000025, 0, 0, 0, 0, 0
Unmanned Aerial Vehicle Base Station (UAV-BS) Deployment With Millimeter-Wave Beamforming. Unmanned aerial vehicle (UAV) with flexible mobility and low cost has been a promising technology for wireless communication. Thus, it can be used for wireless data collection in Internet of Things (IoT). In this article, we consider millimeter-wave (mmWave) communication on a UAV platform, where the UAV base station (UAV-BS) serves multiple ground users, which generate big sensor data. Both the d...
Artificial fish swarm algorithm: a survey of the state-of-the-art, hybridization, combinatorial and indicative applications AFSA (artificial fish-swarm algorithm) is one of the best methods of optimization among the swarm intelligence algorithms. This algorithm is inspired by the collective movement of the fish and their various social behaviors. Based on a series of instinctive behaviors, the fish always try to maintain their colonies and accordingly demonstrate intelligent behaviors. Searching for food, immigration and dealing with dangers all happen in a social form, and interactions between all fish in a group result in an intelligent social behavior. This algorithm has many advantages including high convergence speed, flexibility, fault tolerance and high accuracy. This paper is a review of the AFSA algorithm and describes the evolution of this algorithm along with all improvements, its combination with various methods as well as its applications. There are many optimization methods which have an affinity with this method, and combining them with AFSA improves its performance. Its disadvantages include high time complexity, lack of balance between global and local search, in addition to lack of benefiting from the experiences of group members for the next movements.
A dynamic N threshold prolong lifetime method for wireless sensor nodes. Ubiquitous computing is a technology that makes many computers available throughout the physical environment, at any place and at any time, while tending to remain invisible to users in everyday life. Ubiquitous computing uses sensors extensively to provide important information such that applications can adjust their behavior. A Wireless Sensor Network (WSN) has been applied to implement such an architecture. To ensure continuous service, a dynamic N threshold power saving method for WSN is developed. A threshold N has been derived to obtain minimum power consumption for the sensor node while considering each different data arrival rate. We present a theoretical analysis of the probability variation of each state under different arrival rates, service rates and collision probabilities. Several experiments have been conducted to demonstrate the effectiveness of our research. Our method can be applied to prolong the service time of a ubiquitous computing network to cope with the network disconnection issue.
Fuzzy Mathematical Programming and Self-Adaptive Artificial Fish Swarm Algorithm for Just-in-Time Energy-Aware Flow Shop Scheduling Problem With Outsourcing Option Flow shop scheduling (FSS) problem constitutes a major part of production planning in every manufacturing organization. It aims at determining the optimal sequence of processing jobs on available machines within a given customer order. In this article, a novel biobjective mixed-integer linear programming (MILP) model is proposed for FSS with an outsourcing option and just-in-time delivery in order to simultaneously minimize the total cost of the production system and total energy consumption. Each job is considered to be either scheduled in-house or to be outsourced to one of the possible subcontractors. To efficiently solve the problem, a hybrid technique is proposed based on an interactive fuzzy solution technique and a self-adaptive artificial fish swarm algorithm (SAAFSA). The proposed model is treated as a single objective MILP using a multiobjective fuzzy mathematical programming technique based on the ε-constraint, and SAAFSA is then applied to provide Pareto optimal solutions. The obtained results demonstrate the usefulness of the suggested methodology and high efficiency of the algorithm in comparison with CPLEX solver in different problem instances. Finally, a sensitivity analysis is implemented on the main parameters to study the behavior of the objectives according to the real-world conditions.
Deep Reinforcement Learning for Energy-Efficient Federated Learning in UAV-Enabled Wireless Powered Networks Federated learning (FL) is a promising solution to privacy preservation for data-driven deep learning approaches. However, enabling FL in unmanned aerial vehicle (UAV)-assisted wireless networks is still challenging due to limited resources and battery capacity in the UAV and user devices. In this regard, we propose a deep reinforcement learning (DRL)-based framework for joint UAV placement and re...
Energy-Efficient Optimization for Wireless Information and Power Transfer in Large-Scale MIMO Systems Employing Energy Beamforming In this letter, we consider a large-scale multiple-input multiple-output (MIMO) system where the receiver should harvest energy from the transmitter by wireless power transfer to support its wireless information transmission. The energy beamforming in the large-scale MIMO system is utilized to address the challenging problem of long-distance wireless power transfer. Furthermore, considering the limitation of the power in such a system, this letter focuses on the maximization of the energy efficiency of information transmission (bit per Joule) while satisfying the quality-of-service (QoS) requirement, i.e. delay constraint, by jointly optimizing transfer duration and transmit power. By solving the optimization problem, we derive an energy-efficient resource allocation scheme. Numerical results validate the effectiveness of the proposed scheme.
Accurate Self-Localization in RFID Tag Information Grids Using FIR Filtering Grid navigation spaces nested with the radio-frequency identification (RFID) tags are promising for industrial and other needs, because each tag can deliver information about a local two-dimensional or three-dimensional surrounding. The approach, however, requires high accuracy in vehicle self-localization. Otherwise, errors may lead to collisions; possibly even fatal. We propose a new extended finite impulse response (EFIR) filtering algorithm and show that it meets this need. The EFIR filter requires an optimal averaging interval, but does not involve the noise statistics which are often not well known to the engineer. It is more accurate than the extended Kalman filter (EKF) under real operation conditions and its iterative algorithm has the Kalman form. Better performance of the proposed EFIR filter is demonstrated based on extensive simulations in a comparison to EKF, which is widely used in RFID tag grids. We also show that errors in noise covariances may provoke divergence in EKF, whereas the EFIR filter remains stable and is thus more robust.
Evolutionary computation: comments on the history and current state Evolutionary computation has started to receive significant attention during the last decade, although the origins can be traced back to the late 1950's. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) (with links to genetic programming (GP) and classifier systems (CS)), evolution strategies (ES), and evolutionary programming (EP) by analysis and comparison of their most important constituents (i.e. representations, variation operators, reproduction, and selection mechanism). Finally, we give a brief overview on the manifold of application domains, although this necessarily must remain incomplete
Supporting social navigation on the World Wide Web This paper discusses a navigation behavior on Internet information services, in particular the World Wide Web, which is characterized by pointing out of information using various communication tools. We call this behavior social navigation as it is based on communication and interaction with other users, be that through email, or any other means of communication. Social navigation phenomena are quite common although most current tools (like Web browsers or email clients) offer very little support for it. We describe why social navigation is useful and how it can be better supported in future systems. We further describe two prototype systems that, although originally not designed explicitly as tools for social navigation, provide features that are typical for social navigation systems. One of these systems, the Juggler system, is a combination of a textual virtual environment and a Web client. The other system is a prototype of a Web- hotlist organizer, called Vortex. We use both systems to describe fundamental principles of social navigation systems.
Proofs of Storage from Homomorphic Identification Protocols Proofs of storage (PoS) are interactive protocols allowing a client to verify that a server faithfully stores a file. Previous work has shown that proofs of storage can be constructed from any homomorphic linear authenticator (HLA). The latter, roughly speaking, are signature/message authentication schemes where `tags' on multiple messages can be homomorphically combined to yield a `tag' on any linear combination of these messages. We provide a framework for building public-key HLAs from any identification protocol satisfying certain homomorphic properties. We then show how to turn any public-key HLA into a publicly-verifiable PoS with communication complexity independent of the file length and supporting an unbounded number of verifications. We illustrate the use of our transformations by applying them to a variant of an identification protocol by Shoup, thus obtaining the first unbounded-use PoS based on factoring (in the random oracle model).
Design, Implementation, and Experimental Results of a Quaternion-Based Kalman Filter for Human Body Motion Tracking Real-time tracking of human body motion is an important technology in synthetic environments, robotics, and other human-computer interaction applications. This paper presents an extended Kalman filter designed for real-time estimation of the orientation of human limb segments. The filter processes data from small inertial/magnetic sensor modules containing triaxial angular rate sensors, accelerometers, and magnetometers. The filter represents rotation using quaternions rather than Euler angles or axis/angle pairs. Preprocessing of the acceleration and magnetometer measurements using the Quest algorithm produces a computed quaternion input for the filter. This preprocessing reduces the dimension of the state vector and makes the measurement equations linear. Real-time implementation and testing results of the quaternion-based Kalman filter are presented. Experimental results validate the filter design, and show the feasibility of using inertial/magnetic sensor modules for real-time human body motion tracking
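The gyro-driven prediction half of such quaternion filters reduces to integrating q_dot = 0.5 q ⊗ (0, ω) and renormalizing; a numpy sketch is below. The QUEST preprocessing and the Kalman measurement update described in the abstract are omitted, and the sample rate and rotation are arbitrary.

```python
"""Quaternion propagation from gyro rates (prediction step only; sketch)."""
import numpy as np

def quat_mul(q, r):
    # Hamilton product of two quaternions stored as [w, x, y, z].
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def propagate(q, omega, dt):
    """One Euler step of q_dot = 0.5 * q (x) (0, omega), then renormalize."""
    q_dot = 0.5 * quat_mul(q, np.concatenate(([0.0], omega)))
    q = q + q_dot * dt
    return q / np.linalg.norm(q)

q = np.array([1.0, 0.0, 0.0, 0.0])           # identity orientation
omega = np.array([0.0, 0.0, np.pi / 2])       # 90 deg/s yaw rate
for _ in range(100):                          # integrate 1 s at 100 Hz
    q = propagate(q, omega, 0.01)
print(q)   # close to [cos(45 deg), 0, 0, sin(45 deg)], i.e. a 90 deg yaw rotation
```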
Reinforcement Q-learning for optimal tracking control of linear discrete-time systems with unknown dynamics. In this paper, a novel approach based on the Q-learning algorithm is proposed to solve the infinite-horizon linear quadratic tracker (LQT) for unknown discrete-time systems in a causal manner. It is assumed that the reference trajectory is generated by a linear command generator system. An augmented system composed of the original system and the command generator is constructed and it is shown that the value function for the LQT is quadratic in terms of the state of the augmented system. Using the quadratic structure of the value function, a Bellman equation and an augmented algebraic Riccati equation (ARE) for solving the LQT are derived. In contrast to the standard solution of the LQT, which requires the solution of an ARE and a noncausal difference equation simultaneously, in the proposed method the optimal control input is obtained by only solving an augmented ARE. A Q-learning algorithm is developed to solve online the augmented ARE without any knowledge about the system dynamics or the command generator. Convergence to the optimal solution is shown. A simulation example is used to verify the effectiveness of the proposed control scheme.
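For a concrete picture of the augmented tracker, the sketch below solves the discounted augmented Riccati equation for a scalar toy system with a constant reference by fixed-point iteration. This deliberately uses a known model purely to show the structure the paper describes; the paper's point is that Q-learning reaches the same solution without the model. The system matrices, weights, and discount factor are arbitrary choices.

```python
"""Discounted augmented ARE for a scalar LQ tracking toy problem (numpy sketch)."""
import numpy as np

# Toy system: x+ = A x + B u, output y = C x, constant reference r+ = F r.
A, B, C, F = np.array([[0.9]]), np.array([[0.5]]), np.array([[1.0]]), np.array([[1.0]])
Q, R, gamma = np.array([[100.0]]), np.array([[1.0]]), 0.9

# Augmented state X = [x; r] and tracking-error weight in augmented coordinates.
T = np.block([[A, np.zeros((1, 1))], [np.zeros((1, 1)), F]])
B1 = np.vstack([B, np.zeros((1, 1))])
C1 = np.hstack([C, -np.eye(1)])
Q1 = C1.T @ Q @ C1

# Fixed-point (value) iteration on the discounted augmented ARE.
P = np.zeros((2, 2))
for _ in range(500):
    S = R + gamma * B1.T @ P @ B1
    P = Q1 + gamma * T.T @ P @ T \
        - gamma**2 * T.T @ P @ B1 @ np.linalg.inv(S) @ B1.T @ P @ T

K = gamma * np.linalg.inv(R + gamma * B1.T @ P @ B1) @ B1.T @ P @ T
print("tracking gain K =", K)

# Closed-loop check: the output approaches the constant reference r = 1.
x, r = np.array([[0.0]]), np.array([[1.0]])
for _ in range(50):
    u = -K @ np.vstack([x, r])
    x = A @ x + B @ u
print("final output:", (C @ x).item())
```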
Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network. In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (including data from 63 patients with 34,281 events) and testing (including data from 19 patients with 8571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F-score of 0.99 were attained with the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
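A 1-D CNN of the kind described (six convolution blocks with ReLU, max pooling, and dropout over a single-lead ECG segment) can be sketched in PyTorch as follows. The channel counts, kernel sizes, dropout rate, and the assumed 60 s / 100 Hz input length are placeholder choices, not the authors' configuration.

```python
"""Sketch of a six-block 1-D CNN for per-event OSA classification (PyTorch)."""
import torch
import torch.nn as nn

class OSAConvNet(nn.Module):
    def __init__(self, in_len=6000, n_classes=2):    # e.g. 60 s of ECG at 100 Hz
        super().__init__()
        blocks, ch_in = [], 1
        for ch_out in (16, 24, 32, 48, 64, 96):      # six convolutional blocks
            blocks += [nn.Conv1d(ch_in, ch_out, kernel_size=5, padding=2),
                       nn.ReLU(),
                       nn.MaxPool1d(2),
                       nn.Dropout(0.1)]
            ch_in = ch_out
        self.features = nn.Sequential(*blocks)
        self.classifier = nn.Linear(ch_in * (in_len // 2 ** 6), n_classes)

    def forward(self, x):                             # x: (batch, 1, in_len)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = OSAConvNet()
logits = model(torch.randn(4, 1, 6000))               # four one-minute ECG segments
print(logits.shape)                                    # torch.Size([4, 2])
```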
Energy harvesting algorithm considering max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with poor energy always have a bad effect on the data rate or max flow. These nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we assume an energy harvesting WSN environment to investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy harvesting WSN environments, sensor nodes can obtain energy replenishment by using MCs or collecting energy from nature by themselves. In our research, we use MCs to improve the energy of the sensor nodes by performing multiple rounds of unified scheduling, and finally achieve the purpose of increasing the max flow at sinks. Firstly, we model this problem as a Linear Program (LP) to search the max flow in a round of charging scheduling and prove that the problem is NP-hard. In order to solve the problem, we propose a heuristic approach: deploying MCs in units of paths with the lowest energy node priority. To reduce the energy consumption of MCs and increase the charging efficiency, we also take the optimization of MCs’ moving distance into our consideration. Finally, we extend the method to multiple rounds of scheduling called BottleNeck. Simulation results show that BottleNeck performs well at increasing max flow.
score_0–score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.04, 0, 0, 0, 0, 0, 0, 0, 0
DADNet: Dilated-Attention-Deformable ConvNet for Crowd Counting Most existing CNN-based methods for crowd counting always suffer from large scale variation in objects of interest, leading to density maps of low quality. In this paper, we propose a novel deep model called Dilated-Attention-Deformable ConvNet (DADNet), which consists of two schemes: multi-scale dilated attention and deformable convolutional DME (Density Map Estimation). The proposed model explores a scale-aware attention fusion with various dilation rates to capture different visual granularities of crowd regions of interest, and utilizes deformable convolutions to generate a high-quality density map. There are two merits as follows: (1) varying dilation rates can effectively identify discriminative regions by enlarging the receptive fields of convolutional kernels upon surrounding region cues, and (2) deformable CNN operations promote the accuracy of object localization in the density map by augmenting the spatial object location sampling with adaptive offsets and scalars. DADNet not only excels at capturing rich spatial context of salient and tiny regions of interest simultaneously, but also keeps a robustness to background noises, such as partially occluded objects. Extensive experiments on benchmark datasets verify that DADNet achieves the state-of-the-art performance. Visualization results of the multi-scale attention maps further validate the remarkable interpretability achieved by our solution.
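The scale-aware fusion idea (parallel dilated branches weighted by learned spatial attention maps) can be sketched as a small PyTorch module. This is a generic re-implementation of the idea, not the DADNet architecture; the dilation rates, channel counts, and the softmax fusion rule are assumptions made for illustration.

```python
"""Multi-dilation feature block with per-branch spatial attention (PyTorch sketch)."""
import torch
import torch.nn as nn

class DilatedAttentionFusion(nn.Module):
    def __init__(self, channels, dilations=(1, 2, 3, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in dilations
        )
        # One spatial attention map per dilation branch, normalized across branches.
        self.attn = nn.Conv2d(channels, len(dilations), kernel_size=1)

    def forward(self, x):
        feats = torch.stack([b(x) for b in self.branches], dim=1)   # (B, D, C, H, W)
        weights = torch.softmax(self.attn(x), dim=1).unsqueeze(2)   # (B, D, 1, H, W)
        return (weights * feats).sum(dim=1)                         # fused (B, C, H, W)

block = DilatedAttentionFusion(channels=32)
out = block(torch.randn(2, 32, 64, 64))
print(out.shape)   # torch.Size([2, 32, 64, 64])
```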
Movie2Comics: Towards a Lively Video Content Presentation As a type of artwork, comics is prevalent and popular around the world. However, despite the availability of assistive software and tools, the creation of comics is still a labor-intensive and time-consuming process. This paper proposes a scheme that is able to automatically turn a movie clip into comics. Two principles are followed in the scheme: 1) optimizing the information preservation of the movie; and 2) generating outputs following the rules and the styles of comics. The scheme mainly contains three components: script-face mapping, descriptive picture extraction, and cartoonization. The script-face mapping utilizes face tracking and recognition techniques to accomplish the mapping between characters' faces and their scripts. The descriptive picture extraction then generates a sequence of frames for presentation. Finally, the cartoonization is accomplished via three steps: panel scaling, stylization, and comics layout design. Experiments are conducted on a set of movie clips and the results have demonstrated the usefulness and the effectiveness of the scheme.
View-Based Discriminative Probabilistic Modeling for 3D Object Retrieval and Recognition In view-based 3D object retrieval and recognition, each object is described by multiple views. A central problem is how to estimate the distance between two objects. Most conventional methods integrate the distances of view pairs across two objects as an estimation of their distance. In this paper, we propose a discriminative probabilistic object modeling approach. It builds probabilistic models for each object based on the distribution of its views, and the distance between two objects is defined as the upper bound of the Kullback–Leibler divergence of the corresponding probabilistic models. 3D object retrieval and recognition is accomplished based on the distance measures. We first learn models for each object by the adaptation from a set of global models with a maximum likelihood principle. A further adaption step is then performed to enhance the discriminative ability of the models. We conduct experiments on the ETH 3D object dataset, the National Taiwan University 3D model dataset, and the Princeton Shape Benchmark. We compare our approach with different methods, and experimental results demonstrate the superiority of our approach.
Doa-Gan: Dual-Order Attentive Generative Adversarial Network For Image Copy-Move Forgery Detection And Localization Images can be manipulated for nefarious purposes to hide content or to duplicate certain objects through copy-move operations. Discovering a well-crafted copy-move forgery in images can be very challenging for both humans and machines; for example, an object on a uniform background can be replaced by an image patch of the same background. In this paper, we propose a Generative Adversarial Network with a dual-order attention model to detect and localize copy-move forgeries. In the generator, the first-order attention is designed to capture copy-move location information, and the second-order attention exploits more discriminative features for the patch co-occurrence. Both attention maps are extracted from the affinity matrix and are used to fuse location-aware and co-occurrence features for the final detection and localization branches of the network. The discriminator network is designed to further ensure more accurate localization results. To the best of our knowledge, we are the first to propose such a network architecture with the 1st-order attention mechanism from the affinity matrix. We have performed extensive experimental validation and our state-of-the-art results strongly demonstrate the efficacy of the proposed approach.
Low-Rank Autoregressive Tensor Completion for Spatiotemporal Traffic Data Imputation Spatiotemporal traffic time series (e.g., traffic volume/speed) collected from sensing systems are often incomplete with considerable corruption and large amounts of missing values, preventing users from harnessing the full power of the data. Missing data imputation has been a long-standing research topic and critical application for real-world intelligent transportation systems. A widely applied imputation method is low-rank matrix/tensor completion; however, the low-rank assumption only preserves the global structure while ignores the strong local consistency in spatiotemporal data. In this paper, we propose a low-rank autoregressive tensor completion (LATC) framework by introducing temporal variation as a new regularization term into the completion of a third-order (sensor x time of day x day) tensor. The third-order tensor structure allows us to better capture the global consistency of traffic data, such as the inherent seasonality and day-to-day similarity. To achieve local consistency, we design the temporal variation by imposing an autoregressive model for each time series with coefficients as learnable parameters. Different from previous spatial and temporal regularization schemes, the minimization of temporal variation can better characterize temporal generative mechanisms beyond local smoothness, allowing us to deal with more challenging scenarios such as ``blackout'' missing. To solve the optimization problem in LATC, we introduce an alternating minimization scheme that estimates the low-rank tensor and autoregressive coefficients iteratively. We conduct extensive numerical experiments on several real-world traffic data sets, and our results demonstrate the effectiveness of LATC in diverse missing scenarios.
A deep learning approach to patch-based image inpainting forensics. Although image inpainting is now an effective image editing technique, limited work has been done for inpainting forensics. The main drawbacks of the conventional inpainting forensics methods lie in the difficulties of inpainting feature extraction and the very high computational cost. In this paper, we propose a novel approach based on a convolutional neural network (CNN) to detect patch-based inpainting operations. Specifically, the CNN is built following the encoder–decoder network structure, which allows us to predict the inpainting probability for each pixel in an image. To guide the CNN to automatically learn the inpainting features, a label matrix is generated for the CNN training by assigning a class label to each pixel of an image, and the designed weighted cross-entropy serves as the loss function. They further help to strongly supervise the CNN to capture the manipulation information rather than the image content features. With the established CNN, inpainting forensics does not need to consider feature extraction or classifier design, or to use any post-processing as in conventional forensics methods. These components are combined into a unified framework and optimized simultaneously. Experimental results show that the proposed method achieves superior performance in terms of true positive rate, false positive rate and running time, as compared with state-of-the-art methods for inpainting forensics, and is very robust against JPEG compression and scaling manipulations.
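The per-pixel label matrix and weighted cross-entropy described above map directly onto PyTorch's built-in loss. In the sketch below the class weights and the toy encoder-decoder are illustrative stand-ins, not the authors' network or weighting scheme.

```python
"""Per-pixel weighted cross-entropy for inpainting localization (PyTorch sketch)."""
import torch
import torch.nn as nn

# Toy encoder-decoder predicting a 2-class (pristine / inpainted) map per pixel.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # encoder
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # decoder
    nn.ConvTranspose2d(16, 2, 2, stride=2),
)

# Class weights: inpainted pixels are usually a minority, so they get a larger weight.
criterion = nn.CrossEntropyLoss(weight=torch.tensor([0.2, 0.8]))

images = torch.randn(4, 3, 64, 64)
labels = torch.randint(0, 2, (4, 64, 64))      # per-pixel class-label matrix
logits = model(images)                          # (4, 2, 64, 64)
loss = criterion(logits, labels)
loss.backward()
print(float(loss))
```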
Rich Models for Steganalysis of Digital Images. We describe a novel general strategy for building steganography detectors for digital images. The process starts with assembling a rich model of the noise component as a union of many diverse submodels formed by joint distributions of neighboring samples from quantized image noise residuals obtained using linear and nonlinear high-pass filters. In contrast to previous approaches, we make the model assembly a part of the training process driven by samples drawn from the corresponding cover- and stego-sources. Ensemble classifiers are used to assemble the model as well as the final steganalyzer due to their low computational complexity and ability to efficiently work with high-dimensional feature spaces and large training sets. We demonstrate the proposed framework on three steganographic algorithms designed to hide messages in images represented in the spatial domain: HUGO, edge-adaptive algorithm by Luo, and optimally coded ternary $\pm 1$ embedding. For each algorithm, we apply a simple submodel-selection technique to increase the detection accuracy per model dimensionality and show how the detection saturates with increasing complexity of the rich model. By observing the differences between how different submodels engage in detection, an interesting interplay between the embedding and detection is revealed. Steganalysis built around rich image models combined with ensemble classifiers is a promising direction towards automatizing steganalysis for a wide spectrum of steganographic schemes.
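One submodel of such a rich model, a single linear high-pass residual followed by quantization, truncation, and a co-occurrence histogram, can be sketched as follows. A real rich model unions many residuals (linear and nonlinear) and feeds the concatenated histograms to an ensemble classifier; the kernel, quantization step, and truncation threshold below are just one common choice.

```python
"""One noise-residual submodel in the spirit of rich-model steganalysis (numpy/scipy)."""
import numpy as np
from scipy.signal import convolve2d

def residual_features(img, q=1.0, T=2):
    # Second-order horizontal predictor: residual = x[i, j-1] - 2 x[i, j] + x[i, j+1].
    kernel = np.array([[1.0, -2.0, 1.0]])
    res = convolve2d(img.astype(float), kernel, mode="valid")
    # Quantize and truncate the residual to the symmetric range [-T, T].
    r = np.clip(np.round(res / q), -T, T).astype(int)
    # Joint histogram of horizontally adjacent residual pairs (a 2-D co-occurrence).
    hist, _, _ = np.histogram2d(r[:, :-1].ravel(), r[:, 1:].ravel(),
                                bins=2 * T + 1, range=[[-T - 0.5, T + 0.5]] * 2)
    return hist.ravel() / hist.sum()

img = np.random.randint(0, 256, size=(128, 128))
print(residual_features(img).shape)   # (25,): one 5x5 co-occurrence submodel
```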
A survey on sensor networks The advancement in wireless communications and electronics has enabled the development of low-cost sensor networks. The sensor networks can be used for various application areas (e.g., health, military, home). For different application areas, there are different technical issues that researchers are currently resolving. The current state of the art of sensor networks is captured in this article, where solutions are discussed under their related protocol stack layer sections. This article also points out the open research issues and intends to spark new interests and developments in this field.
Joint Optimization of Radio and Computational Resources for Multicell Mobile-Edge Computing Migrating computationally intensive tasks from mobile devices to more resourceful cloud servers is a promising technique to increase the computational capacity of mobile devices while saving their battery energy. In this paper, we consider a MIMO multicell system where multiple mobile users (MUs) ask for computation offloading to a common cloud server. We formulate the offloading problem as the joint optimization of the radio resources (the transmit precoding matrices of the MUs) and the computational resources (the CPU cycles/second assigned by the cloud to each MU) in order to minimize the overall users' energy consumption, while meeting latency constraints. The resulting optimization problem is nonconvex (in the objective function and constraints). Nevertheless, in the single-user case, we are able to compute the global optimal solution in closed form. In the more challenging multiuser scenario, we propose an iterative algorithm, based on a novel successive convex approximation technique, converging to a local optimal solution of the original nonconvex problem. We then show that the proposed algorithmic framework naturally leads to a distributed and parallel implementation across the radio access points, requiring only a limited coordination/signaling with the cloud. Numerical results show that the proposed schemes outperform disjoint optimization algorithms.
Symbolic model checking for real-time systems We describe finite-state programs over real-numbered time in a guarded-command language with real-valued clocks or, equivalently, as finite automata with real-valued clocks. Model checking answers the question which states of a real-time program satisfy a branching-time specification (given in an extension of CTL with clock variables). We develop an algorithm that computes this set of states symbolically as a fixpoint of a functional on state predicates, without constructing the state space. For this purpose, we introduce a μ-calculus on computation trees over real-numbered time. Unfortunately, many standard program properties, such as response for all nonzeno execution sequences (during which time diverges), cannot be characterized by fixpoints: we show that the expressiveness of the timed μ-calculus is incomparable to the expressiveness of timed CTL. Fortunately, this result does not impair the symbolic verification of "implementable" real-time programs-those whose safety constraints are machine-closed with respect to diverging time and whose fairness constraints are restricted to finite upper bounds on clock values. All timed CTL properties of such programs are shown to be computable as finitely approximable fixpoints in a simple decidable theory.
A Comparative Study of Distributed Learning Environments on Learning Outcomes Advances in information and communication technologies have fueled rapid growth in the popularity of technology-supported distributed learning (DL). Many educational institutions, both academic and corporate, have undertaken initiatives that leverage the myriad of available DL technologies. Despite their rapid growth in popularity, however, alternative technologies for DL are seldom systematically evaluated for learning efficacy. Considering the increasing range of information and communication technologies available for the development of DL environments, we believe it is paramount for studies to compare the relative learning outcomes of various technologies.In this research, we employed a quasi-experimental field study approach to investigate the relative learning effectiveness of two collaborative DL environments in the context of an executive development program. We also adopted a framework of hierarchical characteristics of group support system (GSS) technologies, outlined by DeSanctis and Gallupe (1987), as the basis for characterizing the two DL environments.One DL environment employed a simple e-mail and listserv capability while the other used a sophisticated GSS (herein referred to as Beta system). Interestingly, the learning outcome of the e-mail environment was higher than the learning outcome of the more sophisticated GSS environment. The post-hoc analysis of the electronic messages indicated that the students in groups using the e-mail system exchanged a higher percentage of messages related to the learning task. The Beta system users exchanged a higher level of technology sense-making messages. No significant difference was observed in the students' satisfaction with the learning process under the two DL environments.
A Framework of Joint Mobile Energy Replenishment and Data Gathering in Wireless Rechargeable Sensor Networks Recent years have witnessed the rapid development and proliferation of techniques on improving energy efficiency for wireless sensor networks. Although these techniques can relieve the energy constraint on wireless sensors to some extent, the lifetime of wireless sensor networks is still limited by sensor batteries. Recent studies have shown that energy rechargeable sensors have the potential to provide perpetual network operations by capturing renewable energy from external environments. However, the low output of energy capturing devices can only provide intermittent recharging opportunities to support low-rate data services due to spatial-temporal, geographical or environmental factors. To provide steady and high recharging rates and achieve energy efficient data gathering from sensors, in this paper, we propose to utilize mobility for joint energy replenishment and data gathering. In particular, a multi-functional mobile entity, called SenCar in this paper, is employed, which serves not only as a mobile data collector that roams over the field to gather data via short-range communication but also as an energy transporter that charges static sensors on its migration tour via wireless energy transmissions. Taking advantages of SenCar's controlled mobility, we focus on the joint optimization of effective energy charging and high-performance data collections. We first study this problem in general networks with random topologies. We give a two-step approach for the joint design. In the first step, the locations of a subset of sensors are periodically selected as anchor points, where the SenCar will sequentially visit to charge the sensors at these locations and gather data from nearby sensors in a multi-hop fashion. To achieve a desirable balance between energy replenishment amount and data gathering latency, we provide a selection algorithm to search for a maximum number of anchor points where sensors hold the least battery energy, and meanwhile by visiting them, the tour length of the SenCar is no more than a threshold. In the second step, we consider data gathering performance when the SenCar migrates among these anchor points. We formulate the problem into a network utility maximization problem and propose a distributed algorithm to adjust data rates at which sensors send buffered data to the SenCar, link scheduling and flow routing so as to adapt to the up-to-date energy replenishing status of sensors. Besides general networks, we also study a special scenario where sensors are regularly deployed. For this case we can provide a simplified solution of lower complexity by exploiting the symmetry of the topology. Finally, we validate the effectiveness of our approaches by extensive numerical results, which show that our solutions can achieve perpetual network operations and provide high network utility.
Finite-Time Adaptive Fuzzy Tracking Control Design for Nonlinear Systems. This paper addresses the finite-time tracking problem of nonlinear pure-feedback systems. Unlike the literature on traditional finite-time stabilization, in this paper the nonlinear system functions, including the bounding functions, are all totally unknown. Fuzzy logic systems are used to model those unknown functions. To present a finite-time control strategy, a criterion of semiglobal practical...
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
score_0–score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0.016667, 0, 0, 0, 0, 0, 0, 0
A robust adaptive nonlinear control design An adaptive control design procedure for a class of nonlinear systems with both parametric uncertainty and unknown nonlinearities is presented. The unknown nonlinearities lie within some 'bounding functions', which are assumed to be partially known. The key assumption is that the uncertain terms satisfy a 'triangularity condition'. As illustrated by examples, the proposed design procedure expands the class of nonlinear systems for which global adaptive stabilization methods can be applied. The overall adaptive scheme is shown to guarantee global uniform ultimate boundedness.
Prescribed Performance Cooperative Control for Multiagent Systems With Input Quantization. This paper studies the quantized cooperative control problem for multiagent systems with unknown gains in the prescribed performance. Different from the finite-time control, a speed function is designed to realize that the tracking errors converge to a prescribed compact set in a given finite time for multiagent systems. Meanwhile, we consider the problem of unknown gains and input quantization, which can be addressed by using a lemma and Nussbaum function in cooperative control. Moreover, the fuzzy logic systems are proposed to approximate the nonlinear function defined on a compact set. A distributed controller and adaptive laws are constructed based on the Lyapunov stability theory and backstepping method. Finally, the effectiveness of the proposed approach is illustrated by some numerical simulation results.
Fuzzy Adaptive Tracking Control of Wheeled Mobile Robots With State-Dependent Kinematic and Dynamic Disturbances Unlike most works based on pure nonholonomic constraint, this paper proposes a fuzzy adaptive tracking control method for wheeled mobile robots, where unknown slippage occurs and violates the nonholonomic constraint in the form of state-dependent kinematic and dynamic disturbances. These disturbances degrade tracking performance significantly and, therefore, should be compensated. To this end, the kinematics with state-dependent disturbances are rigorously derived based on the general form of slippage in the mobile robots, and fuzzy adaptive observers together with parameter adaptation laws are designed to estimate the state-dependent disturbances in both kinematics and dynamics. Because of the modular structure of the proposed method, it can be easily combined with the previous controllers based on the model with the pure nonholonomic constraint, such that the combination of the fuzzy adaptive observers with the previously proposed backstepping-like feedback linearization controller can guarantee the trajectory tracking errors to be globally ultimately bounded, even when the nonholonomic constraint is violated, and their ultimate bounds can be adjusted appropriately for various types of trajectories in the presence of large initial tracking errors and disturbances. Both the stability analysis and simulation results are provided to validate the proposed controller.
Gateway Framework for In-Vehicle Networks based on CAN, FlexRay and Ethernet This paper proposes a gateway framework for in-vehicle networks based on CAN, FlexRay, and Ethernet. The proposed gateway framework is designed to be easy to reuse and verify, in order to reduce development costs and time. The gateway framework can be configured, and its verification environment is automatically generated by a program with a dedicated graphical user interface. The gateway framework provides state of the art functionalities that include parallel reprogramming, diagnostic routing, network management, dynamic routing update, multiple routing configuration, and security. The proposed gateway framework was developed, and its performance was analyzed and evaluated.
Adaptive neural control for a class of stochastic nonlinear systems by backstepping approach. This paper addresses adaptive neural control for a class of stochastic nonlinear systems which are not in strict-feedback form. Based on the structural characteristics of radial basis function (RBF) neural networks (NNs), a backstepping design approach is extended from stochastic strict-feedback systems to a class of more general stochastic nonlinear systems. In the control design procedure, RBF NNs are used to approximate unknown nonlinear functions and the backstepping technique is utilized to construct the desired controller. The proposed adaptive neural controller guarantees that all the closed-loop signals are bounded and the tracking error converges to a sufficiently small neighborhood of the origin. Two simulation examples are used to illustrate the effectiveness of the proposed approach.
Neural-Network-Based Adaptive Event-Triggered Consensus Control of Nonstrict-Feedback Nonlinear Systems The event-triggered consensus control problem is studied for nonstrict-feedback nonlinear systems with a dynamic leader. Neural networks (NNs) are utilized to approximate the unknown dynamics of each follower and its neighbors. A novel adaptive event-trigger condition is constructed, which depends on the relative output measurement, the NN weights estimations, and the states of each follower. Based on the designed event-trigger condition, an adaptive NN controller is developed by using the backstepping control design technique. In the control design process, the algebraic loop problem is overcome by utilizing the property of NN basis functions and by designing novel adaptive parameter laws of the NN weights. The proposed adaptive NN event-triggered controller does not need continuous communication among neighboring agents, and it can substantially reduce the data communication and the frequency of the controller updates. It is proven that ultimately bounded leader-following consensus is achieved without exhibiting the Zeno behavior. The effectiveness of the theoretical results is verified through simulation studies.
Nonlinear Output Feedback Finite-Time Control for Vehicle Active Suspension Systems In this paper, an output feedback finite-time control method is investigated for stabilizing the perturbed vehicle active suspension system to improve the suspension performance. Since physical suspension systems always exist in the phenomenon of uncertainty or external disturbance, a novel disturbance compensator with finite-time convergence performance is proposed for efficiently compensating the unknown external disturbance. Moreover, the presented compensator is advantageous over the existing ones since it is continuous and can completely remove the matched disturbance. From the viewpoint of practical implementation, continuous control law will not lead to chattering, which is desirable for electrical and mechanical systems. For the nominal suspension system without disturbance, a homogeneous controller with a simple filter is constructed to achieve a finite-time convergence property, where the filter is applied to obtain the unknown velocity signal. Thus, the nominal controller combines a disturbance compensator into an overall continuous control law, which provides two independent parts with a separate design unit and a high flexibility for selecting the control gains. According to the geometric homogeneity and finite-time separation principle, it can be shown that the active suspension is finite-time stabilized. A designed example is given to illustrate the effectiveness of the presented controller for improving the vehicle ride performance.
Adaptive compensation for actuator failures with event-triggered input. In this paper, we study the problem of event-triggered control for a class of uncertain nonlinear systems subject to actuator failures. The actuator failures are allowed to be unknown and the total number of failures could be infinite. To reduce the communication burden from the controller to the actuator, a novel event-triggered control law is designed. It is proved through Lyapunov analyses that the proposed control protocol ensures that all the signals of the closed-loop system are globally bounded and the system output tracking error can exponentially converge to a residual which can be made arbitrarily small.
Leader-Following Consensus for a Class of Nonlinear Strict-Feedback Multiagent Systems With State Time-Delays. This paper studies the leader-following consensus problem for a class of strict-feedback multiagent systems with unknown nonlinearities and state time-delays under directed topology. By using the backstepping technique, an adaptive consensus control protocol is proposed, where neural networks are employed to neutralize uncertain nonlinearities. To eliminate the effects of time-delays, Lyapunov-Kra...
Switching LPV control designs using multiple parameter-dependent Lyapunov functions In this paper we study the switching control of linear parameter-varying (LPV) systems using multiple parameter-dependent Lyapunov functions to improve performance and enhance control design flexibility. A family of LPV controllers is designed, each suitable for a specific parameter subregion. They are switched so that the closed-loop system remains stable and its performance is optimized. Two switching logics, hysteresis switching and switching with average dwell time, are examined. The control synthesis conditions for both switching logics are formulated as matrix optimization problems, which are generally non-convex but can be convexified under some simplifying assumptions. The hysteresis switching LPV control scheme is then applied to an active magnetic bearing problem.
Enabling Public Auditability and Data Dynamics for Storage Security in Cloud Computing Cloud Computing has been envisioned as the next-generation architecture of IT Enterprise. It moves the application software and databases to the centralized large data centers, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about many new security challenges, which have not been well understood. This work studies the problem of ensuring the integrity of data storage in Cloud Computing. In particular, we consider the task of allowing a third party auditor (TPA), on behalf of the cloud client, to verify the integrity of the dynamic data stored in the cloud. The introduction of TPA eliminates the involvement of the client through the auditing of whether his data stored in the cloud are indeed intact, which can be important in achieving economies of scale for Cloud Computing. The support for data dynamics via the most general forms of data operation, such as block modification, insertion, and deletion, is also a significant step toward practicality, since services in Cloud Computing are not limited to archive or backup data only. While prior works on ensuring remote data integrity often lacks the support of either public auditability or dynamic data operations, this paper achieves both. We first identify the difficulties and potential security problems of direct extensions with fully dynamic data updates from prior works and then show how to construct an elegant verification scheme for the seamless integration of these two salient features in our protocol design. In particular, to achieve efficient data dynamics, we improve the existing proof of storage models by manipulating the classic Merkle Hash Tree construction for block tag authentication. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signature to extend our main result into a multiuser setting, where TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis show that the proposed schemes are highly efficient and provably secure.
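The Merkle Hash Tree used for block-tag authentication is easy to sketch with hashlib. The snippet below shows only root computation, inclusion proofs, and verification over a handful of toy blocks; it is not the paper's public-auditing or dynamic-update protocol, and the odd-level duplication rule is one common convention.

```python
"""Minimal Merkle hash tree over data blocks (hashlib sketch)."""
import hashlib

H = lambda b: hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [H(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                       # duplicate the last node on odd levels
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes needed to recompute the root for leaves[index]."""
    level, proof = [H(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))   # (hash, sibling_is_left)
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    h = H(leaf)
    for sibling, sibling_is_left in proof:
        h = H(sibling + h) if sibling_is_left else H(h + sibling)
    return h == root

blocks = [f"block-{i}".encode() for i in range(5)]
root = merkle_root(blocks)
proof = inclusion_proof(blocks, 3)
print(verify(blocks[3], proof, root))   # True
```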
Block-Based and Multi-Resolution Methods for Ear Recognition Using Wavelet Transform and Uniform Local Binary Patterns This paper proposes a novel method based on the Haar wavelet transform and uniform local binary patterns (ULBPs) to recognize ear images. Firstly, ear images are decomposed by the Haar wavelet transform. Then ULBPs are combined with block-based and multi-resolution methods to jointly describe the texture features of the ear sub-images produced by the Haar wavelet transform. Finally, the texture features are classified by the nearest neighbor method. Experimental results show that the Haar wavelet transform can effectively boost the intensity information of texture units. Using ULBPs to extract texture features is not only fast but also robust. The recognition rates of the proposed method remarkably outperform those of classic PCA or KPCA, especially when the block-based and multi-resolution methods are combined.
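One plausible reading of this pipeline (a Haar DWT, then block-wise uniform-LBP histograms on the approximation sub-band) is sketched below with pywt and scikit-image. The block size, LBP parameters, and the decision to use only the approximation band are assumptions for illustration, not the authors' exact settings.

```python
"""Haar-wavelet + uniform-LBP texture descriptor (pywt / scikit-image sketch)."""
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

def ear_descriptor(img, block=16, P=8, R=1.0):
    # Level-1 Haar decomposition; keep the low-frequency approximation sub-band.
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")
    cA = np.rint(cA).astype(np.uint16)                        # integer image for LBP
    lbp = local_binary_pattern(cA, P, R, method="uniform")    # values in [0, P+1]
    n_bins = P + 2
    feats = []
    for i in range(0, lbp.shape[0] - block + 1, block):       # block-based histograms
        for j in range(0, lbp.shape[1] - block + 1, block):
            h, _ = np.histogram(lbp[i:i + block, j:j + block],
                                bins=n_bins, range=(0, n_bins))
            feats.append(h / h.sum())
    return np.concatenate(feats)

img = np.random.randint(0, 256, size=(128, 96)).astype(np.uint8)
print(ear_descriptor(img).shape)   # concatenated per-block ULBP histograms
```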
Generative adversarial networks: introduction and outlook Recently, generative adversarial networks (GANs) have become a research focus of artificial intelligence. Inspired by the two-player zero-sum game, GANs comprise a generator and a discriminator, both trained under the adversarial learning idea. The goal of GANs is to estimate the potential distribution of real data samples and generate new samples from that distribution. Since their initia...
Myoelectric or Force Control? A Comparative Study on a Soft Arm Exosuit The intention-detection strategy used to drive an exosuit is fundamental to evaluate the effectiveness and acceptability of the device. Yet, current literature on wearable soft robotics lacks evidence on the comparative performance of different control approaches for online intention-detection. In the present work, we compare two different and complementary controllers on a wearable robotic suit, previously formulated and tested by our group; a model-based myoelectric control (myoprocessor), which estimates the joint torque from the activation of target muscles, and a force control that estimates human torques using an inverse dynamics model (dynamic arm). We test them on a cohort of healthy participants performing tasks replicating functional activities of daily living involving a wide range of dynamic movements. Our results suggest that both controllers are robust and effective in detecting human–motor interaction, and show comparable performance for augmenting muscular activity. In particular, the biceps brachii activity was reduced by up to 74% under the assistance of the dynamic arm and up to 47% under the myoprocessor, compared to a no-suit condition. However, the myoprocessor outperformed the dynamic arm in promptness and assistance during movements that involve high dynamics. The exosuit work normalized with respect to the overall work was 68.84 ± 3.81% when it was run by the myoprocessor, compared to 45.29 ± 7.71% during the dynamic arm condition. The reliability and accuracy of motor intention detection strategies in wearable devices is paramount for both the efficacy and acceptability of this technology. In this article, we offer a detailed analysis of the two most widely used control approaches, trying to highlight their intrinsic structural differences and to discuss their different and complementary performance.
Scores: 1.040789, 0.04, 0.04, 0.04, 0.04, 0.04, 0.021333, 0.006667, 0.001333, 0, 0, 0, 0, 0
Enhanced Object Detection With Deep Convolutional Neural Networks for Advanced Driving Assistance Object detection is a critical problem for advanced driving assistance systems (ADAS). Recently, convolutional neural networks (CNN) achieved large successes on object detection, with performance improvement over traditional approaches, which use hand-engineered features. However, due to the challenging driving environment (e.g., large object scale variation, object occlusion, and bad light conditions), popular CNN detectors do not achieve very good object detection accuracy over the KITTI autonomous driving benchmark dataset. In this paper, we propose three enhancements for CNN-based visual object detection for ADAS. To address the large object scale variation challenge, deconvolution and fusion of CNN feature maps are proposed to add context and deeper features for better object detection at low feature map scales. In addition, soft non-maximal suppression (NMS) is applied across object proposals at different feature scales to address the object occlusion challenge. As the cars and pedestrians have distinct aspect ratio features, we measure their aspect ratio statistics and exploit them to set anchor boxes properly for better object matching and localization. The proposed CNN enhancements are evaluated with various image input sizes by experiments over KITTI dataset. The experimental results demonstrate the effectiveness of the proposed enhancements with good detection performance over KITTI test set.
Generative Adversarial Networks for Parallel Transportation Systems. Generative Adversarial Networks (GANs) have emerged as a promising and effective mechanism for machine learning due to their recent successful applications. GANs share the same idea of producing, testing, acquiring, and utilizing data as well as knowledge based on artificial systems, computational experiments, and parallel execution of actual and virtual scenarios, as outlined in the theory of parall...
Deep Multi-Modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges Recent advancements in perception for autonomous driving are driven by deep learning. In order to achieve robust and accurate scene understanding, autonomous vehicles are usually equipped with different sensors (e.g. cameras, LiDARs, Radars), and multiple sensing modalities can be fused to exploit their complementary properties. In this context, many methods have been proposed for deep multi-modal perception problems. However, there is no general guideline for network architecture design, and questions of “what to fuse”, “when to fuse”, and “how to fuse” remain open. This review paper attempts to systematically summarize methodologies and discuss challenges for deep multi-modal object detection and semantic segmentation in autonomous driving. To this end, we first provide an overview of on-board sensors on test vehicles, open datasets, and background information for object detection and semantic segmentation in autonomous driving research. We then summarize the fusion methodologies and discuss challenges and open questions. In the appendix, we provide tables that summarize topics and methods. We also provide an interactive online platform to navigate each reference: https://boschresearch.github.io/multimodalperception/.
VTGNet: A Vision-Based Trajectory Generation Network for Autonomous Vehicles in Urban Environments Traditional methods for autonomous driving are implemented with many building blocks from perception, planning and control, making them difficult to generalize to varied scenarios due to complex assumptions and interdependencies. Recently, the end-to-end driving method has emerged, which performs well and generalizes to new environments by directly learning from expert-provided data. However, many...
Traffic Flow Imputation Using Parallel Data and Generative Adversarial Networks Traffic data imputation is critical for both research and applications of intelligent transportation systems. To develop traffic data imputation models with high accuracy, traffic data must be large and diverse, which is costly. An alternative is to use synthetic traffic data, which is cheap and easy to access. In this paper, we propose a novel approach using parallel data and generative adversarial networks (GANs) to enhance traffic data imputation. Parallel data is a recently proposed method of using synthetic and real data for data mining and data-driven processes, in which we apply GANs to generate synthetic traffic data. As it is difficult for the standard GAN algorithm to generate time-dependent traffic flow data, we made two modifications: 1) using the real data, or corrupted versions of it, instead of random vectors as latent codes for the generator within the GAN and 2) introducing a representation loss to measure the discrepancy between the synthetic data and the real data. The experimental results on a real traffic dataset demonstrate that our method can significantly improve the performance of traffic data imputation.
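The two modifications described above (feeding corrupted real series to the generator instead of random noise, and adding a representation loss) can be sketched roughly as follows. This is only an illustrative PyTorch sketch under assumed shapes and weights: the 288-sample day, the layer sizes, and the weight lam are hypothetical, and the discriminator's own update step is omitted.

```python
import torch
import torch.nn as nn

D_IN = 288                                   # hypothetical: one day of 5-minute flow readings
G = nn.Sequential(nn.Linear(D_IN, 512), nn.ReLU(), nn.Linear(512, D_IN))   # generator
D = nn.Sequential(nn.Linear(D_IN, 512), nn.ReLU(), nn.Linear(512, 1))      # discriminator
bce = nn.BCEWithLogitsLoss()

def generator_loss(x_real, mask, lam=10.0):
    """Generator objective: adversarial term plus a representation (reconstruction) term.
    mask == 0 marks missing readings; the corrupted series, not noise, is the latent code."""
    x_corrupt = x_real * mask
    x_imputed = G(x_corrupt)
    adv = bce(D(x_imputed), torch.ones(x_real.size(0), 1))   # try to fool the discriminator
    rep = torch.mean((x_imputed - x_real) ** 2)              # discrepancy to the real data
    return adv + lam * rep
```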
ParaUDA: Invariant Feature Learning With Auxiliary Synthetic Samples for Unsupervised Domain Adaptation Recognizing and locating objects by algorithms are essential and challenging issues for Intelligent Transportation Systems. However, the increasing demand for large amounts of labeled data hinders the further application of deep learning-based object detection. One of the optimal solutions is to train the target model with an existing dataset and then adapt it to new scenes, namely Unsupervised Domain Adaptation (UDA). However, most existing methods at the pixel level mainly focus on adapting the model from the source domain to the target domain and ignore the essence of UDA, which is to learn domain-invariant features. Meanwhile, almost all methods at the feature level fail to match conditional distributions while conducting feature alignment between the source and target domains. Considering these problems, this paper proposes ParaUDA, a novel framework for learning invariant representations for UDA at two levels: the pixel level and the feature level. At the pixel level, we adopt CycleGAN to conduct domain transfer and convert the original unsupervised domain adaptation problem into a supervised domain adaptation problem. At the feature level, we adopt an adversarial adaptation model to learn domain-invariant representations by aligning the distributions of domains between different image pairs with the same mixture distributions. We evaluate our proposed framework in different scenes, from synthetic scenes to real scenes, from normal weather to challenging weather, and across cameras. The results of all the above experiments show that ParaUDA is effective and robust for adapting object detection models from source scenes to target scenes.
China's 12-Year Quest of Autonomous Vehicular Intelligence: The Intelligent Vehicles Future Challenge Program In this article, we introduce the Intelligent Vehicles Future Challenge of China (IVFC), which has lasted 12 years. Some key features of the tests and a few interesting findings of IVFC are selected and presented. Through the IVFCs held between 2009 and 2020, we gradually established a set of theories, methods, and tools to collect tests' data and efficiently evaluate the performance of autonomous vehicles so that we could learn how to improve both the autonomous vehicles and the testing system itself.
Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
A comparative study of texture measures with classification based on featured distributions This paper evaluates the performance both of some texture measures which have been successfully used in various applications and of some new promising approaches proposed recently. For classification a method based on Kullback discrimination of sample and prototype distributions is used. The classification results for single features with one-dimensional feature value distributions and for pairs of complementary features with two-dimensional distributions are presented
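The classification rule described above compares a sample's feature-value distribution against class prototype distributions. A minimal nearest-prototype sketch using the Kullback discrimination (KL divergence) is given below; the prototypes dict and the small eps smoothing term are assumptions for illustration, not the paper's exact statistic.

```python
import numpy as np

def kullback_discrimination(sample, prototype, eps=1e-12):
    """Discrimination of a sample histogram against a prototype (model) histogram."""
    s = np.asarray(sample, dtype=float)
    s = s / s.sum()
    p = np.asarray(prototype, dtype=float)
    p = p / p.sum()
    return float(np.sum(s * np.log((s + eps) / (p + eps))))

def classify(sample_hist, prototypes):
    """Assign the class whose prototype distribution is least 'surprising' for the sample.
    prototypes: dict mapping class label -> prototype histogram."""
    return min(prototypes, key=lambda label: kullback_discrimination(sample_hist, prototypes[label]))
```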
Social Perception and Steering for Online Avatars This paper presents work on a new platform for producing realistic group conversation dynamics in shared virtual environments. An avatar, representing users, should perceive the surrounding social environment just as humans would, and use the perceptual information for driving low level reactive behaviors. Unconscious reactions serve as evidence of life, and can also signal social availability and spatial awareness to others. These behaviors get lost when avatar locomotion requires explicit user control. For automating such behaviors we propose a steering layer in the avatars that manages a set of prioritized behaviors executed at different frequencies, which can be activated or deactivated and combined together. This approach gives us enough flexibility to model the group dynamics of social interactions as a set of social norms that activate relevant steering behaviors. A basic set of behaviors is described for conversations, some of which generate a social force field that makes the formation of conversation groups fluidly adapt to external and internal noise, through avatar repositioning and reorientations. The resulting social group behavior appears relatively robust, but perhaps more importantly, it starts to bring a new sense of relevance and continuity to the virtual bodies that often get separated from the ongoing conversation in the chat window.
Node Reclamation and Replacement for Long-Lived Sensor Networks When deployed for long-term tasks, the energy required to support sensor nodes' activities is far more than the energy that can be preloaded in their batteries. No matter how the battery energy is conserved, once the energy is used up, the network life terminates. Therefore, guaranteeing long-term energy supply has persisted as a big challenge. To address this problem, we propose a node reclamation and replacement (NRR) strategy, with which a mobile robot or human labor called mobile repairman (MR) periodically traverses the sensor network, reclaims nodes with low or no power supply, replaces them with fully charged ones, and brings the reclaimed nodes back to an energy station for recharging. To effectively and efficiently realize the strategy, we develop an adaptive rendezvous-based two-tier scheduling scheme (ARTS) to schedule the replacement/reclamation activities of the MR and the duty cycles of nodes. Extensive simulations have been conducted to verify the effectiveness and efficiency of the ARTS scheme.
Haptic feedback for enhancing realism of walking simulations. In this paper, we describe several experiments whose goal is to evaluate the role of plantar vibrotactile feedback in enhancing the realism of walking experiences in multimodal virtual environments. To achieve this goal we built an interactive and a noninteractive multimodal feedback system. While during the use of the interactive system subjects physically walked, during the use of the noninteractive system the locomotion was simulated while subjects were sitting on a chair. In both the configurations subjects were exposed to auditory and audio-visual stimuli presented with and without the haptic feedback. Results of the experiments provide a clear preference toward the simulations enhanced with haptic feedback showing that the haptic channel can lead to more realistic experiences in both interactive and noninteractive configurations. The majority of subjects clearly appreciated the added feedback. However, some subjects found the added feedback unpleasant. This might be due, on one hand, to the limits of the haptic simulation and, on the other hand, to the different individual desire to be involved in the simulations. Our findings can be applied to the context of physical navigation in multimodal virtual environments as well as to enhance the user experience of watching a movie or playing a video game.
Vehicular Sensing Networks in a Smart City: Principles, Technologies and Applications. Given the escalating population across the globe, it has become paramount to construct smart cities, aiming for improving the management of urban flows relying on efficient information and communication technologies (ICT). Vehicular sensing networks (VSNs) play a critical role in maintaining the efficient operation of smart cities. Naturally, there are numerous challenges to be solved before the w...
Dual-objective mixed integer linear program and memetic algorithm for an industrial group scheduling problem Group scheduling problems have attracted much attention owing to their many practical applications. This work proposes a new bi-objective serial-batch group scheduling problem considering the constraints of sequence-dependent setup time, release time, and due time. It originates from an important industrial process, i.e., the wire rod and bar rolling process in steel production systems. Two objecti...
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0.006897, 0, 0, 0, 0, 0, 0
A soft exoskeleton for hand assistive and rehabilitation application using pneumatic actuators with variable stiffness In this paper, we present the design of a soft wearable exoskeleton that comprises a glove embedded with pneumatic actuators of variable stiffness for hand assistive and rehabilitation application. The device is lightweight and easily wearable due to the usage of soft pneumatic actuators. A key feature of the device is the variable stiffness of the actuators at different localities, which not only conforms to the finger profile during actuation but also provides customizability for different hand dimensions. The actuators can achieve different bending profiles with variable stiffness implemented at different localities. Therefore, the device is able to perform different hand therapy exercises such as full fist, straight fist, hook fist and table top. The device was characterized in terms of its range of motion and maximum force output. Experiments were conducted to examine the differences between active and passive actuation. The results showed that the device could achieve hand grasping and pinching with acceptable range of motion and force.
Exoskeletons for human power augmentation The first load-bearing and energetically autonomous exoskeleton, called the Berkeley Lower Extremity Exoskeleton (BLEEX) walks at the average speed of two miles per hour while carrying 75 pounds of load. The project, funded in 2000 by the Defense Advanced Research Project Agency (DARPA) tackled four fundamental technologies: the exoskeleton architectural design, a control algorithm, a body LAN to host the control algorithm, and an on-board power unit to power the actuators, sensors and the computers. This article gives an overview of the BLEEX project.
Sensing pressure distribution on a lower-limb exoskeleton physical human-machine interface. A sensory apparatus to monitor pressure distribution on the physical human-robot interface of lower-limb exoskeletons is presented. We propose a distributed measure of the interaction pressure over the whole contact area between the user and the machine as an alternative measurement method of human-robot interaction. To obtain this measure, an array of newly-developed soft silicone pressure sensors is inserted between the limb and the mechanical interface that connects the robot to the user, in direct contact with the wearer's skin. Compared to state-of-the-art measures, the advantage of this approach is that it allows for a distributed measure of the interaction pressure, which could be useful for the assessment of safety and comfort of human-robot interaction. This paper presents the new sensor and its characterization, and the development of an interaction measurement apparatus, which is applied to a lower-limb rehabilitation robot. The system is calibrated, and an example of its use during a prototypical gait training task is presented.
A soft wearable robotic device for active knee motions using flat pneumatic artificial muscles We present the design of a soft wearable robotic device composed of elastomeric artificial muscle actuators and soft fabric sleeves, for active assistance of knee motions. A key feature of the device is the two-dimensional design of the elastomer muscles that not only allows the compactness of the device, but also significantly simplifies the manufacturing process. In addition, the fabric sleeves make the device lightweight and easily wearable. The elastomer muscles were characterized and demonstrated an initial contraction force of 38N and maximum contraction of 18mm with 104kPa input pressure, approximately. Four elastomer muscles were employed for assisted knee extension and flexion. The robotic device was tested on a 3D printed leg model with an articulated knee joint. Experiments were conducted to examine the relation between systematic change in air pressure and knee extension-flexion. The results showed maximum extension and flexion angles of 95° and 37°, respectively. However, these angles are highly dependent on underlying leg mechanics and positions. The device was also able to generate maximum extension and flexion forces of 3.5N and 7N, respectively.
Eyes are faster than hands: A soft wearable robot learns user intention from the egocentric view. To perceive user intentions for wearable robots, we present a learning-based intention detection methodology using a first-person-view camera.
Development of muscle suit for upper limb We have been developing a "muscle suit" that provides muscular support to the paralyzed or those otherwise unable to move unaided, as well as to manual workers. The muscle suit is a garment without a metal frame and uses a McKibben actuator driven by compressed air. Because actuators are sewn into the garment, no metal frame is needed, making the muscle suit very light and cheap. With the muscle suit, the patient can willfully control his or her movement. The muscle suit is very helpful for both muscular and emotional support. We propose an armor-type muscle suit in order to overcome issues of a prototype system and then show how abduction motion, which we believe, is the most difficult motion for the upper body, is realized.
Power Assist System HAL-3 for Gait Disorder Person We have developed the power assistive suit HAL (Hybrid Assistive Leg), which provides self-walking aid for persons with gait disorders or aged persons. In this paper, we introduce the HAL-3 system, improving on the HAL-1 and HAL-2 systems which we had developed previously. The EMG signal was used as the input information of the power assist controller. We propose a calibration method to identify the parameters which relate the EMG to joint torque by using HAL-3. We could obtain suitable torque estimates from the EMG and realize an apparatus that enables power to be used for walking and standing up according to the intention of the operator.
Image quality assessment: from error visibility to structural similarity. Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
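For reference, the index combines luminance, contrast and structure comparisons into a single expression. The sketch below computes a single-window (global-statistics) version of that expression; the published index averages this quantity over local sliding windows, and the stabilising constants use the commonly quoted K1 = 0.01, K2 = 0.03 defaults.

```python
import numpy as np

def global_ssim(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """SSIM computed from global image statistics (one window covering the whole image)."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```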
Theory and Experiment on Formation-Containment Control of Multiple Multirotor Unmanned Aerial Vehicle Systems. Formation-containment control problems for multiple multirotor unmanned aerial vehicle (UAV) systems with directed topologies are studied, where the states of leaders form desired formation and the states of followers converge to the convex hull spanned by those of the leaders. First, formation-containment protocols are constructed based on the neighboring information of UAVs. Then, sufficient con...
Response time in man-computer conversational transactions The literature concerning man-computer transactions abounds in controversy about the limits of "system response time" to a user's command or inquiry at a terminal. Two major semantic issues prohibit resolving this controversy. One issue centers around the question of "Response time to what?" The implication is that different human purposes and actions will have different acceptable or useful response times.
Human Shoulder Modeling Including Scapulo-Thoracic Constraint And Joint Sinus Cones In virtual human modeling, the shoulder is usually composed of clavicular, scapular and arm segments related by rotational joints. Although the model is improved, the realistic animation of the shoulder is hardly achieved. This is due to the fact that it is difficult to coordinate the simultaneous motion of the shoulder components in a consistent way. Also, the common use of independent one-degree of freedom (DOF) joint hierarchies does not properly render the 3-D accessibility space of real joints. On the basis of former biomechanical investigations, we propose here an extended shoulder model including scapulo-thoracic constraint and joint sinus cones. As a demonstration, the model is applied, using inverse kinematics, to the animation of a 3-D anatomic muscled skeleton model.
Stable fuzzy logic control of a general class of chaotic systems This paper proposes a new approach to the stable design of fuzzy logic control systems that deal with a general class of chaotic processes. The stable design is carried out on the basis of a stability analysis theorem, which employs Lyapunov's direct method and the separate stability analysis of each rule in the fuzzy logic controller (FLC). The stability analysis theorem offers sufficient conditions for the stability of a general class of chaotic processes controlled by Takagi---Sugeno---Kang FLCs. The approach suggested in this paper is advantageous because inserting a new rule requires the fulfillment of only one of the conditions of the stability analysis theorem. Two case studies concerning the fuzzy logic control of representative chaotic systems that belong to the general class of chaotic systems are included in order to illustrate our stable design approach. A set of simulation results is given to validate the theoretical results.
A blind medical image watermarking: DWT-SVD based robust and secure approach for telemedicine applications. In this paper, a blind image watermarking scheme based on discrete wavelet transform (DWT) and singular value decomposition (SVD) is proposed. In this scheme, DWT is applied to the ROI (region of interest) of the medical image to get different frequency subbands of its wavelet decomposition. On the low-frequency subband LL of the ROI, block-SVD is applied to get different singular matrices. A pair of elements with similar values is identified from the left singular value matrix of these selected blocks. The values of these pairs are modified using a certain threshold to embed a bit of watermark content. An appropriate threshold is chosen to achieve the imperceptibility and robustness of the medical image and watermark contents, respectively. For authentication and identification of the original medical image, one watermark image (logo) and one text watermark have been used. The watermark image provides authentication, whereas the text data represent the electronic patient record (EPR) for identification. At the receiving end, blind recovery of both watermark contents is performed by a comparison scheme similar to the one used during the embedding process. The proposed algorithm is applied to various groups of medical images like X-ray, CT scan and mammography. This scheme offers better visual quality of the watermarked image and better recovery of the watermark content due to the DWT-SVD combination. Moreover, the use of a Hamming error correcting code (ECC) on the EPR text bits reduces the BER and thus provides better recovery of the EPR. The performance of the proposed algorithm with EPR data coded by the Hamming code is compared with the BCH error correcting code, and it is found that the latter performs better. A result analysis shows that the imperceptibility of the watermarked image is better, as the PSNR is above 43 dB and the WPSNR is above 52 dB for all sets of images. In addition, the robustness of the scheme is better than that of the existing scheme for a similar set of medical images in terms of normalized correlation coefficient (NCC) and bit-error-rate (BER). An analysis is also carried out to verify the performance of the proposed scheme for different sizes of watermark contents (image and EPR data). It is observed from the analysis that the proposed scheme is also appropriate for watermarking of color images. Using the proposed scheme, watermark contents are extracted successfully under various noise attacks like JPEG compression, filtering, Gaussian noise, salt-and-pepper noise, cropping and rotation. Performance comparison of the proposed scheme with existing schemes shows that the proposed scheme has better robustness against different types of attacks. Moreover, the proposed scheme is also robust under a set of benchmark attacks known as checkmark attacks.
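One way to read the embedding rule above is: take a block of the LL subband, compute its SVD, and push two similar entries of the left singular matrix apart (or together) by a threshold-sized margin depending on the bit. The sketch below illustrates that idea; the chosen entries U[1,0] and U[2,0], the square block size and the threshold value are illustrative assumptions rather than the paper's exact parameters, and the DWT step (e.g. pywt.dwt2(roi, 'haar')) is only indicated in a comment.

```python
import numpy as np

def embed_bit(block, bit, threshold=0.04):
    """Embed one bit in a square LL-subband block (e.g. 4x4) by enforcing an ordering,
    with a margin of `threshold`, between two similar entries of the left singular matrix."""
    U, S, Vt = np.linalg.svd(block)
    a, b = abs(U[1, 0]), abs(U[2, 0])
    avg = (a + b) / 2.0
    sa = 1.0 if U[1, 0] >= 0 else -1.0
    sb = 1.0 if U[2, 0] >= 0 else -1.0
    hi, lo = avg + threshold / 2.0, avg - threshold / 2.0
    U[1, 0] = sa * (hi if bit else lo)
    U[2, 0] = sb * (lo if bit else hi)
    return U @ np.diag(S) @ Vt

def extract_bit(block):
    """Blind extraction: recover the bit by comparing the same pair of entries."""
    U, _, _ = np.linalg.svd(block)
    return int(abs(U[1, 0]) >= abs(U[2, 0]))

# The LL subband itself would come from a one-level DWT of the region of interest,
# e.g. with PyWavelets:  LL, (LH, HL, HH) = pywt.dwt2(roi, 'haar')
```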
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores: 1.213778, 0.213778, 0.213778, 0.213778, 0.213778, 0.106889, 0.023774, 0, 0, 0, 0, 0, 0, 0
Parallel Transportation Systems: Toward IoT-Enabled Smart Urban Traffic Control and Management IoT-driven intelligent transportation systems (ITS) have great potential and capacity to make transportation systems efficient, safe, smart, reliable, and sustainable. The IoT provides the access and driving forces of seamlessly integrating transportation systems from the physical world to the virtual counterparts in the cyber world. In this paper, we present visions and works on integrating the artificial intelligent transportation systems and the real intelligent transportation systems to create and enhance “intelligence” of IoT-enabled ITS. With the increasing ubiquitous and deep sensing capacity of IoT-enabled ITS, we can quickly create artificial transportation systems equivalent to physical transportation systems in computers, and thus have parallel intelligent transportation systems, i.e. the real intelligent transportation systems and artificial intelligent transportation systems. The evolution process of transportation system is studied in the view of the parallel world. We can use a large number of long-term iterative simulation to predict and analyze the expected results of operations. Thus, truly effective and smart ITS can be planned, designed, built, operated and used. The foundation of the parallel intelligent transportation systems is based on the ACP theory, which is composed of artificial societies, computational experiments, and parallel execution. We also present some case studies to demonstrate the effectiveness of parallel transportation systems.
Privacy Enabled Digital Rights Management Without Trusted Third Party Assumption Digital rights management systems are required to provide security and accountability without violating the privacy of the entities involved. However, achieving privacy along with accountability in the same framework is hard as these attributes are mutually contradictory. Thus, most of the current digital rights management systems rely on trusted third parties to provide privacy to the entities involved. However, a trusted third party can become malicious and break the privacy protection of the entities in the system. Hence, in this paper, we propose a novel privacy preserving content distribution mechanism for digital rights management without relying on the trusted third party assumption. We use simple primitives such as blind decryption and one way hash chain to avoid the trusted third party assumption. We prove that our scheme is not prone to the “oracle problem” of the blind decryption mechanism. The proposed mechanism supports access control without degrading user's privacy as well as allows revocation of even malicious users without violating their privacy.
An efficient conditionally anonymous ring signature in the random oracle model A conditionally anonymous ring signature is an exception since the anonymity is conditional. Specifically, it allows an entity to confirm/refute the signature that he generated before. A group signature also shares the same property since a group manager can revoke a signer's anonymity using the trapdoor information. However, no such special node (i.e., group manager) exists in the group, in order to preserve the ad hoc fashion. In this paper, we construct a new conditionally anonymous ring signature, in which the actual signer can be traced without the help of the group manager. A big advantage is that the confirmation and disavowal protocols designed by us are non-interactive with constant costs, while the known schemes suffer from a linear cost in terms of the ring size n or the security parameter s.
Threats to Networking Cloud and Edge Datacenters in the Internet of Things. Several application domains are collecting data using Internet of Things sensing devices and shipping it to remote cloud datacenters for analysis (fusion, storage, and processing). Data analytics activities raise a new set of technical challenges from the perspective of ensuring end-to-end security and privacy of data as it travels from an edge datacenter (EDC) to a cloud datacenter (CDC) (or vice...
Generative Adversarial Networks for Parallel Transportation Systems. Generative Adversarial Networks (GANs) have emerged as a promising and effective mechanism for machine learning due to their recent successful applications. GANs share the same idea of producing, testing, acquiring, and utilizing data as well as knowledge based on artificial systems, computational experiments, and parallel execution of actual and virtual scenarios, as outlined in the theory of parall...
VTGNet: A Vision-Based Trajectory Generation Network for Autonomous Vehicles in Urban Environments Traditional methods for autonomous driving are implemented with many building blocks from perception, planning and control, making them difficult to generalize to varied scenarios due to complex assumptions and interdependencies. Recently, the end-to-end driving method has emerged, which performs well and generalizes to new environments by directly learning from expert-provided data. However, many...
SPDS: A Secure and Auditable Private Data Sharing Scheme for Smart Grid Based on Blockchain The exponential growth of data generated from increasing smart meters and smart appliances brings about huge potentials for more efficient energy production, pricing, and personalized energy services in smart grids. However, it also causes severe concerns due to improper use of individuals' private data, as well as the lack of transparency and auditability for data usage. To bridge this gap, in this article, we propose a secure and auditable private data sharing (SPDS) scheme under data processing-as-a-service mode in smart grid. Specifically, we first present a novel blockchain-based framework for trust-free private data computation and data usage tracking, where smart contracts are employed to specify fine-grained data usage policies (i.e., who can access what kinds of data, for what purposes, at what price) while the distributed ledgers keep an immutable and transparent record of data usage. A trusted execution environment based off-chain smart contract execution mechanism is exploited as well to process confidential user datasets and relieve the computation overhead in blockchain systems. A two-phase atomic delivery protocol is designed to ensure the atomicity of data transactions in computing result release and payment. Furthermore, based on contract theory, the optimal contracts are designed under information asymmetry to stimulate user's participation and high-quality data sharing while optimizing the payoff of the energy service provider. Extensive simulation results demonstrate that the proposed SPDS can effectively improve the payoffs of participants, compared with conventional schemes.
Speed and Accuracy Tradeoff for LiDAR Data Based Road Boundary Detection Road boundary detection is essential for autonomous vehicle localization and decision-making, especially under GPS signal loss and lane discontinuities. For road boundary detection in structural environments, obstacle occlusions and large road curvature are two significant challenges. However, an effective and fast solution for these problems has remained elusive. To solve these problems, a speed ...
Crowd sensing of traffic anomalies based on human mobility and social media The advances in mobile computing and social networking services enable people to probe the dynamics of a city. In this paper, we address the problem of detecting and describing traffic anomalies using crowd sensing with two forms of data, human mobility and social media. Traffic anomalies are caused by accidents, control, protests, sport events, celebrations, disasters and other events. Unlike existing traffic-anomaly-detection methods, we identify anomalies according to drivers' routing behavior on an urban road network. Here, a detected anomaly is represented by a sub-graph of a road network where drivers' routing behaviors significantly differ from their original patterns. We then try to describe the detected anomaly by mining representative terms from the social media that people posted when the anomaly happened. The system for detecting such traffic anomalies can benefit both drivers and transportation authorities, e.g., by notifying drivers approaching an anomaly and suggesting alternative routes, as well as supporting traffic jam diagnosis and dispersal. We evaluate our system with a GPS trajectory dataset generated by over 30,000 taxicabs over a period of 3 months in Beijing, and a dataset of tweets collected from WeiBo, a Twitter-like social site in China. The results demonstrate the effectiveness and efficiency of our system.
On the History of the Minimum Spanning Tree Problem It is standard practice among authors discussing the minimum spanning tree problem to refer to the work of Kruskal(1956) and Prim (1957) as the sources of the problem and its first efficient solutions, despite the citation by both of Boruvka (1926) as a predecessor. In fact, there are several apparently independent sources and algorithmic solutions of the problem. They have appeared in Czechoslovakia, France, and Poland, going back to the beginning of this century. We shall explore and compare these works and their motivations, and relate them to the most recent advances on the minimum spanning tree problem.
Who is Afraid of the Humanoid? Investigating Cultural Differences in the Acceptance of Robots
A distributed event-triggered transmission strategy for sampled-data consensus of multi-agent systems. This paper is concerned with event-triggered sampled-data consensus for distributed multi-agent systems with directed graph. A novel distributed event-triggered sampled-data transmission strategy is proposed, which allows the event-triggering condition to be intermittently examined at constant sampling instants. Based on this novel strategy, a sampled-data consensus control protocol is presented, with which the consensus of distributed multi-agent systems can be transformed into the stability of a system with a time-varying delay. Then, a sufficient condition on the consensus of the multi-agent system is derived. Correspondingly, a co-design algorithm for obtaining both the parameters of the distributed event-triggered transmission strategy and the consensus controller gain is proposed. Two numerical examples are given to show the effectiveness of the proposed method.
Digital watermarking: Applicability for developing trust in medical imaging workflows state of the art review. Medical images can be intentionally or unintentionally manipulated both within the secure medical system environment and outside, as images are viewed, extracted and transmitted. Many organisations have invested heavily in Picture Archiving and Communication Systems (PACS), which are intended to facilitate data security. However, it is common for images, and records, to be extracted from these for a wide range of accepted practices, such as external second opinion, transmission to another care provider, patient data request, etc. Therefore, confirming trust within medical imaging workflows has become essential. Digital watermarking has been recognised as a promising approach for ensuring the authenticity and integrity of medical images. Authenticity refers to the ability to identify the information origin and prove that the data relates to the right patient. Integrity means the capacity to ensure that the information has not been altered without authorisation.
Shared control of highly automated vehicles using steer-by-wire systems A shared control of a highly automated Steer-by-Wire system is proposed for cooperative driving between the driver and vehicle in the face of the driver's abnormal driving. A fault detection scheme is designed to detect the abnormal driving behaviour and transfer the control of the car to the automatic system designed based on a fault tolerant model predictive control (MPC) controller driving the vehicle along an optimal safe path. The proposed concept and control algorithm are tested in a number of scenarios representing intersection, lane change and different types of the driver's abnormal behaviour. The simulation results show the feasibility and effectiveness of the proposed method.
Scores: 1.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.05, 0, 0, 0, 0, 0, 0
Scalable Cell-Free Massive MIMO Systems Imagine a coverage area with many wireless access points that cooperate to jointly serve the users, instead of creating autonomous cells. Such a cell-free network operation can potentially resolve many of the interference issues that appear in current cellular networks. This ambition was previously called Network MIMO (multiple-input multiple-output) and has recently reappeared under the name Cell-Free Massive MIMO. The main challenge is to achieve the benefits of cell-free operation in a practically feasible way, with computational complexity and fronthaul requirements that are scalable to large networks with many users. We propose a new framework for scalable Cell-Free Massive MIMO systems by exploiting the dynamic cooperation cluster concept from the Network MIMO literature. We provide a novel algorithm for joint initial access, pilot assignment, and cluster formation that is proved to be scalable. Moreover, we adapt the standard channel estimation, precoding, and combining methods to become scalable. A new uplink and downlink duality is proved and used to heuristically design the precoding vectors on the basis of the combining vectors. Interestingly, the proposed scalable precoding and combining outperform conventional maximum ratio (MR) processing and also perform close to the best unscalable alternatives.
Cell-Free Massive MIMO: A New Next-Generation Paradigm. Cell-free (CF) massive multiple-input-multiple-output (MIMO) systems have a large number of individually controllable antennas distributed over a wide area for simultaneously serving a small number of user equipments (UEs). This solution has been considered as a promising next-generation technology due to its ability to offer a similar quality of service to all UEs despite its low-complexity signal processing. In this paper, we provide a comprehensive survey of CF massive MIMO systems. To be more specific, the benefit of the so-called channel hardening and the favorable propagation conditions are exploited. Furthermore, we quantify the advantages of CF massive MIMO systems in terms of their energy- and cost-efficiency. Additionally, the signal processing techniques invoked for reducing the fronthaul burden for joint channel estimation and for transmit precoding are analyzed. Finally, the open research challenges in both its deployment and network management are highlighted.
Centralized And Distributed Power Allocation For Max-Min Fairness In Cell-Free Massive Mimo Cell-free Massive MIMO systems consist of a large number of geographically distributed access points (APs) that serve users by coherent joint transmission. Downlink power allocation is important in these systems, to determine which APs should transmit to which users and with what power. If the system is implemented correctly, it can deliver a more uniform user performance than conventional cellular networks. To this end, previous works have shown how to perform system-wide max-min fairness power allocation when using maximum ratio precoding. In this paper, we first generalize this method to arbitrary precoding, and then train a neural network to perform approximately the same power allocation but with reduced computational complexity. Finally, we train one neural network per AP to mimic system-wide max-min fairness power allocation, but using only local information. By learning the structure of the local propagation environment, this method outperforms the state-of-the-art distributed power allocation method from the Cell-free Massive MIMO literature.
Deep Reinforcement Learning for Energy-Efficient Beamforming Design in Cell-Free Networks Cell-free network is considered as a promising architecture for satisfying more demands of future wireless networks, where distributed access points coordinate with an edge cloud processor to jointly provide service to a smaller number of user equipments in a compact area. In this paper, the problem of uplink beamforming design is investigated for maximizing the long-term energy efficiency (EE) wi...
Cell-Free Massive MIMO: A Survey Towards a fully connected intelligent digital world, 5G and beyond networks experience a new era of Internet of intelligence with connected people and things. This new era brings challenging demands to the network, such as high spectral efficiency, low-latency, high-reliable communication, and high energy efficiency. One of the major technological breakthroughs to cope with these unprecedented dem...
Maximum ratio transmission This paper presents the concept, principles, and analysis of maximum ratio transmission for wireless communications, where multiple antennas are used for both transmission and reception. The principles and analysis are applicable to general cases, including maximum-ratio combining. Simulation results agree with the analysis. The analysis shows that the average overall signal-to-noise ratio (SNR) is proportional to the cross correlation between channel vectors and that error probability decreases inversely with the (L×K)th power of the average SNR.
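Concretely, maximum ratio transmission weights each transmit antenna by the conjugate of its channel coefficient (normalised to unit power), so the per-antenna contributions add coherently at the receiver and the post-combining SNR scales with the channel energy. A small sketch, with unit transmit and noise powers assumed for illustration:

```python
import numpy as np

def mrt_weights(h):
    """Unit-power MRT weights for channel vector h: co-phase and weight by the channel."""
    return np.conj(h) / np.linalg.norm(h)

def mrt_snr(h, tx_power=1.0, noise_power=1.0):
    """Post-combining SNR with MRT grows with the channel energy ||h||^2."""
    return tx_power * np.linalg.norm(h) ** 2 / noise_power

# Example: a random 4-antenna Rayleigh channel
h = (np.random.randn(4) + 1j * np.random.randn(4)) / np.sqrt(2)
w = mrt_weights(h)
print(abs(np.dot(h, w)) ** 2)   # equals ||h||^2, the MRT beamforming gain
```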
Cell-Free Massive MIMO versus Small Cells. A Cell-Free Massive MIMO (multiple-input multiple-output) system comprises a very large number of distributed access points (APs), which simultaneously serve a much smaller number of users over the same time/frequency resources based on directly measured channel characteristics. The APs and users have only one antenna each. The APs acquire channel state information through time-division duplex operation and the reception of uplink pilot signals transmitted by the users. The APs perform multiplexing/de-multiplexing through conjugate beamforming on the downlink and matched filtering on the uplink. Closed-form expressions for individual user uplink and downlink throughputs lead to max–min power control algorithms. Max–min power control ensures uniformly good service throughout the area of coverage. A pilot assignment algorithm helps to mitigate the effects of pilot contamination, but power control is far more important in that regard. Cell-Free Massive MIMO has considerably improved performance with respect to a conventional small-cell scheme, whereby each user is served by a dedicated AP, in terms of both 95%-likely per-user throughput and immunity to shadow fading spatial correlation. Under uncorrelated shadow fading conditions, the cell-free scheme provides nearly fivefold improvement in 95%-likely per-user throughput over the small-cell scheme, and tenfold improvement when shadow fading is correlated.
Mobile Unmanned Aerial Vehicles (UAVs) for Energy-Efficient Internet of Things Communications. In this paper, the efficient deployment and mobility of multiple unmanned aerial vehicles (UAVs), used as aerial base stations to collect data from ground Internet of Things (IoT) devices, are investigated. In particular, to enable reliable uplink communications for the IoT devices with a minimum total transmit power, a novel framework is proposed for jointly optimizing the 3D placement and the mo...
A comparative study of texture measures with classification based on featured distributions This paper evaluates the performance both of some texture measures which have been successfully used in various applications and of some new promising approaches proposed recently. For classification a method based on Kullback discrimination of sample and prototype distributions is used. The classification results for single features with one-dimensional feature value distributions and for pairs of complementary features with two-dimensional distributions are presented
Simultaneous localization and mapping: part I The simultaneous localization and mapping (SLAM) problem asks if it is possible for a mobile robot to be placed at an unknown location in an unknown environment and for the robot to incrementally build a consistent map of this environment while simultaneously determining its location within this map. A solution to the SLAM problem has been seen as a "holy grail" for the mobile robotics community as it would provide the means to make a robot truly autonomous. The "solution" of the SLAM problem has been one of the notable successes of the robotics community over the past decade. SLAM has been formulated and solved as a theoretical problem in a number of different forms. SLAM has also been implemented in a number of different domains from indoor robots to outdoor, underwater, and airborne systems. At a theoretical and conceptual level, SLAM can now be considered a solved problem. However, substantial issues remain in practically realizing more general SLAM solutions and notably in building and using perceptually rich maps as part of a SLAM algorithm. This two-part tutorial and survey of SLAM aims to provide a broad introduction to this rapidly growing field. Part I (this article) begins by providing a brief history of early developments in SLAM. The formulation section introduces the structure of the SLAM problem in now standard Bayesian form, and explains the evolution of the SLAM process. The solution section describes the two key computational solutions to the SLAM problem through the use of the extended Kalman filter (EKF-SLAM) and through the use of Rao-Blackwellized particle filters (FastSLAM). Other recent solutions to the SLAM problem are discussed in Part II of this tutorial. The application section describes a number of important real-world implementations of SLAM and also highlights implementations where the sensor data and software are freely downloadable for other researchers to study. Part II of this tutorial describes major issues in computation, convergence, and data association in SLAM. These are subjects that have been the main focus of the SLAM research community over the past five years.
Stabilization of switched continuous-time systems with all modes unstable via dwell time switching Stabilization of switched systems composed fully of unstable subsystems is one of the most challenging problems in the field of switched systems. In this brief paper, a sufficient condition ensuring the asymptotic stability of switched continuous-time systems with all modes unstable is proposed. The main idea is to exploit the stabilization property of switching behaviors to compensate the state divergence made by unstable modes. Then, by using a discretized Lyapunov function approach, a computable sufficient condition for switched linear systems is proposed in the framework of dwell time; it is shown that the time intervals between two successive switching instants are required to be confined by a pair of upper and lower bounds to guarantee the asymptotic stability. Based on derived results, an algorithm is proposed to compute the stability region of admissible dwell time. A numerical example is proposed to illustrate our approach.
RECIFE-MILP: An Effective MILP-Based Heuristic for the Real-Time Railway Traffic Management Problem The real-time railway traffic management problem consists of selecting appropriate train routes and schedules for minimizing the propagation of delay in case of traffic perturbation. In this paper, we tackle this problem by introducing RECIFE-MILP, a heuristic algorithm based on a mixed-integer linear programming model. RECIFE-MILP uses a model that extends one we previously proposed by including additional elements characterizing railway reality. In addition, it implements performance boosting methods selected among several ones through an algorithm configuration tool. We present a thorough experimental analysis that shows that the performances of RECIFE-MILP are better than the ones of the currently implemented traffic management strategy. RECIFE-MILP often finds the optimal solution to instances within the short computation time available in real-time applications. Moreover, RECIFE-MILP is robust to its configuration if an appropriate selection of the combination of boosting methods is performed.
Recovering Realistic Texture in Image Super-Resolution by Deep Spatial Feature Transform Despite that convolutional neural networks (CNN) have recently demonstrated high-quality reconstruction for single-image super-resolution (SR), recovering natural and realistic texture remains a challenging problem. In this paper, we show that it is possible to recover textures faithful to semantic classes. In particular, we only need to modulate features of a few intermediate layers in a single network conditioned on semantic segmentation probability maps. This is made possible through a novel Spatial Feature Transform (SFT) layer that generates affine transformation parameters for spatial-wise feature modulation. SFT layers can be trained end-to-end together with the SR network using the same loss function. During testing, it accepts an input image of arbitrary size and generates a high-resolution image with just a single forward pass conditioned on the categorical priors. Our final results show that an SR network equipped with SFT can generate more realistic and visually pleasing textures in comparison to state-of-the-art SRGAN [27] and EnhanceNet [38].
Myoelectric or Force Control? A Comparative Study on a Soft Arm Exosuit The intention-detection strategy used to drive an exosuit is fundamental to evaluate the effectiveness and acceptability of the device. Yet, current literature on wearable soft robotics lacks evidence on the comparative performance of different control approaches for online intention-detection. In the present work, we compare two different and complementary controllers on a wearable robotic suit, previously formulated and tested by our group: a model-based myoelectric control (myoprocessor), which estimates the joint torque from the activation of target muscles, and a force control that estimates human torques using an inverse dynamics model (dynamic arm). We test them on a cohort of healthy participants performing tasks replicating functional activities of daily living involving a wide range of dynamic movements. Our results suggest that both controllers are robust and effective in detecting human–motor interaction, and show comparable performance for augmenting muscular activity. In particular, the biceps brachii activity was reduced by up to 74% under the assistance of the dynamic arm and up to 47% under the myoprocessor, compared to a no-suit condition. However, the myoprocessor outperformed the dynamic arm in promptness and assistance during movements that involve high dynamics. The exosuit work normalized with respect to the overall work was 68.84 ± 3.81% when it was run by the myoprocessor, compared to 45.29 ± 7.71% during the dynamic arm condition. The reliability and accuracy of motor intention detection strategies in wearable devices are paramount for both the efficacy and acceptability of this technology. In this article, we offer a detailed analysis of the two most widely used control approaches, trying to highlight their intrinsic structural differences and to discuss their different and complementary performance.
Scores: 1.022, 0.022, 0.02, 0.02, 0.02, 0.011, 0.005776, 0.00004, 0, 0, 0, 0, 0, 0
Magnetic, Acceleration Fields and Gyroscope Quaternion (MAGYQ)-based attitude estimation with smartphone sensors for indoor pedestrian navigation. The dependence of proposed pedestrian navigation solutions on a dedicated infrastructure is a limiting factor to the deployment of location based services. Consequently self-contained Pedestrian Dead-Reckoning (PDR) approaches are gaining interest for autonomous navigation. Even if the quality of low cost inertial sensors and magnetometers has strongly improved, processing noisy sensor signals combined with high hand dynamics remains a challenge. Estimating accurate attitude angles for achieving long term positioning accuracy is targeted in this work. A new Magnetic, Acceleration fields and GYroscope Quaternion (MAGYQ)-based attitude angles estimation filter is proposed and demonstrated with handheld sensors. It benefits from a gyroscope signal modelling in the quaternion set and two new opportunistic updates: magnetic angular rate update (MARU) and acceleration gradient update (AGU). MAGYQ filter performances are assessed indoors, outdoors, with dynamic and static motion conditions. The heading error, using only the inertial solution, is found to be less than 10 degrees after 1.5 km walking. The performance is also evaluated in the positioning domain with trajectories computed following a PDR strategy.
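The gyroscope part of such a filter propagates the attitude quaternion between the opportunistic magnetic and acceleration updates. A minimal strapdown prediction step is sketched below; it covers only the gyroscope integration in the quaternion set and none of the MARU/AGU updates, and the [w, x, y, z] quaternion convention is an assumption.

```python
import numpy as np

def quat_multiply(q, r):
    """Hamilton product of quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def propagate(q, gyro_rad_s, dt):
    """Propagate the attitude quaternion with one gyroscope sample (prediction step only)."""
    rate = np.linalg.norm(gyro_rad_s)
    if rate * dt < 1e-12:
        return q
    axis = gyro_rad_s / rate
    angle = rate * dt
    dq = np.concatenate(([np.cos(angle / 2)], axis * np.sin(angle / 2)))
    q_new = quat_multiply(q, dq)
    return q_new / np.linalg.norm(q_new)   # renormalise to counter numerical drift
```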
Magnetic field feature extraction and selection for indoor location estimation. User indoor positioning has been under constant improvement, especially with the availability of new sensors integrated into modern mobile devices, which allows us to exploit not only infrastructures made for everyday use, such as WiFi, but also natural infrastructure, as is the case of the natural magnetic field. In this paper we present an extension and improvement of our current indoor localization model based on the extraction of 46 magnetic field signal features. The extension adds a feature selection phase to our methodology, which is performed through a Genetic Algorithm (GA) with the aim of optimizing the fitness of our current model. In addition, we present an evaluation of the final model in two different scenarios: a home and an office building. The results indicate that performing a feature selection process allows us to reduce the number of signal features of the model from 46 to 5, regardless of the scenario and room location distribution. Further, we verified that reducing the number of features increases the probability of our estimator correctly detecting the user's location (sensitivity) and its capacity to reject false positives (specificity) in both scenarios.
A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing. Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires a huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowdsourced information provided by a large number of users as they walk through the buildings as the source of location fingerprint data. Through the variation characteristics of users' smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users' explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established.
Vector graph assisted pedestrian dead reckoning using an unconstrained smartphone. The paper presents a hybrid indoor positioning solution based on a pedestrian dead reckoning (PDR) approach using built-in sensors on a smartphone. To address the challenges of flexible and complex contexts of carrying a phone while walking, a robust step detection algorithm based on motion-awareness has been proposed. Given the fact that step length is influenced by different motion states, an adaptive step length estimation algorithm based on motion recognition is developed. Heading estimation is carried out by an attitude acquisition algorithm, which contains a two-phase filter to mitigate the distortion of magnetic anomalies. In order to estimate the heading for an unconstrained smartphone, principal component analysis (PCA) of acceleration is applied to determine the offset between the orientation of smartphone and the actual heading of a pedestrian. Moreover, a particle filter with vector graph assisted particle weighting is introduced to correct the deviation in step length and heading estimation. Extensive field tests, including four contexts of carrying a phone, have been conducted in an office building to verify the performance of the proposed algorithm. Test results show that the proposed algorithm can achieve sub-meter mean error in all contexts.
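The PDR pipeline in the abstract above ultimately reduces each detected step to a position update along an estimated heading with an estimated step length. As an illustration only (the motion-aware step detection, PCA-based heading offset, and particle filter are not shown), here is a minimal Python sketch of that dead-reckoning core; the step lengths and headings in the usage example are made up.

```python
import numpy as np

def pdr_update(position, step_length_m, heading_rad):
    # Advance the 2-D position by one detected step along the estimated heading.
    return position + step_length_m * np.array([np.cos(heading_rad),
                                                np.sin(heading_rad)])

# Usage with made-up per-step length and heading estimates (three steps).
pos = np.zeros(2)
for length, heading in [(0.72, 0.05), (0.70, 0.04), (0.68, 0.06)]:
    pos = pdr_update(pos, length, heading)
print(pos)
```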
GROPING: Geomagnetism and cROwdsensing Powered Indoor NaviGation Although a large number of WiFi fingerprinting based indoor localization systems have been proposed, our field experience with Google Maps Indoor (GMI), the only system available for public testing, shows that it is far from mature for indoor navigation. In this paper, we first report our field studies with GMI, as well as experiment results aiming to explain our unsatisfactory GMI experience. Then motivated by the obtained insights, we propose GROPING as a self-contained indoor navigation system independent of any infrastructural support. GROPING relies on geomagnetic fingerprints that are far more stable than WiFi fingerprints, and it exploits crowdsensing to construct floor maps rather than expecting individual venues to supply digitized maps. Based on our experiments with 20 participants in various floors of a big shopping mall, GROPING is able to deliver a sufficient accuracy for localization and thus provides smooth navigation experience.
Activity Sequence-Based Indoor Pedestrian Localization Using Smartphones This paper presents an activity sequence-based indoor pedestrian localization approach using smartphones. The activity sequence consists of several continuous activities during the walking process, such as turning at a corner, taking the elevator, taking the escalator, and walking stairs. These activities take place when a user walks at some special points in the building, like corners, elevators, escalators, and stairs. The special points form an indoor road network. In our approach, we first detect the user’s activities using the built-in sensors in a smartphone. The detected activities constitute the activity sequence. Meanwhile, the user’s trajectory is reckoned by Pedestrian Dead Reckoning (PDR). Based on the detected activity sequence and reckoned trajectory, we realize pedestrian localization by matching them to the indoor road network using a Hidden Markov Model. After encountering several special points, the location of the user would converge on the true one. We evaluate our proposed approach using smartphones in two buildings: an office building and a shopping mall. The results show that the proposed approach can realize autonomous pedestrian localization even without knowing the initial point in the environments. The mean offline localization error is about 1.3 m. The results also demonstrate that the proposed approach is robust to activity detection error and PDR estimation error.
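The abstract above matches a detected activity sequence to an indoor road network with a Hidden Markov Model. Purely as a sketch of that matching step, the Python snippet below runs a standard Viterbi decoding over a tiny, invented three-node network; the states, transition and emission probabilities are illustrative and not taken from the paper.

```python
import numpy as np

def viterbi(obs_seq, states, trans, emit, start):
    # Most likely state sequence (positions on a toy road network) given a
    # sequence of detected activities, computed with log probabilities.
    logp = np.log(start + 1e-12) + np.log(emit[:, obs_seq[0]] + 1e-12)
    backpointers = []
    for obs in obs_seq[1:]:
        cand = logp[:, None] + np.log(trans + 1e-12)   # cand[i, j]: i -> j
        backpointers.append(np.argmax(cand, axis=0))
        logp = cand.max(axis=0) + np.log(emit[:, obs] + 1e-12)
    path = [int(np.argmax(logp))]
    for bp in reversed(backpointers):
        path.append(int(bp[path[-1]]))
    return [states[i] for i in reversed(path)]

# Usage: activities 0 = walk, 1 = turn, 2 = take elevator (all values invented).
states = ["corridor", "corner", "elevator-lobby"]
trans = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.4, 0.3],
                  [0.2, 0.3, 0.5]])
emit = np.array([[0.8, 0.1, 0.1],    # corridor mostly emits "walk"
                 [0.2, 0.7, 0.1],    # corner mostly emits "turn"
                 [0.1, 0.1, 0.8]])   # elevator lobby mostly emits "elevator"
start = np.array([0.6, 0.3, 0.1])
print(viterbi([0, 1, 2], states, trans, emit, start))
```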
Completely derandomized self-adaptation in evolution strategies. This paper puts forward two useful methods for self-adaptation of the mutation distribution - the concepts of derandomization and cumulation. Principal shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary, normal mutation distributions is equivalent to applying a general, linear problem encoding. The underlying objective of mutative strategy parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigorously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation - utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions, a speed-up factor of several orders of magnitude is usually observed. On moderately mis-scaled functions, a speed-up factor of three to ten can be expected.
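To make the cumulation idea above concrete, the sketch below shows one heavily simplified iteration of a CMA-like update in Python/NumPy: sample from N(mean, sigma^2 * C), recombine the best candidates, accumulate an evolution path, and apply a rank-one covariance update. Step-size adaptation, weighted recombination, and the rank-mu update of the full CMA-ES are deliberately omitted, and all constants (c_c, c_1, population sizes) are illustrative rather than the paper's settings.

```python
import numpy as np

def simplified_cma_step(mean, sigma, C, p_c, f, rng,
                        lam=10, mu=5, c_c=0.2, c_1=0.1):
    # Sample lam candidates from N(mean, sigma^2 * C) via a Cholesky factor.
    A = np.linalg.cholesky(C)
    z = rng.standard_normal((lam, len(mean)))
    x = mean + sigma * (z @ A.T)
    # Rank candidates by fitness and recombine the mu best with equal weights.
    order = np.argsort([f(xi) for xi in x])
    new_mean = x[order[:mu]].mean(axis=0)
    # Cumulate the evolution path (smoothed history of successful steps) ...
    p_c = (1 - c_c) * p_c + np.sqrt(c_c * (2 - c_c) * mu) * (new_mean - mean) / sigma
    # ... and use it for a rank-one update of the covariance matrix.
    C = (1 - c_1) * C + c_1 * np.outer(p_c, p_c)
    return new_mean, C, p_c

# Usage: a few iterations on a badly scaled quadratic (toy objective).
rng = np.random.default_rng(0)
n = 5
scales = np.logspace(0, 3, n)                      # ill-conditioned axes
objective = lambda v: float(np.sum((scales * v) ** 2))
mean, sigma, C, p_c = np.ones(n), 0.3, np.eye(n), np.zeros(n)
for _ in range(100):
    mean, C, p_c = simplified_cma_step(mean, sigma, C, p_c, objective, rng)
print(objective(mean))
```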
A survey of socially interactive robots This paper reviews “socially interactive robots”: robots for which social human–robot interaction is important. We begin by discussing the context for socially interactive robots, emphasizing the relationship to other research fields and the different forms of “social robots”. We then present a taxonomy of design methods and system components used to build socially interactive robots. Finally, we describe the impact of these robots on humans and discuss open issues. An expanded version of this paper, which contains a survey and taxonomy of current applications, is available as a technical report [T. Fong, I. Nourbakhsh, K. Dautenhahn, A survey of socially interactive robots: concepts, design and applications, Technical Report No. CMU-RI-TR-02-29, Robotics Institute, Carnegie Mellon University, 2002].
Energy Efficiency Resource Allocation For D2D Communication Network Based On Relay Selection In order to solve the problem of spectrum resource shortage and energy consumption, we put forward a new model that combines D2D communication with energy harvesting technology: an energy harvesting-aided D2D communication network under cognitive radio (EHA-CRD), where the D2D users harvest energy from the base station and the D2D source communicates with the D2D destination through D2D relays. Our goal is to maximize the energy efficiency (EE) of the network by joint time allocation and relay selection, while taking into account the constraints on the signal-to-noise ratio of D2D and the rates of the cellular users. During this process, the energy collection time and communication time are randomly allocated. The EE maximization problem can be divided into two sub-problems: (1) a relay selection problem; (2) a time optimization problem. For the first sub-problem, we propose a weighted sum maximum algorithm to select the best relay. The second sub-problem, EE maximization, is non-convex in time. Thus, by using fractional programming theory, we transform it into a standard convex optimization problem, and we propose an iterative optimization algorithm to obtain the optimal solution. The simulation results show that the proposed relay selection algorithm and time optimization algorithm are significantly improved compared with the existing algorithms.
Priced Oblivious Transfer: How to Sell Digital Goods We consider the question of protecting the privacy of customers buying digital goods. More specifically, our goal is to allow a buyer to purchase digital goods from a vendor without letting the vendor learn what, and to the extent possible also when and how much, it is buying. We propose solutions which allow the buyer, after making an initial deposit, to engage in an unlimited number of priced oblivious-transfer protocols, satisfying the following requirements: As long as the buyer's balance contains sufficient funds, it will successfully retrieve the selected item and its balance will be debited by the item's price. However, the buyer should be unable to retrieve an item whose cost exceeds its remaining balance. The vendor should learn nothing except what must inevitably be learned, namely, the amount of interaction and the initial deposit amount (which imply upper bounds on the quantity and total price of all information obtained by the buyer). In particular, the vendor should be unable to learn what the buyer's current balance is or when it actually runs out of its funds. The technical tools we develop, in the process of solving this problem, seem to be of independent interest. In particular, we present the first one-round (two-pass) protocol for oblivious transfer that does not rely on the random oracle model (a very similar protocol was independently proposed by Naor and Pinkas [21]). This protocol is a special case of a more general "conditional disclosure" methodology, which extends a previous approach from [11] and adapts it to the 2-party setting.
Data-Driven Intelligent Transportation Systems: A Survey For the last two decades, intelligent transportation systems (ITS) have emerged as an efficient way of improving the performance of transportation systems, enhancing travel security, and providing more choices to travelers. A significant change in ITS in recent years is that much more data are collected from a variety of sources and can be processed into various forms for different stakeholders. The availability of a large amount of data can potentially lead to a revolution in ITS development, changing an ITS from a conventional technology-driven system into a more powerful multifunctional data-driven intelligent transportation system (D2ITS): a system that is vision, multisource, and learning algorithm driven to optimize its performance. Furthermore, D2ITS is trending to become a privacy-aware, people-centric, more intelligent system. In this paper, we provide a survey on the development of D2ITS, discussing the functionality of its key components and some deployment issues associated with D2ITS. Future research directions for the development of D2ITS are also presented.
Switching Stabilization for a Class of Slowly Switched Systems In this technical note, the problem of switching stabilization for slowly switched linear systems is investigated. In particular, the considered systems can be composed of all unstable subsystems. Based on the invariant subspace theory, the switching signal with mode-dependent average dwell time (MDADT) property is designed to exponentially stabilize the underlying system. Furthermore, sufficient condition of stabilization for switched systems with all stable subsystems under MDADT switching is also given. The correctness and effectiveness of the proposed approaches are illustrated by a numerical example.
Scalable and Privacy-Preserving Data Sharing Based on Blockchain. With the development of network technology and cloud computing, data sharing is becoming increasingly popular, and many scholars have conducted in-depth research to promote its growth. As the scale of data sharing expands, its privacy protection has become a hot issue in research. Moreover, in data sharing, the data is usually maintained by multiple parties, which brings new challenges to protecting the privacy of this multi-party data. In this paper, we propose a trusted data sharing scheme using blockchain. We use blockchain to prevent the shared data from being tampered with, and use the Paillier cryptosystem to realize the confidentiality of the shared data. In the proposed scheme, the shared data can be traded, and the transaction information is protected by using the (p, t)-threshold Paillier cryptosystem. We conduct experiments in cloud storage scenarios and the experimental results demonstrate the efficiency and effectiveness of the proposed scheme.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results, and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.109267
0.107111
0.107111
0.075511
0.04142
0.004884
0
0
0
0
0
0
0
0
A fuzzy hierarchical operator in the grey wolf optimizer algorithm. • The main goal is to study the performance of the Grey Wolf Optimizer algorithm when a new hierarchical operator is introduced. • This new operator is basically a hierarchical transformation inspired by the hierarchical social pyramid of the grey wolf. • The proposed operator is applied to the simulation of the hunting process in the algorithm, and five variants are presented. • Notably, the variants having the greatest impact on GWO performance are based on the use of fuzzy logic.
Mobile cloud computing: A survey Despite increasing usage of mobile computing, exploiting its full potential is difficult due to its inherent problems such as resource scarcity, frequent disconnections, and mobility. Mobile cloud computing can address these problems by executing mobile applications on resource providers external to the mobile device. In this paper, we provide an extensive survey of mobile cloud computing research, while highlighting the specific concerns in mobile cloud computing. We present a taxonomy based on the key issues in this area, and discuss the different approaches taken to tackle these issues. We conclude the paper with a critical analysis of challenges that have not yet been fully met, and highlight directions for future work.
Harmony search algorithm for solving Sudoku Harmony search (HS) algorithm was applied to solving Sudoku puzzle. The HS is an evolutionary algorithm which mimics musicians' behaviors such as random play, memory-based play, and pitch-adjusted play when they perform improvisation. Sudoku puzzles in this study were formulated as an optimization problem with number-uniqueness penalties. HS could successfully solve the optimization problem after 285 function evaluations, taking 9 seconds. Also, sensitivity analysis of HS parameters was performed to obtain a better idea of algorithm parameter values.
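The abstract above casts Sudoku as an optimization problem with number-uniqueness penalties. As a purely illustrative sketch (not the paper's exact formulation), the Python function below computes one plausible penalty of that kind: it counts, over every row, column, and 3x3 block, how many of the digits 1 to 9 are missing, so a penalty of zero corresponds to a valid solution that a Harmony Search run would try to reach.

```python
import numpy as np

def sudoku_penalty(grid):
    # Number-uniqueness penalty over all rows, columns, and 3x3 blocks:
    # each unit contributes the count of missing digits among 1..9.
    g = np.asarray(grid)
    units = [g[i, :] for i in range(9)] + [g[:, j] for j in range(9)]
    units += [g[r:r + 3, c:c + 3].ravel() for r in (0, 3, 6) for c in (0, 3, 6)]
    return sum(9 - len(set(int(v) for v in u)) for u in units)

# Usage: a completely filled (here random) candidate grid, as an improvisation
# step might produce; the solver would minimize this penalty down to zero.
rng = np.random.default_rng(0)
candidate = rng.integers(1, 10, size=(9, 9))
print(sudoku_penalty(candidate))
```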
DEC: dynamically evolving clustering and its application to structure identification of evolving fuzzy models. Identification of models from input-output data essentially requires estimation of appropriate cluster centers. In this paper, a new online evolving clustering approach for streaming data is proposed. Unlike other approaches that consider either the data density or distance from existing cluster centers, this approach uses cluster weight and distance before generating new clusters. To capture the dynamics of the data stream, the cluster weight is defined in both data and time space in such a way that it decays exponentially with time. It also applies concepts from computational geometry to determine the neighborhood information while forming clusters. A distinction is made between core and noncore clusters to effectively identify the real outliers. The approach efficiently estimates cluster centers upon which evolving Takagi-Sugeno models are developed. The experimental results with developed models show that the proposed approach attains results at par or better than existing approaches and significantly reduces the computational overhead, which makes it suitable for real-time applications.
An Easily Understandable Grey Wolf Optimizer and Its Application to Fuzzy Controller Tuning. This paper proposes an easily understandable Grey Wolf Optimizer (GWO) applied to the optimal tuning of the parameters of Takagi-Sugeno proportional-integral fuzzy controllers (T-S PI-FCs). GWO is employed for solving optimization problems focused on the minimization of discrete-time objective functions defined as the weighted sum of the absolute value of the control error and of the squared output sensitivity function, and the vector variable consists of the tuning parameters of the T-S PI-FCs. Since the sensitivity functions are introduced with respect to the parametric variations of the process, solving these optimization problems is important as it leads to fuzzy control systems with a reduced process parametric sensitivity obtained by a GWO-based fuzzy controller tuning approach. GWO algorithms applied with this regard are formulated in easily understandable terms for both vector and scalar operations, and discussions on stability, convergence, and parameter settings are offered. The controlled processes referred to in the course of this paper belong to a family of nonlinear servo systems, which are modeled by second order dynamics plus a saturation and dead zone static nonlinearity. Experimental results concerning the angular position control of a laboratory servo system are included for validating the proposed method.
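Since the abstract above emphasizes formulating GWO in easily understandable terms, a compact, generic Python sketch of the standard GWO position update is given below as a reference point. It uses the textbook encircling equations (wolves move toward the alpha, beta, and delta solutions with a coefficient a decreasing from 2 to 0); the paper's fuzzy-controller-tuning objective, sensitivity terms, and parameter settings are not reproduced, and the toy cost function is invented for the example.

```python
import numpy as np

def gwo_minimize(f, dim, bounds, n_wolves=20, n_iter=100, seed=0):
    # Standard Grey Wolf Optimizer: candidates move toward the three best
    # wolves (alpha, beta, delta) using the encircling equations.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iter):
        fitness = np.array([f(x) for x in X])
        alpha, beta, delta = X[np.argsort(fitness)[:3]]
        a = 2.0 * (1.0 - t / n_iter)           # decreases linearly from 2 to 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a
                C = 2.0 * r2
                D = np.abs(C * leader - X[i])
                new_pos += (leader - A * D) / 3.0
            X[i] = np.clip(new_pos, lo, hi)
    fitness = np.array([f(x) for x in X])
    return X[np.argmin(fitness)]

# Usage: tune two hypothetical controller gains on a toy quadratic cost.
best = gwo_minimize(lambda p: (p[0] - 1.5) ** 2 + (p[1] + 0.5) ** 2,
                    dim=2, bounds=(-5.0, 5.0))
print(best)
```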
Stability Analysis and Estimation of Domain of Attraction for Positive Polynomial Fuzzy Systems With Input Saturation In this paper, the stability and positivity of positive polynomial fuzzy-model-based (PPFMB) control systems are investigated, in which the positive polynomial fuzzy model and positive polynomial fuzzy controller are allowed to have different premise membership functions from each other. These mismatched premise membership functions can increase the flexibility of controller design; however, they lead to conservative results when the stability is analyzed based on the Lyapunov stability theory. To relax the positivity/stability conditions, the improved Taylor-series-membership-functions-dependent (ITSMFD) method is introduced by incorporating the sample points information of Taylor-series approximate membership functions, local error information and boundary information of the substate space of premise variables into the stability/positivity conditions. Meanwhile, the ITSMFD method is extended to the PPFMB control system with input saturation to relax the estimation of the domain of attraction. Finally, simulation examples are presented to verify the feasibility of this method.
Stable fuzzy logic control of a general class of chaotic systems This paper proposes a new approach to the stable design of fuzzy logic control systems that deal with a general class of chaotic processes. The stable design is carried out on the basis of a stability analysis theorem, which employs Lyapunov's direct method and the separate stability analysis of each rule in the fuzzy logic controller (FLC). The stability analysis theorem offers sufficient conditions for the stability of a general class of chaotic processes controlled by Takagi---Sugeno---Kang FLCs. The approach suggested in this paper is advantageous because inserting a new rule requires the fulfillment of only one of the conditions of the stability analysis theorem. Two case studies concerning the fuzzy logic control of representative chaotic systems that belong to the general class of chaotic systems are included in order to illustrate our stable design approach. A set of simulation results is given to validate the theoretical results.
Containment control of heterogeneous linear multi-agent systems In this note, we study the containment control problem of heterogeneous linear multi-agent systems based on output regulation framework. Motivated by leader-follower output regulation problems, the leaders are assumed to be exosystems. In controller design approach for each follower, we utilize a distributed dynamic state feedback control scheme. To achieve the objective of this work, we modify the conventional output regulation error in such a way that it can handle more than one leader, and we also introduce a dynamic compensator. Our work is based on a new formulation for containment error that guarantees the convergence of all follower agents to the dynamic convex hull spanned by the leaders, and also enables us to use output regulation techniques with some modifications to solve the containment problem. Finally, a numerical example is given to illustrate the validity of theoretical results.
A Tutorial On Visual Servo Control This article provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines, our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed in detail. Since any visual servo system must be capable of tracking image features in a sequence of images, we also include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.
Traveling Salesman Problems with Profits Traveling salesman problems with profits (TSPs with profits) are a generalization of the traveling salesman problem (TSP), where it is not necessary to visit all vertices. A profit is associated with each vertex. The overall goal is the simultaneous optimization of the collected profit and the travel costs. These two optimization criteria appear either in the objective function or as a constraint. In this paper, a classification of TSPs with profits is proposed, and the existing literature is surveyed. Different classes of applications, modeling approaches, and exact or heuristic solution techniques are identified and compared. Conclusions emphasize the interest of this class of problems, with respect to applications as well as theoretical results.
On (k, n)*-visual cryptography scheme Let P = {1, 2, . . . , n} be a set of elements called participants. In this paper we construct a visual cryptography scheme (VCS) for the strong access structure specified by the set Γ0 of all minimal qualified sets, where $\Gamma_0 = \{S : S \subseteq P,\ 1 \in S \text{ and } |S| = k\}$. Any VCS for this strong access structure is called a (k, n)*-VCS. We also obtain bounds for the optimal pixel expansion and optimal relative contrast for a (k, n)*-VCS.
GASPAD: A General and Efficient mm-Wave Integrated Circuit Synthesis Method Based on Surrogate Model Assisted Evolutionary Algorithm The design and optimization (both sizing and layout) of mm-wave integrated circuits (ICs) have attracted much attention due to the growing demand in industry. However, available manual design and synthesis methods suffer from a high dependence on design experience, being inefficient or not general enough. To address this problem, a new method, called general mm-wave IC synthesis based on Gaussian process model assisted differential evolution (GASPAD), is proposed in this paper. A medium-scale computationally expensive constrained optimization problem must be solved for the targeted mm-wave IC design problem. Besides the basic techniques of using a global optimization algorithm to obtain highly optimized design solutions and using surrogate models to obtain a high efficiency, a surrogate model-aware search mechanism (SMAS) for tackling the several tens of design variables (medium scale) and a method to appropriately integrate constraint handling techniques into SMAS for tackling the multiple (high-) performance specifications are proposed. Experiments on two 60 GHz power amplifiers in a 65 nm CMOS technology and two mathematical benchmark problems are carried out. Comparisons with the state of the art provide evidence of the important advantages of GASPAD in terms of solution quality and efficiency.
Learning Discriminative Features with Multiple Granularities for Person Re-Identification. The combination of global and partial features has been an essential solution to improve discriminative performances in person re-identification (Re-ID) tasks. Previous part-based methods mainly focus on locating regions with specific pre-defined semantics to learn local representations, which increases learning difficulty but is neither efficient nor robust in scenarios with large variances. In this paper, we propose an end-to-end feature learning strategy integrating discriminative information with various granularities. We carefully design the Multiple Granularity Network (MGN), a multi-branch deep network architecture consisting of one branch for global feature representations and two branches for local feature representations. Instead of learning on semantic regions, we uniformly partition the images into several stripes, and vary the number of parts in different local branches to obtain local feature representations with multiple granularities. Comprehensive experiments implemented on the mainstream evaluation datasets including Market-1501, DukeMTMC-reid and CUHK03 indicate that our method robustly achieves state-of-the-art performances and outperforms any existing approaches by a large margin. For example, on the Market-1501 dataset in single query mode, we obtain a top result of Rank-1/mAP=96.6%/94.2% with this method after re-ranking.
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
1.11
0.12
0.12
0.12
0.12
0.12
0.06
0.011667
0
0
0
0
0
0
Rate Allocation and Network Lifetime Problems for Wireless Sensor Networks An important performance consideration for wireless sensor networks is the amount of information collected by all the nodes in the network over the course of network lifetime. Since the objective of maximizing the sum of rates of all the nodes in the network can lead to a severe bias in rate allocation among the nodes, we advocate the use of lexicographical max-min (LMM) rate allocation. To calculate the LMM rate allocation vector, we develop a polynomial-time algorithm by exploiting the parametric analysis (PA) technique from linear program (LP), which we call serial LP with parametric analysis (SLP-PA). We show that the SLP-PA can be also employed to address the LMM node lifetime problem much more efficiently than a state-of-the-art algorithm proposed in the literature. More important, we show that there exists an elegant duality relationship between the LMM rate allocation problem and the LMM node lifetime problem. Therefore, it is sufficient to solve only one of the two problems. Important insights can be obtained by inferring duality results for the other problem.
Mobility in wireless sensor networks - Survey and proposal. Targeting an increasing number of potential application domains, wireless sensor networks (WSN) have been the subject of intense research, in an attempt to optimize their performance while guaranteeing reliability in highly demanding scenarios. However, hardware constraints have limited their application, and real deployments have demonstrated that WSNs have difficulties in coping with complex communication tasks – such as mobility – in addition to application-related tasks. Mobility support in WSNs is crucial for a very high percentage of application scenarios and, most notably, for the Internet of Things. It is, thus, important to know the existing solutions for mobility in WSNs, identifying their main characteristics and limitations. With this in mind, we firstly present a survey of models for mobility support in WSNs. We then present the Network of Proxies (NoP) assisted mobility proposal, which relieves resource-constrained WSN nodes from the heavy procedures inherent to mobility management. The presented proposal was implemented and evaluated in a real platform, demonstrating not only its advantages over conventional solutions, but also its very good performance in the simultaneous handling of several mobile nodes, leading to high handoff success rate and low handoff time.
Tag-based cooperative data gathering and energy recharging in wide area RFID sensor networks The Wireless Identification and Sensing Platform (WISP) conjugates the identification potential of the RFID technology and the sensing and computing capability of the wireless sensors. Practical issues, such as the need of periodically recharging WISPs, challenge the effective deployment of large-scale RFID sensor networks (RSNs) consisting of RFID readers and WISP nodes. In this view, the paper proposes cooperative solutions to energize the WISP devices in a wide-area sensing network while reducing the data collection delay. The main novelty is the fact that both data transmissions and energy transfer are based on the RFID technology only: RFID mobile readers gather data from the WISP devices, wirelessly recharge them, and mutually cooperate to reduce the data delivery delay to the sink. Communication between mobile readers relies on two proposed solutions: a tag-based relay scheme, where RFID tags are exploited to temporarily store sensed data at pre-determined contact points between the readers; and a tag-based data channel scheme, where the WISPs are used as a virtual communication channel for real time data transfer between the readers. Both solutions require: (i) clustering the WISP nodes; (ii) dimensioning the number of required RFID mobile readers; (iii) planning the tour of the readers under the energy and time constraints of the nodes. A simulative analysis demonstrates the effectiveness of the proposed solutions when compared to non-cooperative approaches. Differently from classic schemes in the literature, the solutions proposed in this paper better cope with scalability issues, which is of utmost importance for wide area networks.
Improving charging capacity for wireless sensor networks by deploying one mobile vehicle with multiple removable chargers. Wireless energy transfer is a promising technology to prolong the lifetime of wireless sensor networks (WSNs), by employing charging vehicles to replenish energy to lifetime-critical sensors. Existing studies on sensor charging assumed that one or multiple charging vehicles are deployed. Such an assumption may have limitations for a real sensor network. On one hand, it is usually insufficient to employ just one vehicle to charge many sensors in a large-scale sensor network due to the limited charging capacity of the vehicle or energy expirations of some sensors prior to the arrival of the charging vehicle. On the other hand, although the employment of multiple vehicles can significantly improve the charging capability, it is too costly in terms of the initial investment and maintenance costs on these vehicles. In this paper, we propose a novel charging model in which a charging vehicle can carry multiple low-cost removable chargers and each charger is powered by a portable high-volume battery. When there are energy-critical sensors to be charged, the vehicle can carry the chargers to charge multiple sensors simultaneously, by placing one portable charger in the vicinity of one sensor. Under this novel charging model, we study the scheduling problem of the charging vehicle so that both the dead duration of sensors and the total travel distance of the mobile vehicle per tour are minimized. Since this problem is NP-hard, we instead propose a (3+ϵ)-approximation algorithm if the residual lifetime of each sensor can be ignored; otherwise, we devise a novel heuristic algorithm, where ϵ is a given constant with 0 < ϵ ≤ 1. Finally, we evaluate the performance of the proposed algorithms through experimental simulations. Experimental results show that the performance of the proposed algorithms is very promising.
Speed control of mobile chargers serving wireless rechargeable networks. Wireless rechargeable networks have attracted increasing research attention in recent years. For charging service, a mobile charger is often employed to move across the network and charge all network nodes. To reduce the charging completion time, most existing works have used the “move-then-charge” model where the charger first moves to specific spots and then starts charging nodes nearby. As a result, these works often aim to reduce the moving delay or charging delay at the spots. However, the charging opportunity on the move is largely overlooked because the charger can charge network nodes while moving, which as we analyze in this paper, has the potential to greatly reduce the charging completion time. The major challenge to exploit the charging opportunity is the setting of the moving speed of the charger. When the charger moves slow, the charging delay will be reduced (more energy will be charged during the movement) but the moving delay will increase. To deal with this challenge, we formulate the problem of delay minimization as a Traveling Salesman Problem with Speed Variations (TSP-SV) which jointly considers both charging and moving delay. We further solve the problem using linear programming to generate (1) the moving path of the charger, (2) the moving speed variations on the path and (3) the stay time at each charging spot. We also discuss possible ways to reduce the calculation complexity. Extensive simulation experiments are conducted to study the delay performance under various scenarios. The results demonstrate that our proposed method achieves much less completion time compared to the state-of-the-art work.
A Prediction-Based Charging Policy and Interference Mitigation Approach in the Wireless Powered Internet of Things The Internet of Things (IoT) technology has recently drawn more attention due to its ability to achieve the interconnections of massive physical devices. However, how to provide a reliable power supply to energy-constrained devices and improve the energy efficiency in the wireless powered IoT (WP-IoT) is a twofold challenge. In this paper, we develop a novel wireless power transmission (WPT) system, where an unmanned aerial vehicle (UAV) equipped with a radio frequency energy transmitter charges the IoT devices. A machine learning framework of echo state networks together with an improved k-means clustering algorithm is used to predict the energy consumption and cluster all the sensor nodes at the next period, thus automatically determining the charging strategy. The energy obtained from the UAV by WPT supports the IoT devices to communicate with each other. In order to improve the energy efficiency of the WP-IoT system, the interference mitigation problem is modeled as a mean field game, where an optimal power control policy is presented to adapt and analyze the large number of sensor nodes randomly deployed in WP-IoT. The numerical results verify that our proposed dynamic charging policy effectively reduces the data packet loss rate, and that the optimal power control policy greatly mitigates the interference and improves the energy efficiency of the whole network.
Design of Self-sustainable Wireless Sensor Networks with Energy Harvesting and Wireless Charging Energy provisioning plays a key role in the sustainable operations of Wireless Sensor Networks (WSNs). Recent efforts deploy multi-source energy harvesting sensors to utilize ambient energy. Meanwhile, wireless charging is a reliable energy source not affected by spatial-temporal ambient dynamics. This article integrates multiple energy provisioning strategies and adaptive adjustment to accomplish self-sustainability under complex weather conditions. We design and optimize a three-tier framework with the first two tiers focusing on the planning problems of sensors with various types and distributed energy storage powered by environmental energy. Then we schedule the Mobile Chargers (MC) between different charging activities and propose an efficient, 4-factor approximation algorithm. Finally, we adaptively adjust the algorithms to capture real-time energy profiles and jointly optimize those correlated modules. Our extensive simulations demonstrate significant improvement of network lifetime, increase of harvested energy (15%), reduction of network cost (30%), and the charging capability of MC by 100%.
Multi-Node Wireless Energy Charging in Sensor Networks Wireless energy transfer based on magnetic resonant coupling is a promising technology to replenish energy to a wireless sensor network (WSN). However, charging sensor nodes one at a time poses a serious scalability problem. Recent advances in magnetic resonant coupling show that multiple nodes can be charged at the same time. In this paper, we exploit this multi-node wireless energy transfer technology and investigate whether it is a scalable technology to address energy issues in a WSN. We consider a wireless charging vehicle (WCV) periodically traveling inside a WSN and charging sensor nodes wirelessly. Based on charging range of the WCV, we propose a cellular structure that partitions the two-dimensional plane into adjacent hexagonal cells. We pursue a formal optimization framework by jointly optimizing traveling path, flow routing, and charging time. By employing discretization and a novel Reformulation-Linearization Technique (RLT), we develop a provably near-optimal solution for any desired level of accuracy. Through numerical results, we demonstrate that our solution can indeed address the charging scalability problem in a WSN.
Making Sensor Networks Immortal: An Energy-Renewal Approach With Wireless Power Transfer Wireless sensor networks are constrained by limited battery energy. Thus, finite network lifetime is widely regarded as a fundamental performance bottleneck. Recent breakthrough in the area of wireless power transfer offers the potential of removing this performance bottleneck, i.e., allowing a sensor network to remain operational forever. In this paper, we investigate the operation of a sensor network under this new enabling energy transfer technology. We consider the scenario of a mobile charging vehicle periodically traveling inside the sensor network and charging each sensor node's battery wirelessly. We introduce the concept of renewable energy cycle and offer both necessary and sufficient conditions. We study an optimization problem, with the objective of maximizing the ratio of the wireless charging vehicle (WCV)'s vacation time over the cycle time. For this problem, we prove that the optimal traveling path for the WCV is the shortest Hamiltonian cycle and provide a number of important properties. Subsequently, we develop a near-optimal solution by a piecewise linear approximation technique and prove its performance guarantee.
Evolutionary computation: comments on the history and current state Evolutionary computation has started to receive significant attention during the last decade, although the origins can be traced back to the late 1950s. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) (with links to genetic programming (GP) and classifier systems (CS)), evolution strategies (ES), and evolutionary programming (EP) by analysis and comparison of their most important constituents (i.e. representations, variation operators, reproduction, and selection mechanism). Finally, we give a brief overview of the manifold of application domains, although this necessarily must remain incomplete.
Revisiting traffic anomaly detection using software defined networking Despite their exponential growth, home and small office/home office networks continue to be poorly managed. Consequently, security of hosts in most home networks is easily compromised and these hosts are in turn used for largescale malicious activities without the home users' knowledge. We argue that the advent of Software Defined Networking (SDN) provides a unique opportunity to effectively detect and contain network security problems in home and home office networks. We show how four prominent traffic anomaly detection algorithms can be implemented in an SDN context using Openflow compliant switches and NOX as a controller. Our experiments indicate that these algorithms are significantly more accurate in identifying malicious activities in the home networks as compared to the ISP. Furthermore, the efficiency analysis of our SDN implementations on a programmable home network router indicates that the anomaly detectors can operate at line rates without introducing any performance penalties for the home network traffic.
Visual Analysis of Eye State and Head Pose for Driver Alertness Monitoring. This paper presents visual analysis of eye state and head pose (HP) for continuous monitoring of alertness of a vehicle driver. Most existing approaches to visual detection of nonalert driving patterns rely either on eye closure or head nodding angles to determine the driver drowsiness or distraction level. The proposed scheme uses visual features such as eye index (EI), pupil activity (PA), and HP to extract critical information on nonalertness of a vehicle driver. EI determines if the eye is open, half closed, or closed from the ratio of pupil height and eye height. PA measures the rate of deviation of the pupil center from the eye center over a time period. HP finds the amount of the driver's head movements by counting the number of video segments that involve a large deviation of three Euler angles of HP, i.e., nodding, shaking, and tilting, from its normal driving position. HP provides useful information on the lack of attention, particularly when the driver's eyes are not visible due to occlusion caused by large head movements. A support vector machine (SVM) classifies a sequence of video segments into alert or nonalert driving events. Experimental results show that the proposed scheme offers high classification accuracy with acceptably low errors and false alarms for people of various ethnicity and gender in real road driving conditions.
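As a small illustration of the eye index (EI) described above, the hypothetical helper below turns the pupil-height-to-eye-height ratio into an open / half-closed / closed label. The thresholds and function name are invented for the example; the paper's actual decision values, the PA and HP features, and the SVM stage are not reproduced.

```python
def eye_state(pupil_height_px, eye_height_px, open_thr=0.5, half_thr=0.2):
    # Eye index (EI): ratio of pupil height to eye height in the eye region,
    # thresholded into three states (thresholds are illustrative only).
    ei = pupil_height_px / float(eye_height_px)
    if ei >= open_thr:
        return "open"
    if ei >= half_thr:
        return "half-closed"
    return "closed"

# Usage with made-up pixel measurements from an eye detector.
print(eye_state(pupil_height_px=9, eye_height_px=14))   # open
print(eye_state(pupil_height_px=2, eye_height_px=14))   # closed
```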
DeepTest: automated testing of deep-neural-network-driven autonomous cars Recent advances in Deep Neural Networks (DNNs) have led to the development of DNN-driven autonomous cars that, using sensors like camera, LiDAR, etc., can drive without any human intervention. Most major manufacturers including Tesla, GM, Ford, BMW, and Waymo/Google are working on building and testing different types of autonomous vehicles. The lawmakers of several US states including California, Texas, and New York have passed new legislation to fast-track the process of testing and deployment of autonomous vehicles on their roads. However, despite their spectacular progress, DNNs, just like traditional software, often demonstrate incorrect or unexpected corner-case behaviors that can lead to potentially fatal collisions. Several such real-world accidents involving autonomous cars have already happened, including one which resulted in a fatality. Most existing testing techniques for DNN-driven vehicles are heavily dependent on the manual collection of test data under different driving conditions, which becomes prohibitively expensive as the number of test conditions increases. In this paper, we design, implement, and evaluate DeepTest, a systematic testing tool for automatically detecting erroneous behaviors of DNN-driven vehicles that can potentially lead to fatal crashes. First, our tool is designed to automatically generate test cases leveraging real-world changes in driving conditions like rain, fog, lighting conditions, etc. DeepTest systematically explores different parts of the DNN logic by generating test inputs that maximize the number of activated neurons. DeepTest found thousands of erroneous behaviors under different realistic driving conditions (e.g., blurring, rain, fog, etc.), many of which lead to potentially fatal crashes in three top performing DNNs in the Udacity self-driving car challenge.
Methods for Flexible Management of Blockchain-Based Cryptocurrencies in Electricity Markets and Smart Grids The growing trend in the use of blockchain-based cryptocurrencies in modern communities provides several advantages, but also imposes several challenges to energy markets and power systems, in general. This paper aims at providing recommendations for efficient use of digital cryptocurrencies in today's and future smart power systems, in order to face the challenging aspects of this new technology. In this paper, existing issues and challenges of smart grids in the presence of blockchain-based cryptocurrencies are presented and some innovative approaches for efficient integration and management of blockchain-based cryptocurrencies in smart grids are proposed. Also some recommendations are given for improving the smart grids performance in the presence of digital cryptocurrencies and some future research directions are highlighted.
1.030564
0.028571
0.028571
0.028571
0.028571
0.028571
0.028571
0.015123
0.005756
0
0
0
0
0
On Network Topology Augmentation for Global Connectivity under Regional Failures Several recent studies shed light on the vulnerability of networks against regional failures, which are failures of multiple nodes and links in a physical region due to a natural disaster. The paper defines a novel design framework, called Geometric Network Augmentation (GNA), which determines a set of node pairs and the new cable routes to be deployed between each of them to make the network always remain connected when a regional failure of a given size occurs. With the proposed GNA design framework, we provide mathematical analysis and efficient heuristic algorithms that are built on the latest computational geometry tools and combinatorial optimization techniques. Through extensive simulation, we demonstrate that augmentation with just a small number of new cable routes will achieve the desired resilience against all the considered regional failures.
Diverse Routing in Networks With Probabilistic Failures We develop diverse routing schemes for dealing with multiple, possibly correlated, failures. While disjoint path protection can effectively deal with isolated single link failures, recovering from multiple failures is not guaranteed. In particular, events such as natural disasters or intentional attacks can lead to multiple correlated failures, for which recovery mechanisms are not well understood. We take a probabilistic view of network failures where multiple failure events can occur simultaneously, and develop algorithms for finding diverse routes with minimum joint failure probability. Moreover, we develop a novel Probabilistic Shared Risk Link Group (PSRLG) framework for modeling correlated failures. In this context, we formulate the problem of finding two paths with minimum joint failure probability as an integer nonlinear program (INLP) and develop approximations and linear relaxations that can find nearly optimal solutions in most cases.
On the structure and complexity of the 2-connected Steiner network problem in the plane We consider the problem of finding a minimum Euclidean length graph 2-connecting a set of points in the plane (2SNPP). We show that the solution to this problem is an edge-disjoint union of full Steiner trees. This has three important corollaries. The first is a proof that the problem is NP-hard, even in the sense of finding a fully polynomial approximation scheme. The second is a complete description of the solutions for 2SNPP for rectangular arrays of lattice points. The third is a linear-time algorithm for constructing an optimal solution to 2SNPP given its topological description.
Geographical route design of physical networks using earthquake risk information. In this article, we investigate the challenges in designing a communication network robust against earthquake-induced disasters. Due to the heterogeneity of local geology conditions, an earthquake may cause different devastating effects on network links at different locations. Therefore, the aim of this research was to develop a network design method that is based on actual seismic hazard information.
Network virtualization for disaster resilience of cloud services Today's businesses and consumer applications are becoming increasingly dependent on cloud solutions, making them vulnerable to service outages that can result in a loss of communication or access to business-critical services and data. Are we really prepared for such failure scenarios? Given that failures can occur on both the network and data center sides, is it possible to have efficient end-to-end recovery? The answer is mostly negative due to the separate operation of these domains. This article offers a solution to this problem based on network virtualization, and discusses the necessary architecture and algorithm details. It also answers the question of whether it is better to provide resilience in the virtual or physical layer from a cost effectiveness and failure coverage perspective.
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
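To make the abstract's "quick, inexpensive, language-independent" evaluation concrete, the following is a simplified sentence-level BLEU sketch in Python: clipped (modified) n-gram precisions for n = 1 to 4, combined by a geometric mean and multiplied by a brevity penalty. Real BLEU is defined at the corpus level, and the small epsilon used here to avoid log(0) is an assumption of this sketch, not part of the original metric.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(candidate, references, max_n=4, eps=1e-9):
    # Clipped (modified) n-gram precisions combined by a geometric mean,
    # multiplied by a brevity penalty against the closest reference length.
    log_precision_sum = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        max_ref_counts = Counter()
        for ref in references:
            for gram, count in Counter(ngrams(ref, n)).items():
                max_ref_counts[gram] = max(max_ref_counts[gram], count)
        clipped = sum(min(count, max_ref_counts[gram])
                      for gram, count in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        log_precision_sum += math.log(max(clipped, eps) / total)
    closest_ref = min(references, key=lambda r: abs(len(r) - len(candidate)))
    if len(candidate) > len(closest_ref):
        brevity_penalty = 1.0
    else:
        brevity_penalty = math.exp(1.0 - len(closest_ref) / len(candidate))
    return brevity_penalty * math.exp(log_precision_sum / max_n)

# Usage with a toy candidate and two references.
cand = "the cat sat on the mat".split()
refs = ["the cat is on the mat".split(), "there is a cat on the mat".split()]
print(round(sentence_bleu(cand, refs), 4))
```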
Massive MIMO for next generation wireless systems Multi-user MIMO offers big advantages over conventional point-to-point MIMO: it works with cheap single-antenna terminals, a rich scattering environment is not required, and resource allocation is simplified because every active terminal utilizes all of the time-frequency bins. However, multi-user MIMO, as originally envisioned, with roughly equal numbers of service antennas and terminals and frequency-division duplex operation, is not a scalable technology. Massive MIMO (also known as large-scale antenna systems, very large MIMO, hyper MIMO, full-dimension MIMO, and ARGOS) makes a clean break with current practice through the use of a large excess of service antennas over active terminals and time-division duplex operation. Extra antennas help by focusing energy into ever smaller regions of space to bring huge improvements in throughput and radiated energy efficiency. Other benefits of massive MIMO include extensive use of inexpensive low-power components, reduced latency, simplification of the MAC layer, and robustness against intentional jamming. The anticipated throughput depends on the propagation environment providing asymptotically orthogonal channels to the terminals, but so far experiments have not disclosed any limitations in this regard. While massive MIMO renders many traditional research problems irrelevant, it uncovers entirely new problems that urgently need attention: the challenge of making many low-cost low-precision components that work effectively together, acquisition and synchronization for newly joined terminals, the exploitation of extra degrees of freedom provided by the excess of service antennas, reducing internal power consumption to achieve total energy efficiency reductions, and finding new deployment scenarios. This article presents an overview of the massive MIMO concept and contemporary research on the topic.
Deep Residual Learning for Image Recognition Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers - 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
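The core idea in the ResNet abstract above, learning a residual function F(x) and adding it back to the input through an identity shortcut, can be shown in a few lines. The toy sketch below uses fully connected layers in plain NumPy purely for illustration; the actual architecture uses convolutional layers, batch normalization, and projection shortcuts where dimensions change.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    # Learn the residual F(x) = H(x) - x; the block outputs F(x) + x through
    # an identity shortcut, so the layers only model the correction to x.
    out = relu(x @ W1)
    out = out @ W2
    return relu(out + x)

# Usage: a toy fully connected block on a batch of 4 feature vectors.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 64))
W1 = 0.05 * rng.standard_normal((64, 64))
W2 = 0.05 * rng.standard_normal((64, 64))
print(residual_block(x, W1, W2).shape)
```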
Communication theory of secrecy systems THE problems of cryptography and secrecy systems furnish an interesting application of communication theory. In this paper a theory of secrecy systems is developed. The approach is on a theoretical level and is intended to complement the treatment found in standard works on cryptography. There, a detailed study is made of the many standard types of codes and ciphers, and of the ways of breaking them. We will be more concerned with the general mathematical structure and properties of secrecy systems.
Reservoir computing approaches to recurrent neural network training Echo State Networks and Liquid State Machines introduced a new paradigm in artificial recurrent neural network (RNN) training, where an RNN (the reservoir) is generated randomly and only a readout is trained. The paradigm, becoming known as reservoir computing, greatly facilitated the practical application of RNNs and outperformed classical fully trained RNNs in many tasks. It has lately become a vivid research field with numerous extensions of the basic idea, including reservoir adaptation, thus broadening the initial paradigm to using different methods for training the reservoir and the readout. This review systematically surveys both current ways of generating/adapting the reservoirs and training different types of readouts. It offers a natural conceptual classification of the techniques, which transcends boundaries of the current “brand-names” of reservoir methods, and thus aims to help in unifying the field and providing the reader with a detailed “map” of it.
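The sketch below illustrates the reservoir-computing recipe summarized above: a fixed random reservoir is driven by the input and only a linear readout is fitted, here by ridge regression on a toy sine-prediction task. Reservoir size, spectral radius, and the task itself are arbitrary illustrative choices.

```python
# Minimal echo state network: random fixed reservoir, trained linear readout.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1 (echo state property)

def run_reservoir(u_seq):
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)   # reservoir state update
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave.
t = np.arange(500)
u = np.sin(0.1 * t)
X = run_reservoir(u[:-1])
y = u[1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)   # train the readout only
print(float(np.mean((X @ W_out - y) ** 2)))        # training MSE
```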
Labels and event processes in the Asbestos operating system Asbestos, a new operating system, provides novel labeling and isolation mechanisms that help contain the effects of exploitable software flaws. Applications can express a wide range of policies with Asbestos's kernel-enforced labels, including controls on interprocess communication and system-wide information flow. A new event process abstraction defines lightweight, isolated contexts within a single process, allowing one process to act on behalf of multiple users while preventing it from leaking any single user's data to others. A Web server demonstration application uses these primitives to isolate private user data. Since the untrusted workers that respond to client requests are constrained by labels, exploited workers cannot directly expose user data except as allowed by application policy. The server application requires 1.4 memory pages per user for up to 145,000 users and achieves connection rates similar to Apache, demonstrating that additional security can come at an acceptable cost.
Adaptive Consensus Control for a Class of Nonlinear Multiagent Time-Delay Systems Using Neural Networks Because of the complexity of consensus control for nonlinear multiagent systems with state time-delay, most previous works have focused only on linear systems with input time-delay. An adaptive neural network (NN) consensus control method for a class of nonlinear multiagent systems with state time-delay is proposed in this paper. The approximation property of radial basis function neural networks (RBFNNs) is used to neutralize the uncertain nonlinear dynamics of the agents. An appropriate Lyapunov-Krasovskii functional, obtained from the derivative of an appropriate Lyapunov function, is used to compensate for the uncertainties of the unknown time delays. It is proved that the proposed approach guarantees convergence on the basis of Lyapunov stability theory. Simulation results for a nonlinear multiagent time-delay system and a multiple collaborative manipulators system show the effectiveness of the proposed consensus control algorithm.
Surrogate-assisted hierarchical particle swarm optimization. Meta-heuristic algorithms, which require a large number of fitness evaluations before locating the global optimum, are often prevented from being applied to computationally expensive real-world problems where one fitness evaluation may take from minutes to hours, or even days. Although many surrogate-assisted meta-heuristic optimization algorithms have been proposed, most of them were developed for solving expensive problems up to 30 dimensions. In this paper, we propose a surrogate-assisted hierarchical particle swarm optimizer for high-dimensional problems consisting of a standard particle swarm optimization (PSO) algorithm and a social learning particle swarm optimization algorithm (SL-PSO), where the PSO and SL-PSO work together to explore and exploit the search space, and simultaneously enhance the global and local performance of the surrogate model. Our experimental results on seven benchmark functions of dimensions 30, 50 and 100 demonstrate that the proposed method is competitive compared with the state-of-the-art algorithms under a limited computational budget.
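For orientation, the snippet below shows the canonical particle swarm update (velocity and position) that the PSO component of such a surrogate-assisted optimizer builds on; the surrogate model, the SL-PSO layer, and their interaction described in the abstract are deliberately omitted, and the sphere function stands in for an expensive fitness evaluation. The coefficient values are common textbook choices, not the paper's settings.

```python
# Canonical PSO update loop on a cheap stand-in objective.
import numpy as np

def sphere(x):                         # stand-in for an expensive fitness function
    return float(np.sum(x ** 2))

rng = np.random.default_rng(2)
n_particles, dim, iters = 20, 10, 100
w, c1, c2 = 0.7, 1.5, 1.5              # inertia and acceleration coefficients (illustrative)

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest, pbest_val = pos.copy(), np.array([sphere(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)   # velocity update
    pos = pos + vel                                                     # position update
    vals = np.array([sphere(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(float(pbest_val.min()))          # best fitness found
```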
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
1.2
0.2
0.2
0.1
0.066667
0
0
0
0
0
0
0
0
0
Energy- and Spectral-Efficiency Tradeoff for Distributed Antenna Systems with Proportional Fairness Energy efficiency (EE) has attracted more and more attention in future wireless communications due to steadily rising energy costs and environmental concerns. In this paper, we propose an EE scheme with proportional fairness for downlink multiuser distributed antenna systems (DAS). Our aim is to maximize EE, subject to constraints on the overall transmit power of each remote access unit (RAU), the bit-error rate (BER), and proportional data rates. We exploit a multi-criteria optimization method to systematically investigate the relationship between EE and spectral efficiency (SE). Using the weighted sum method, we first convert the multi-criteria optimization problem, which is extremely complex, into a simpler single-objective optimization problem. Then an optimal algorithm is developed to allocate the available power to balance the tradeoff between EE and SE. We also demonstrate the effectiveness of the proposed scheme and illustrate the fundamental tradeoff between energy- and spectral-efficient transmission through computer simulation.
Efficient multi-task allocation and path planning for unmanned surface vehicle in support of ocean operations. Presently, there is an increasing interest in the deployment of unmanned surface vehicles (USVs) to support complex ocean operations. In order to carry out these missions in a more efficient way, an intelligent hybrid multi-task allocation and path planning algorithm is required, and one is proposed in this paper. In terms of multi-task allocation, a novel algorithm based upon a self-organising map (SOM) has been designed and developed. The main contribution is that an adaptive artificial repulsive force field has been constructed and integrated into the SOM to achieve collision avoidance capability. The new algorithm is able to quickly and effectively generate a sequence for executing multiple tasks in a cluttered maritime environment involving numerous obstacles. After generating an optimised task execution sequence, a path planning algorithm based upon fast marching square (FMS) is utilised to calculate the trajectories. Because of the introduction of a safety parameter, the FMS is able to adaptively adjust the dimensional influence of an obstacle and accordingly generate the paths to ensure the safety of the USV. The algorithms have been verified and evaluated through a number of computer-based simulations and have been proven to work effectively in both simulated and practical maritime environments. (C) 2017 Elsevier B.V. All rights reserved.
Multi-user Multi-task Offloading and Resource Allocation in Mobile Cloud Systems. We consider a general multi-user mobile cloud computing (MCC) system where each mobile user has multiple independent tasks. These mobile users share the computation and communication resources while offloading tasks to the cloud. We study both the conventional MCC where tasks are offloaded to the cloud through a wireless access point, and MCC with a computing access point (CAP), where the CAP serv...
Joint Flight Cruise Control and Data Collection in UAV-aided Internet of Things: An Onboard Deep Reinforcement Learning Approach Employing unmanned aerial vehicles (UAVs) as aerial data collectors in Internet-of-Things (IoT) networks is a promising technology for large-scale environment sensing. A key challenge in UAV-aided data collection is that UAV maneuvering gives rise to buffer overflow at the IoT node and unsuccessful transmission due to lossy airborne channels. This article formulates a joint optimization of flight ...
Disturbance Compensating Model Predictive Control With Application to Ship Heading Control. To address the constraint violation and feasibility issues of model predictive control (MPC) for ship heading control in wave fields, a novel disturbance compensating MPC (DC-MPC) algorithm has been proposed to satisfy the state constraints in the presence of environmental disturbances. The capability of the novel DC-MPC algorithm is first analyzed. Then, the proposed DC-MPC algorithm is applied to solve the ship heading control problem, and its performance is compared with a modified MPC controller, which considers the estimated disturbance in the optimization directly. The simulation results show good performance of the proposed controller in terms of reducing heading error and satisfying yaw velocity and actuator saturation constraints. The DC-MPC algorithm has the potential to be applied to other motion control problems with environmental disturbances, such as flight, automobile, and robotics control.
MagicNet: The Maritime Giant Cellular Network Recently, the development of marine industries has increasingly attracted attention from all over the world. A wide-area and seamless maritime communication network has become a critical supporting approach. In this article, we propose a novel architecture named the maritime giant cellular network (MagicNet) relying on seaborne floating towers deployed in a honeycomb topology. The tower-borne gian...
Hamming Embedding and Weak Geometric Consistency for Large Scale Image Search This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We first analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows us to further improve the accuracy.
Microsoft Coco: Common Objects In Context We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4-year-old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
The Whale Optimization Algorithm. The Whale Optimization Algorithm inspired by humpback whales is proposed. The WOA algorithm is benchmarked on 29 well-known test functions. The results on the unimodal functions show the superior exploitation of WOA. The exploration ability of WOA is confirmed by the results on multimodal functions. The results on structural design problems confirm the performance of WOA in practice. This paper proposes a novel nature-inspired meta-heuristic optimization algorithm, called Whale Optimization Algorithm (WOA), which mimics the social behavior of humpback whales. The algorithm is inspired by the bubble-net hunting strategy. WOA is tested with 29 mathematical optimization problems and 6 structural design problems. Optimization results prove that the WOA algorithm is very competitive compared to state-of-the-art meta-heuristic algorithms as well as conventional methods. The source codes of the WOA algorithm are publicly available at http://www.alimirjalili.com/WOA.html
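A compact sketch of the two position-update rules the abstract alludes to, shrinking encirclement of the current best solution and the logarithmic-spiral "bubble-net" move, on a toy sphere function. The exploration step via a randomly chosen whale and the paper's parameter settings are simplified away, and the spiral constant b is taken as 1; treat this as an illustrative reading, not a faithful reimplementation.

```python
# Simplified WOA update loop: encircling vs. spiral moves around the best whale.
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(3)
n_whales, dim, iters = 20, 10, 200
X = rng.uniform(-5, 5, (n_whales, dim))
best = min(X, key=sphere).copy()

for t in range(iters):
    a = 2 - 2 * t / iters                      # 'a' decreases linearly from 2 to 0
    for i in range(n_whales):
        r, l = rng.random(dim), rng.uniform(-1, 1)
        A, C = 2 * a * r - a, 2 * rng.random(dim)
        if rng.random() < 0.5:                 # shrinking encircling of the prey (best solution)
            D = np.abs(C * best - X[i])
            X[i] = best - A * D
        else:                                  # spiral bubble-net move (spiral constant b = 1)
            D = np.abs(best - X[i])
            X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
        X[i] = np.clip(X[i], -5, 5)            # simple boundary handling
    cand = min(X, key=sphere)
    if sphere(cand) < sphere(best):
        best = cand.copy()

print(sphere(best))
```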
Collaborative privacy management The landscape of the World Wide Web with all its versatile services heavily relies on the disclosure of private user information. Unfortunately, the growing amount of personal data collected by service providers poses a significant privacy threat for Internet users. Targeting growing privacy concerns of users, privacy-enhancing technologies emerged. One goal of these technologies is the provision of tools that facilitate a more informative decision about personal data disclosures. A famous PET representative is the PRIME project that aims for a holistic privacy-enhancing identity management system. However, approaches like the PRIME privacy architecture require service providers to change their server infrastructure and add specific privacy-enhancing components. In the near future, service providers are not expected to alter internal processes. Addressing the dependency on service providers, this paper introduces a user-centric privacy architecture that enables the provider-independent protection of personal data. A central component of the proposed privacy infrastructure is an online privacy community, which facilitates the open exchange of privacy-related information about service providers. We characterize the benefits and the potentials of our proposed solution and evaluate a prototypical implementation.
Data-Driven Intelligent Transportation Systems: A Survey For the last two decades, intelligent transportation systems (ITS) have emerged as an efficient way of improving the performance of transportation systems, enhancing travel security, and providing more choices to travelers. A significant change in ITS in recent years is that much more data are collected from a variety of sources and can be processed into various forms for different stakeholders. The availability of a large amount of data can potentially lead to a revolution in ITS development, changing an ITS from a conventional technology-driven system into a more powerful multifunctional data-driven intelligent transportation system (D2ITS): a system that is vision, multisource, and learning algorithm driven to optimize its performance. Furthermore, D2ITS is trending to become a privacy-aware, people-centric, more intelligent system. In this paper, we provide a survey on the development of D2ITS, discussing the functionality of its key components and some deployment issues associated with D2ITS. Future research directions for the development of D2ITS are also presented.
Completely Pinpointing the Missing RFID Tags in a Time-Efficient Way Radio Frequency Identification (RFID) technology has been widely used in inventory management in many scenarios, e.g., warehouses, retail stores, hospitals, etc. This paper investigates a challenging problem of complete identification of missing tags in large-scale RFID systems. Although this problem has attracted extensive attention from academia and industry, the existing work can hardly satisfy the stringent real-time requirements. In this paper, a Slot Filter-based Missing Tag Identification (SFMTI) protocol is proposed to reconcile some expected collision slots into singleton slots and filter out the expected empty slots as well as the unreconcilable collision slots, thereby achieving improved time efficiency. A theoretical analysis is conducted to minimize the execution time of the proposed SFMTI. We then propose a cost-effective method to extend SFMTI to multi-reader scenarios. The extensive simulation experiments and performance results demonstrate that the proposed SFMTI protocol outperforms the most promising Iterative ID-free Protocol (IIP) by reducing nearly 45% of the required execution time, and is just within a factor of 1.18 from the lower bound of the minimum execution time.
Modeling taxi driver anticipatory behavior. As part of a wider behavioral agent-based model that simulates taxi drivers' dynamic passenger-finding behavior under uncertainty, we present a model of strategic behavior of taxi drivers in anticipation of substantial time varying demand at locations such as airports and major train stations. The model assumes that, considering a particular decision horizon, a taxi driver decides to transfer to such a destination based on a reward function. The dynamic uncertainty of demand is captured by a time dependent pick-up probability, which is a cumulative distribution function of waiting time. The model allows for information learning by which taxi drivers update their beliefs from past experiences. A simulation on a real road network, applied to test the model, indicates that the formulated model dynamically improves passenger-finding strategies at the airport. Taxi drivers learn when to transfer to the airport in anticipation of the time-varying demand at the airport to minimize their waiting time.
Hardware Circuits Design and Performance Evaluation of a Soft Lower Limb Exoskeleton Soft lower limb exoskeletons (LLEs) are wearable devices that have good potential in walking rehabilitation and augmentation. While a few studies have focused on the structure design and assistance force optimization of soft LLEs, little work has been conducted on the hardware circuit design. The main purpose of this work is to present a new soft LLE for walking efficiency improvement and to introduce its hardware circuit design. A soft LLE for hip flexion assistance and a scalable hardware circuit system were proposed. To assess the efficacy of the soft LLE, experimental tests evaluating sensor data acquisition, force tracking performance, lower limb muscle activity and metabolic cost were conducted. The time error in the peak assistance force was just 1%. The reduction in the normalized root-mean-square EMG of the rectus femoris was 7.1%. The net metabolic cost in the exoskeleton-on condition was reduced by 7.8% relative to walking with no exoskeleton. The results show that the designed hardware circuits can be applied to the soft LLE and that the soft LLE is able to improve the walking efficiency of wearers.
1.2
0.2
0.2
0.2
0.1
0.066667
0
0
0
0
0
0
0
0
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Towards Empathic Virtual and Robotic Tutors.
Tablet use in schools: a critical review of the evidence for learning outcomes The increased popularity of tablets in general has led to uptake in education. We critically review the literature reporting use of tablets by primary and secondary school children across the curriculum, with a particular emphasis on learning outcomes. The systematic review methodology was used, and our literature search resulted in 33 relevant studies meeting the inclusion criteria. A total of 23 met the minimum quality criteria and were examined in detail: 16 reporting positive learning outcomes, 5 no difference, and 2 negative learning outcomes. Explanations underlying these observations were analysed, and factors contributing to successful uses of tablets are discussed. While we hypothesize how tablets can viably support children in completing a variety of learning tasks across a range of contexts and academic subjects, the fragmented nature of the current knowledge base and the scarcity of rigorous studies make it difficult to draw firm conclusions. The generalizability of evidence is limited, and detailed explanations as to how, or why, using tablets within certain activities can improve learning remain elusive. We recommend that future research moves beyond exploration towards systematic and in-depth investigations building on the existing findings documented here.
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
People respond better to robots than computer tablets delivering healthcare instructions. • We compared responses to a robot with responses to a computer in a health context. • Participants spoke and smiled more towards the robot than the tablet. • Participants were more likely to follow relaxation instructions when given by the robot. • The robot was rated more favourably and more likeable than the tablet. • Robots can offer advantages over a computer tablet in healthcare.
Image quality assessment: from error visibility to structural similarity. Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
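A global (single-window) version of the structural similarity index is sketched below to make the luminance/contrast/structure comparison concrete; practical SSIM uses a local sliding window with Gaussian weighting, which is omitted here, and the constants follow the common K1 = 0.01, K2 = 0.03 convention for 8-bit images.

```python
# Global SSIM over whole images (no sliding window), for illustration only.
import numpy as np

def ssim_global(x, y, L=255.0):
    x, y = x.astype(np.float64), y.astype(np.float64)
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2          # stabilizing constants
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))

img = np.random.default_rng(4).integers(0, 256, (64, 64))
noisy = np.clip(img + np.random.default_rng(5).normal(0, 10, img.shape), 0, 255)
print(ssim_global(img, img))     # 1.0 for identical images
print(ssim_global(img, noisy))   # < 1.0 once distortion is added
```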
Vision meets robotics: The KITTI dataset We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS/IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.
A tutorial on support vector regression In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective.
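As a small usage sketch of the epsilon-insensitive regression discussed in the tutorial, the snippet below fits scikit-learn's SVR on synthetic 1-D data; the kernel, C, and epsilon values are arbitrary illustrative choices rather than recommendations.

```python
# Fit an RBF-kernel support vector regressor on noisy sine data.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(6)
X = np.sort(rng.uniform(0, 5, (80, 1)), axis=0)
y = np.sin(X).ravel() + rng.normal(0, 0.1, 80)    # noisy 1-D regression target

model = SVR(kernel="rbf", C=10.0, epsilon=0.1)    # errors below epsilon are not penalized
model.fit(X, y)
print(model.predict(np.array([[1.0], [2.5]])))    # predictions at two test points
print(len(model.support_))                        # number of support vectors retained
```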
GameFlow: a model for evaluating player enjoyment in games Although player enjoyment is central to computer games, there is currently no accepted model of player enjoyment in games. There are many heuristics in the literature, based on elements such as the game interface, mechanics, gameplay, and narrative. However, there is a need to integrate these heuristics into a validated model that can be used to design, evaluate, and understand enjoyment in games. We have drawn together the various heuristics into a concise model of enjoyment in games that is structured by flow. Flow, a widely accepted model of enjoyment, includes eight elements that, we found, encompass the various heuristics from the literature. Our new model, GameFlow, consists of eight elements -- concentration, challenge, skills, control, clear goals, feedback, immersion, and social interaction. Each element includes a set of criteria for achieving enjoyment in games. An initial investigation and validation of the GameFlow model was carried out by conducting expert reviews of two real-time strategy games, one high-rating and one low-rating, using the GameFlow criteria. The result was a deeper understanding of enjoyment in real-time strategy games and the identification of the strengths and weaknesses of the GameFlow model as an evaluation tool. The GameFlow criteria were able to successfully distinguish between the high-rated and low-rated games and identify why one succeeded and the other failed. We concluded that the GameFlow model can be used in its current form to review games; further work will provide tools for designing and evaluating enjoyment in games.
Adapting visual category models to new domains Domain adaptation is an important emerging topic in computer vision. In this paper, we present one of the first studies of domain shift in the context of object recognition. We introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. While we focus our evaluation on object recognition tasks, the transform-based adaptation technique we develop is general and could be applied to nonimage data. Another contribution is a new multi-domain object database, freely available for download. We experimentally demonstrate the ability of our method to improve recognition on categories with few or no target domain labels and moderate to large changes in the imaging conditions.
A Web-Based Tool For Control Engineering Teaching In this article a new tool for control engineering teaching is presented. The tool was implemented using Java applets and is freely accessible through the Web. It allows the analysis and simulation of linear control systems and was created to complement the theoretical lectures in basic control engineering courses. The article is centered not only on the description of the tool but also on the methodology for using it and its evaluation in an electrical engineering degree. Two practical problems are included in the manuscript to illustrate the use of the main functions implemented. The developed web-based tool can be accessed through the link http://www.controlweb.cyc.ull.es. (C) 2006 Wiley Periodicals, Inc.
Adaptive Consensus Control for a Class of Nonlinear Multiagent Time-Delay Systems Using Neural Networks Because of the complexity of consensus control for nonlinear multiagent systems with state time-delay, most previous works have focused only on linear systems with input time-delay. An adaptive neural network (NN) consensus control method for a class of nonlinear multiagent systems with state time-delay is proposed in this paper. The approximation property of radial basis function neural networks (RBFNNs) is used to neutralize the uncertain nonlinear dynamics of the agents. An appropriate Lyapunov-Krasovskii functional, obtained from the derivative of an appropriate Lyapunov function, is used to compensate for the uncertainties of the unknown time delays. It is proved that the proposed approach guarantees convergence on the basis of Lyapunov stability theory. Simulation results for a nonlinear multiagent time-delay system and a multiple collaborative manipulators system show the effectiveness of the proposed consensus control algorithm.
An efficient scheduling scheme for mobile charger in on-demand wireless rechargeable sensor networks. Existing studies on wireless sensor networks (WSNs) have revealed that the limited battery capacity of sensor nodes (SNs) hinders their perpetual operation. Recent findings in the domain of wireless energy transfer (WET) have attracted a lot of attention from academia and industry as a way to address the lack of energy in WSNs. The main idea of WET is to restore the energy of SNs using one or more wireless mobile chargers (MCs), which leads to a new paradigm of wireless rechargeable sensor networks (WRSNs). The determination of an optimal order of charging the SNs (i.e., a charging schedule) in an on-demand WRSN is a well-known NP-hard problem. Moreover, care must be taken while designing the charging schedule of an MC, as requesting SNs introduce both spatial and temporal constraints. In this paper, we first present a Linear Programming (LP) formulation for the problem of scheduling an MC and then propose an efficient solution based on the gravitational search algorithm (GSA). Our method is presented with a novel agent representation scheme and an efficient fitness function. We perform extensive simulations on the proposed scheme to demonstrate its effectiveness over two state-of-the-art algorithms, namely first come first serve (FCFS) and nearest job next with preemption (NJNP). The simulation results reveal that the proposed scheme outperforms both existing algorithms in terms of charging latency. The virtue of our scheme is also proved by the well-known statistical test, analysis of variance (ANOVA), followed by post hoc analysis.
A Hierarchical Architecture Using Biased Min-Consensus for USV Path Planning This paper proposes a hierarchical architecture using the biased min-consensus (BMC) method to solve the path planning problem of an unmanned surface vessel (USV). We take the fixed-point monitoring mission as an example, where a series of intermediate monitoring points should be visited once by the USV. The whole framework incorporates a low-level layer planning the standard path between any two intermediate points, and a high-level layer determining their visiting sequence. First, the optimal standard path in terms of voyage time and risk measure is planned by the BMC protocol, given that the corresponding graph is constructed with node states and edge weights. The USV will avoid obstacles or keep a certain distance safely, and arrive at the target point quickly. It is proven theoretically that the state of the graph will converge to be stable after finite iterations, i.e., the optimal solution can be found by BMC with low calculation complexity. Second, by incorporating the constraint of intermediate points, their visiting sequence is optimized by BMC again with the reconstruction of a new virtual graph based on the former planned results. The extensive simulation results in various scenarios also validate the feasibility and effectiveness of our method for autonomous navigation.
1.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
0
Improving Dendritic Neuron Model With Dynamic Scale-Free Network-Based Differential Evolution Some recent research reports that a dendritic neuron model (DNM) can achieve better performance than traditional artificial neural networks (ANNs) on classification, prediction, and other problems when its parameters are well-tuned by a learning algorithm. However, the back-propagation algorithm (BP), as the most commonly used learning algorithm, intrinsically suffers from defects of slow convergence and e...
A study on the use of statistical tests for experimentation with neural networks: Analysis of parametric test conditions and non-parametric tests In this paper, we focus on the experimental analysis of the performance of artificial neural networks, using statistical tests, on the classification task. Particularly, we have studied whether the sample of results from multiple trials obtained by conventional artificial neural networks and support vector machines satisfies the necessary conditions for being analyzed through parametric tests. The study is conducted by considering three possibilities in classification experiments: random variation in the selection of test data, the selection of training data, and internal randomness in the learning algorithm. The results obtained show that the fulfillment of these conditions is problem-dependent and indefinite, which justifies the need for non-parametric statistics in the experimental analysis.
A Multi-Layered Immune System For Graph Planarization Problem This paper presents a new multi-layered artificial immune system architecture using ideas generated from the biological immune system for solving combinatorial optimization problems. The proposed methodology is composed of five layers. After expressing the problem in a suitable representation in the first layer, the search space and the features of the problem are estimated and extracted in the second and third layers, respectively. By taking advantage of the minimized search space from estimation and the heuristic information from extraction, the antibodies (or solutions) are evolved in the fourth layer, and finally the fittest antibody is exported. In order to demonstrate the efficiency of the proposed system, the graph planarization problem is tested. Simulation results based on several benchmark instances show that the proposed algorithm performs better than traditional algorithms.
From evolutionary computation to the evolution of things Evolution has provided a source of inspiration for algorithm designers since the birth of computers. The resulting field, evolutionary computation, has been successful in solving engineering tasks ranging in outlook from the molecular to the astronomical. Today, the field is entering a new phase as evolutionary algorithms that take place in hardware are developed, opening up new avenues towards autonomous machines that can adapt to their environment. We discuss how evolutionary computation compares with natural evolution and what its benefits are relative to other computing approaches, and we introduce the emerging area of artificial evolution in physical systems.
Implementing a GPU-based parallel MAX-MIN Ant System The MAX–MIN Ant System (MMAS) is one of the best-known Ant Colony Optimization (ACO) algorithms proven to be efficient at finding satisfactory solutions to many difficult combinatorial optimization problems. The slow-down in Moore’s law, and the availability of graphics processing units (GPUs) capable of conducting general-purpose computations at high speed, has sparked considerable research efforts into the development of GPU-based ACO implementations. In this paper, we discuss a range of novel ideas for improving the GPU-based parallel MMAS implementation, allowing it to better utilize the computing power offered by two subsequent Nvidia GPU architectures. Specifically, based on the weighted reservoir sampling algorithm we propose a novel parallel implementation of the node selection procedure, which is at the heart of the MMAS and other ACO algorithms. We also present a memory-efficient implementation of another key-component – the tabu list structure – which is used in the ACO’s solution construction stage. The proposed implementations, combined with the existing approaches, lead to a total of six MMAS variants, which are evaluated on a set of Traveling Salesman Problem (TSP) instances ranging from 198 to 3795 cities. The results show that our MMAS implementation is competitive with state-of-the-art GPU-based and multi-core CPU-based parallel ACO implementations: in fact, the times obtained for the Nvidia V100 Volta GPU were up to 7.18x and 21.79x smaller, respectively. The fastest of the proposed MMAS variants is able to generate over 1 million candidate solutions per second when solving a 1002-city instance. Moreover, we show that, combined with the 2-opt local search heuristic, the proposed parallel MMAS finds high-quality solutions for the TSP instances with up to 18,512 nodes.
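To make the node-selection step concrete, the sketch below shows a plain CPU-side roulette-wheel choice of the next city, drawn with probability proportional to pheromone^alpha times heuristic^beta over the unvisited cities; the paper's weighted-reservoir-sampling GPU implementation and tabu-list layout are not reproduced, and the small random instance is invented.

```python
# Probabilistic next-city selection at the heart of MMAS/ACO tour construction.
import numpy as np

def select_next(current, visited, pheromone, dist, alpha=1.0, beta=2.0,
                rng=np.random.default_rng(7)):
    n = pheromone.shape[0]
    candidates = [j for j in range(n) if j not in visited]
    weights = np.array([
        (pheromone[current, j] ** alpha) * ((1.0 / dist[current, j]) ** beta)
        for j in candidates
    ])
    probs = weights / weights.sum()
    return int(rng.choice(candidates, p=probs))

n = 6
rng = np.random.default_rng(8)
coords = rng.uniform(0, 1, (n, 2))
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1) + np.eye(n)  # avoid zero self-distance
pheromone = np.ones((n, n))        # uniform initial pheromone
tour, visited = [0], {0}
while len(tour) < n:
    nxt = select_next(tour[-1], visited, pheromone, dist)
    tour.append(nxt)
    visited.add(nxt)
print(tour)                        # one constructed tour over the toy instance
```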
Recent Advances in Evolutionary Computation Evolutionary computation has experienced a tremendous growth in the last decade in both theoretical analyses and industrial applications. Its scope has evolved beyond its original meaning of “biological evolution” toward a wide variety of nature inspired computational algorithms and techniques, including evolutionary, neural, ecological, social and economical computation, etc., in a unified framework. Many research topics in evolutionary computation nowadays are not necessarily “evolutionary”. This paper provides an overview of some recent advances in evolutionary computation that have been made in CERCIA at the University of Birmingham, UK. It covers a wide range of topics in optimization, learning and design using evolutionary approaches and techniques, and theoretical results in the computational time complexity of evolutionary algorithms. Some issues related to future development of evolutionary computation are also discussed.
Evolutionary computation: comments on the history and current state Evolutionary computation has started to receive significant attention during the last decade, although the origins can be traced back to the late 1950's. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) (with links to genetic programming (GP) and classifier systems (CS)), evolution strategies (ES), and evolutionary programming (EP) by analysis and comparison of their most important constituents (i.e. representations, variation operators, reproduction, and selection mechanism). Finally, we give a brief overview on the manifold of application domains, although this necessarily must remain incomplete
Robust Indoor Positioning Provided by Real-Time RSSI Values in Unmodified WLAN Networks The positioning methods based on received signal strength (RSS) measurements link the RSS values to the position of the mobile station (MS) to be located. Their accuracy depends on the suitability of the propagation models used for the actual propagation conditions. In indoor wireless networks, these propagation conditions are very difficult to predict due to the unwieldy and dynamic nature of the RSS. In this paper, we present a novel method which dynamically estimates the propagation models that best fit the propagation environments, by using only RSS measurements obtained in real time. This method is based on maximizing the compatibility of the MS-to-access-point (AP) distance estimates. Once the propagation models are estimated in real time, it is possible to accurately determine the distance between the MS and each AP. By means of these distance estimates, the location of the MS can be obtained by trilateration. The proposed method, coupled with simulations and measurements in a real indoor environment, demonstrates its feasibility and suitability, since it outperforms conventional RSS-based indoor location methods without using any radio map information or a calibration stage.
Energy-Optimized Partial Computation Offloading in Mobile-Edge Computing With Genetic Simulated-Annealing-Based Particle Swarm Optimization Smart mobile devices (SMDs) can meet users' high expectations by executing computational intensive applications but they only have limited resources, including CPU, memory, battery power, and wireless medium. To tackle this limitation, partial computation offloading can be used as a promising method to schedule some tasks of applications from resource-limited SMDs to high-performance edge servers. However, it brings communication overhead issues caused by limited bandwidth and inevitably increases the latency of tasks offloaded to edge servers. Therefore, it is highly challenging to achieve a balance between high-resource consumption in SMDs and high communication cost for providing energy-efficient and latency-low services to users. This work proposes a partial computation offloading method to minimize the total energy consumed by SMDs and edge servers by jointly optimizing the offloading ratio of tasks, CPU speeds of SMDs, allocated bandwidth of available channels, and transmission power of each SMD in each time slot. It jointly considers the execution time of tasks performed in SMDs and edge servers, and transmission time of data. It also jointly considers latency limits, CPU speeds, transmission power limits, available energy of SMDs, and the maximum number of CPU cycles and memories in edge servers. Considering these factors, a nonlinear constrained optimization problem is formulated and solved by a novel hybrid metaheuristic algorithm named genetic simulated annealing-based particle swarm optimization (GSP) to produce a close-to-optimal solution. GSP achieves joint optimization of computation offloading between a cloud data center and the edge, and resource allocation in the data center. Real-life data-based experimental results prove that it achieves lower energy consumption in less convergence time than its three typical peers.
Computer intrusion detection through EWMA for autocorrelated and uncorrelated data Reliability and quality of service from information systems have been threatened by cyber intrusions. To protect information systems from intrusions and thus assure reliability and quality of service, it is highly desirable to develop techniques that detect intrusions. Many intrusions manifest in anomalous changes in intensity of events occurring in information systems. In this study, we apply, tes...
Teaching-Learning-Based Optimization: An optimization method for continuous non-linear large scale problems An efficient optimization method called 'Teaching-Learning-Based Optimization (TLBO)' is proposed in this paper for finding global solutions to large-scale non-linear optimization problems. The proposed method is based on the effect of the influence of a teacher on the output of learners in a class. The basic philosophy of the method is explained in detail. The effectiveness of the method is tested on many benchmark problems with different characteristics, and the results are compared with those of other population-based methods.
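The sketch below walks through the two TLBO phases on a toy sphere function: the teacher phase moves each learner toward the best solution relative to the class mean, and the learner phase lets a learner move toward (or away from) a random peer depending on which of the two is better. Population size and iteration budget are illustrative assumptions, not settings from the paper.

```python
# Minimal TLBO loop: teacher phase followed by learner phase, greedy acceptance.
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(9)
n_learners, dim, iters = 20, 10, 100
X = rng.uniform(-5, 5, (n_learners, dim))

for _ in range(iters):
    fitness = np.array([sphere(x) for x in X])
    teacher = X[np.argmin(fitness)]
    mean = X.mean(axis=0)
    for i in range(n_learners):
        # Teacher phase: move toward the teacher, away from the class mean.
        TF = rng.integers(1, 3)                          # teaching factor in {1, 2}
        new = X[i] + rng.random(dim) * (teacher - TF * mean)
        if sphere(new) < sphere(X[i]):
            X[i] = new
        # Learner phase: interact with a randomly chosen peer.
        j = rng.integers(n_learners)
        if j != i:
            direction = X[j] - X[i] if sphere(X[j]) < sphere(X[i]) else X[i] - X[j]
            new = X[i] + rng.random(dim) * direction
            if sphere(new) < sphere(X[i]):
                X[i] = new

print(min(sphere(x) for x in X))   # best fitness after the run
```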
Understanding Taxi Service Strategies From Taxi GPS Traces Taxi service strategies, as the crowd intelligence of massive taxi drivers, are hidden in their historical time-stamped GPS traces. Mining GPS traces to understand the service strategies of skilled taxi drivers can benefit the drivers themselves, passengers, and city planners in a number of ways. This paper intends to uncover the efficient and inefficient taxi service strategies based on a large-scale GPS historical database of approximately 7600 taxis over one year in a city in China. First, we separate the GPS traces of individual taxi drivers and link them with the revenue generated. Second, we investigate the taxi service strategies from three perspectives, namely, passenger-searching strategies, passenger-delivery strategies, and service-region preference. Finally, we represent the taxi service strategies with a feature matrix and evaluate the correlation between service strategies and revenue, informing which strategies are efficient or inefficient. We predict the revenue of taxi drivers based on their strategies and achieve a prediction residual as low as 2.35 RMB/h, which demonstrates that the taxi service strategies extracted with our proposed approach well characterize the driving behavior and performance of taxi drivers.
Adaptive fuzzy tracking control for switched uncertain strict-feedback nonlinear systems. • Adaptive tracking control for switched strict-feedback nonlinear systems is proposed. • The generalized fuzzy hyperbolic model is used to approximate nonlinear functions. • The designed controller has fewer design parameters compared with existing methods.
Energy harvesting algorithm considering max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with low remaining energy always have a bad effect on the data rate, or max flow. These nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we assume an energy-harvesting WSN environment to investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy-harvesting WSN environments, sensor nodes can obtain energy replenishment by using MCs or by collecting energy from nature by themselves. In our research, we use MCs to improve the energy of the sensor nodes by performing multiple rounds of unified scheduling, and thereby increase the max flow at the sinks. Firstly, we model this problem as a Linear Program (LP) to find the max flow in a round of charging scheduling and prove that the problem is NP-hard. In order to solve the problem, we propose a heuristic approach: deploying MCs in units of paths, giving priority to the lowest-energy nodes. To reduce the energy consumption of MCs and increase the charging efficiency, we also take the optimization of MCs’ moving distance into our consideration. Finally, we extend the method to multiple rounds of scheduling, called BottleNeck. Simulation results show that BottleNeck performs well at increasing the max flow.
1.2
0.2
0.2
0.2
0.2
0.1
0.033333
0
0
0
0
0
0
0
Crowd sensing of traffic anomalies based on human mobility and social media The advances in mobile computing and social networking services enable people to probe the dynamics of a city. In this paper, we address the problem of detecting and describing traffic anomalies using crowd sensing with two forms of data, human mobility and social media. Traffic anomalies are caused by accidents, control, protests, sport events, celebrations, disasters and other events. Unlike existing traffic-anomaly-detection methods, we identify anomalies according to drivers' routing behavior on an urban road network. Here, a detected anomaly is represented by a sub-graph of a road network where drivers' routing behaviors significantly differ from their original patterns. We then try to describe the detected anomaly by mining representative terms from the social media that people posted when the anomaly happened. The system for detecting such traffic anomalies can benefit both drivers and transportation authorities, e.g., by notifying drivers approaching an anomaly and suggesting alternative routes, as well as supporting traffic jam diagnosis and dispersal. We evaluate our system with a GPS trajectory dataset generated by over 30,000 taxicabs over a period of 3 months in Beijing, and a dataset of tweets collected from WeiBo, a Twitter-like social site in China. The results demonstrate the effectiveness and efficiency of our system.
Knowledge harvesting in the big-data era The proliferation of knowledge-sharing communities such as Wikipedia and the progress in scalable information extraction from Web and text sources have enabled the automatic construction of very large knowledge bases. Endeavors of this kind include projects such as DBpedia, Freebase, KnowItAll, ReadTheWeb, and YAGO. These projects provide automatically constructed knowledge bases of facts about named entities, their semantic classes, and their mutual relationships. They contain millions of entities and hundreds of millions of facts about them. Such world knowledge in turn enables cognitive applications and knowledge-centric services like disambiguating natural-language text, semantic search for entities and relations in Web and enterprise data, and entity-oriented analytics over unstructured contents. Prominent examples of how knowledge bases can be harnessed include the Google Knowledge Graph and the IBM Watson question answering system. This tutorial presents state-of-the-art methods, recent advances, research opportunities, and open challenges along this avenue of knowledge harvesting and its applications. Particular emphasis will be on the twofold role of knowledge bases for big-data analytics: using scalable distributed algorithms for harvesting knowledge from Web and text sources, and leveraging entity-centric knowledge for deeper interpretation of and better intelligence with Big Data.
Reservoir computing approaches to recurrent neural network training Echo State Networks and Liquid State Machines introduced a new paradigm in artificial recurrent neural network (RNN) training, where an RNN (the reservoir) is generated randomly and only a readout is trained. The paradigm, becoming known as reservoir computing, greatly facilitated the practical application of RNNs and outperformed classical fully trained RNNs in many tasks. It has lately become a vivid research field with numerous extensions of the basic idea, including reservoir adaptation, thus broadening the initial paradigm to using different methods for training the reservoir and the readout. This review systematically surveys both current ways of generating/adapting the reservoirs and training different types of readouts. It offers a natural conceptual classification of the techniques, which transcends boundaries of the current “brand-names” of reservoir methods, and thus aims to help in unifying the field and providing the reader with a detailed “map” of it.
Comment on "On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes" Comparison of generative and discriminative classifiers is an everlasting topic. As an important contribution to this topic, based on their theoretical and empirical comparisons between the naïve Bayes classifier and linear logistic regression, Ng and Jordan (NIPS 841–848, 2001) claimed that there exist two distinct regimes of performance between the generative and discriminative classifiers with regard to the training-set size. In this paper, our empirical and simulation studies, as a complement to their work, however, suggest that the existence of the two distinct regimes may not be so reliable. In addition, for real-world datasets, so far there is no theoretically correct, general criterion for choosing between the discriminative and the generative approaches to classification of an observation x into a class y; the choice depends on the relative confidence we have in the correctness of the specification of either p(y|x) or p(x, y) for the data. This may to some extent explain why Efron (J Am Stat Assoc 70(352):892–898, 1975) and O'Neill (J Am Stat Assoc 75(369):154–160, 1980) prefer normal-based linear discriminant analysis (LDA) when no model mis-specification occurs, whereas other empirical studies may prefer linear logistic regression instead. Furthermore, we suggest that the pairing of either LDA assuming a common diagonal covariance matrix (diagonal LDA) or the naïve Bayes classifier with linear logistic regression may not be perfect, and hence claims derived from the comparison between diagonal LDA or the naïve Bayes classifier and linear logistic regression may not reliably generalise to all generative and discriminative classifiers.
Dest-ResNet: A Deep Spatiotemporal Residual Network for Hotspot Traffic Speed Prediction. With the ever-increasing urbanization process, traffic jams have become a common problem in metropolises around the world, making traffic speed prediction a crucial and fundamental task. This task is difficult due to the dynamic and intrinsic complexity of the traffic environment in urban cities, yet the emergence of crowd map query data sheds new light on it. In general, a burst of crowd map queries for the same destination in a short duration (called a "hotspot") could lead to traffic congestion. For example, queries of the Capital Gym burst on weekend evenings lead to traffic jams around the gym. However, unleashing the power of crowd map queries is challenging due to the innate spatiotemporal characteristics of the crowd queries. To bridge the gap, this paper firstly discovers hotspots underlying crowd map queries. These discovered hotspots address the spatiotemporal variations. Then Dest-ResNet (Deep spatiotemporal Residual Network) is proposed for hotspot traffic speed prediction. Dest-ResNet is a sequence learning framework that jointly deals with two sequences in different modalities, i.e., the traffic speed sequence and the query sequence. The main idea of Dest-ResNet is to learn to explain and amend the errors caused when the unimodal information is applied individually. In this way, Dest-ResNet addresses the temporal causal correlation between queries and the traffic speed. As a result, Dest-ResNet shows a 30% relative boost over the state-of-the-art methods on real-world datasets from Baidu Map.
Deep Autoencoder Neural Networks for Short-Term Traffic Congestion Prediction of Transportation Networks. Traffic congestion prediction is critical for implementing intelligent transportation systems for improving the efficiency and capacity of transportation networks. However, despite its importance, traffic congestion prediction is much less investigated than traffic flow prediction, partially due to the severe lack of large-scale high-quality traffic congestion data and advanced algorithms. This paper proposes an accessible and general workflow to acquire large-scale traffic congestion data and to create traffic congestion datasets based on image analysis. With this workflow we create a dataset named Seattle Area Traffic Congestion Status (SATCS) based on traffic congestion map snapshots from a publicly available online traffic service provider, the Washington State Department of Transportation. We then propose a deep autoencoder-based neural network model with symmetrical layers for the encoder and the decoder to learn temporal correlations of a transportation network and predict traffic congestion. Our experimental results on the SATCS dataset show that the proposed DCPN model can efficiently and effectively learn temporal relationships of congestion levels of the transportation network for traffic congestion forecasting. Our method outperforms two other state-of-the-art neural network models in prediction performance, generalization capability, and computation efficiency.
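As a rough illustration of the symmetric encoder-decoder idea described in the abstract above, the following Python sketch builds a small autoencoder in PyTorch. It is not the authors' DCPN architecture; the layer widths and the 128-dimensional congestion-status input are assumptions made only for this example.

import torch
import torch.nn as nn

class SymmetricAutoencoder(nn.Module):
    # Encoder and decoder mirror each other; widths are illustrative, not the paper's.
    def __init__(self, n_segments=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_segments, 64), nn.ReLU(),
                                     nn.Linear(64, 16), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                                     nn.Linear(64, n_segments))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = SymmetricAutoencoder()
batch = torch.rand(32, 128)                      # hypothetical congestion-level snapshots
loss = nn.functional.mse_loss(model(batch), batch)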
A survey on machine learning for data fusion. • We sum up a group of main challenges that data fusion might face. • We propose a thorough list of requirements to evaluate data fusion methods. • We review the literature of data fusion based on machine learning. • We comment on how a machine learning method can ameliorate fusion performance. • We present significant open issues and valuable future research directions.
Big data and its technical challenges Exploring the inherent technical challenges in realizing the potential of Big Data.
Predicting Multi-step Citywide Passenger Demands Using Attention-based Neural Networks. Predicting passenger pickup/dropoff demands based on historical mobility trips has been of great importance towards better vehicle distribution for the emerging mobility-on-demand (MOD) services. Prior works focused on predicting next-step passenger demands at selected locations or hotspots. However, we argue that multi-step citywide passenger demands encapsulate both time-varying demand trends and global statuses, and hence are more beneficial to avoiding demand-service mismatching and developing effective vehicle distribution/scheduling strategies. In this paper, we propose an end-to-end deep neural network solution to the prediction task. We employ the encoder-decoder framework based on convolutional and ConvLSTM units to identify complex features that capture spatiotemporal influences and pickup-dropoff interactions on citywide passenger demands. A novel attention model is incorporated to emphasize the effects of latent citywide mobility regularities. We evaluate our proposed method using real-world mobility trips (taxis and bikes) and the experimental results show that our method achieves higher prediction accuracy than the adaptations of the state-of-the-art approaches.
Mobile Edge Computing Enabled 5G Health Monitoring for Internet of Medical Things: A Decentralized Game Theoretic Approach The prompt evolution of Internet of Medical Things (IoMT) promotes pervasive in-home health monitoring networks. However, excessive requirements of patients result in insufficient spectrum resources and communication overload. Mobile Edge Computing (MEC) enabled 5G health monitoring is conceived as a favorable paradigm to tackle such an obstacle. In this paper, we construct a cost-efficient in-home health monitoring system for IoMT by dividing it into two sub-networks, i.e., intra-Wireless Body Area Networks (WBANs) and beyond-WBANs. Highlighting the characteristics of IoMT, the cost of patients depends on medical criticality, Age of Information (AoI) and energy consumption. For intra-WBANs, a cooperative game is formulated to allocate the wireless channel resources. While for beyond-WBANs, considering the individual rationality and potential selfishness, a decentralized non-cooperative game is proposed to minimize the system-wide cost in IoMT. We prove that the proposed algorithm can reach a Nash equilibrium. In addition, the upper bound of the algorithm time complexity and the number of patients benefiting from MEC is theoretically derived. Performance evaluations demonstrate the effectiveness of our proposed algorithm with respect to the system-wide cost and the number of patients benefiting from MEC.
DEAP: A Database for Emotion Analysis Using Physiological Signals We present a multimodal data set for the analysis of human affective states. The electroencephalogram (EEG) and peripheral physiological signals of 32 participants were recorded as each watched 40 one-minute long excerpts of music videos. Participants rated each video in terms of the levels of arousal, valence, like/dislike, dominance, and familiarity. For 22 of the 32 participants, frontal face video was also recorded. A novel method for stimuli selection is proposed using retrieval by affective tags from the last.fm website, video highlight detection, and an online assessment tool. An extensive analysis of the participants' ratings during the experiment is presented. Correlates between the EEG signal frequencies and the participants' ratings are investigated. Methods and results are presented for single-trial classification of arousal, valence, and like/dislike ratings using the modalities of EEG, peripheral physiological signals, and multimedia content analysis. Finally, decision fusion of the classification results from different modalities is performed. The data set is made publicly available and we encourage other researchers to use it for testing their own affective state estimation methods.
Point-to-point navigation of underactuated ships This paper considers point-to-point navigation of underactuated ships where only surge force and yaw moment are available. In general, a ship’s sway motion satisfies a passive-boundedness property which is expressed in terms of a Lyapunov function. Under this kind of consideration, a certain concise nonlinear scheme is proposed to guarantee the closed-loop system to be uniformly ultimately bounded (UUB). A numerical simulation study is also performed to illustrate the effectiveness of the proposed scheme.
Efficient and Low Latency Detection of Intruders in Mobile Active Authentication. Active authentication (AA) refers to the problem of continuously verifying the identity of a mobile device user for the purpose of securing the device. We address the problem of quickly detecting intrusions in mobile AA systems with low false detection rates and high resource efficiency. Bayesian and MiniMax versions of the quickest change detection (QCD) algorithms are introduced to quickly ...
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.047572
0.04
0.04
0.04
0.04
0.04
0.04
0.008012
0.000133
0
0
0
0
0
An improved long short-term memory networks with Takagi-Sugeno fuzzy for traffic speed prediction considering abnormal traffic situation. Traffic speed prediction is an emerging paradigm for achieving a better transportation system in smart cities and improving the heavy traffic management in the intelligent transportation system (ITS). Accurate traffic speed prediction is affected by many contextual factors such as abnormal traffic conditions, traffic incidents, lane closures due to construction or events, and traffic congestion. To overcome these problems, we propose a new method named fuzzy optimized long short-term memory (FOLSTM) neural network for long-term traffic speed prediction. The FOLSTM technique is a hybrid method composed of computational intelligence (CI), machine learning (ML), and metaheuristic techniques, capable of predicting the speed for macroscopic traffic key parameters. First, the proposed hybrid unsupervised learning method, agglomerated hierarchical K-means (AHK) clustering, divides the input samples into a group of clusters. Second, based on the cluster parameters, a Gaussian bell-shaped fuzzy membership function calculates the degree of membership (high, medium, or low) for each cluster using Takagi-Sugeno fuzzy rules. Finally, the whale optimization algorithm (WOA) is used in the LSTM to optimize the parameters obtained by the fuzzy rules and calculate the optimal weight value. FOLSTM estimates accurate traffic speeds from abnormal traffic data, overcoming their nonlinear characteristics. Experimental results demonstrated that our proposed method outperforms the state-of-the-art approaches in terms of metrics such as mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE).
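The Gaussian bell-shaped membership step mentioned in the abstract above can be sketched in a few lines of Python. The cluster centres, the width sigma, and the three linguistic labels are hypothetical values chosen only for illustration, not parameters from the paper.

import numpy as np

def gaussian_membership(x, centre, sigma):
    # Degree of membership of x in a Gaussian bell-shaped fuzzy set.
    return float(np.exp(-0.5 * ((x - centre) / sigma) ** 2))

# Hypothetical cluster centres for three speed classes (km/h).
centres = {"low": 15.0, "medium": 45.0, "high": 80.0}
speed = 37.0
degrees = {label: gaussian_membership(speed, c, sigma=12.0) for label, c in centres.items()}
print(degrees)  # the dominant label would feed the Takagi-Sugeno rule base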
Knowledge harvesting in the big-data era The proliferation of knowledge-sharing communities such as Wikipedia and the progress in scalable information extraction from Web and text sources have enabled the automatic construction of very large knowledge bases. Endeavors of this kind include projects such as DBpedia, Freebase, KnowItAll, ReadTheWeb, and YAGO. These projects provide automatically constructed knowledge bases of facts about named entities, their semantic classes, and their mutual relationships. They contain millions of entities and hundreds of millions of facts about them. Such world knowledge in turn enables cognitive applications and knowledge-centric services like disambiguating natural-language text, semantic search for entities and relations in Web and enterprise data, and entity-oriented analytics over unstructured contents. Prominent examples of how knowledge bases can be harnessed include the Google Knowledge Graph and the IBM Watson question answering system. This tutorial presents state-of-the-art methods, recent advances, research opportunities, and open challenges along this avenue of knowledge harvesting and its applications. Particular emphasis will be on the twofold role of knowledge bases for big-data analytics: using scalable distributed algorithms for harvesting knowledge from Web and text sources, and leveraging entity-centric knowledge for deeper interpretation of and better intelligence with Big Data.
Reservoir computing approaches to recurrent neural network training Echo State Networks and Liquid State Machines introduced a new paradigm in artificial recurrent neural network (RNN) training, where an RNN (the reservoir) is generated randomly and only a readout is trained. The paradigm, becoming known as reservoir computing, greatly facilitated the practical application of RNNs and outperformed classical fully trained RNNs in many tasks. It has lately become a vivid research field with numerous extensions of the basic idea, including reservoir adaptation, thus broadening the initial paradigm to using different methods for training the reservoir and the readout. This review systematically surveys both current ways of generating/adapting the reservoirs and training different types of readouts. It offers a natural conceptual classification of the techniques, which transcends boundaries of the current “brand-names” of reservoir methods, and thus aims to help in unifying the field and providing the reader with a detailed “map” of it.
Comment on "On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes" Comparison of generative and discriminative classifiers is an ever-lasting topic. As an important contribution to this topic, based on their theoretical and empirical comparisons between the naïve Bayes classifier and linear logistic regression, Ng and Jordan (NIPS 841---848, 2001) claimed that there exist two distinct regimes of performance between the generative and discriminative classifiers with regard to the training-set size. In this paper, our empirical and simulation studies, as a complement of their work, however, suggest that the existence of the two distinct regimes may not be so reliable. In addition, for real world datasets, so far there is no theoretically correct, general criterion for choosing between the discriminative and the generative approaches to classification of an observation x into a class y; the choice depends on the relative confidence we have in the correctness of the specification of either p(y|x) or p(x, y) for the data. This can be to some extent a demonstration of why Efron (J Am Stat Assoc 70(352):892---898, 1975) and O'Neill (J Am Stat Assoc 75(369):154---160, 1980) prefer normal-based linear discriminant analysis (LDA) when no model mis-specification occurs but other empirical studies may prefer linear logistic regression instead. Furthermore, we suggest that pairing of either LDA assuming a common diagonal covariance matrix (LDA-驴) or the naïve Bayes classifier and linear logistic regression may not be perfect, and hence it may not be reliable for any claim that was derived from the comparison between LDA-驴 or the naïve Bayes classifier and linear logistic regression to be generalised to all generative and discriminative classifiers.
Dest-ResNet: A Deep Spatiotemporal Residual Network for Hotspot Traffic Speed Prediction. With the ever-increasing urbanization process, the traffic jam has become a common problem in the metropolises around the world, making the traffic speed prediction a crucial and fundamental task. This task is difficult due to the dynamic and intrinsic complexity of the traffic environment in urban cities, yet the emergence of crowd map query data sheds new light on it. In general, a burst of crowd map queries for the same destination in a short duration (called "hotspot'') could lead to traffic congestion. For example, queries of the Capital Gym burst on weekend evenings lead to traffic jams around the gym. However, unleashing the power of crowd map queries is challenging due to the innate spatiotemporal characteristics of the crowd queries. To bridge the gap, this paper firstly discovers hotspots underlying crowd map queries. These discovered hotspots address the spatiotemporal variations. Then Dest-ResNet (Deep spatiotemporal Residual Network) is proposed for hotspot traffic speed prediction. Dest-ResNet is a sequence learning framework that jointly deals with two sequences in different modalities, i.e., the traffic speed sequence and the query sequence. The main idea of Dest-ResNet is to learn to explain and amend the errors caused when the unimodal information is applied individually. In this way, Dest-ResNet addresses the temporal causal correlation between queries and the traffic speed. As a result, Dest-ResNet shows a 30% relative boost over the state-of-the-art methods on real-world datasets from Baidu Map.
Deep Autoencoder Neural Networks for Short-Term Traffic Congestion Prediction of Transportation Networks. Traffic congestion prediction is critical for implementing intelligent transportation systems for improving the efficiency and capacity of transportation networks. However, despite its importance, traffic congestion prediction is much less investigated than traffic flow prediction, partially due to the severe lack of large-scale high-quality traffic congestion data and advanced algorithms. This paper proposes an accessible and general workflow to acquire large-scale traffic congestion data and to create traffic congestion datasets based on image analysis. With this workflow we create a dataset named Seattle Area Traffic Congestion Status (SATCS) based on traffic congestion map snapshots from a publicly available online traffic service provider, the Washington State Department of Transportation. We then propose a deep autoencoder-based neural network model with symmetrical layers for the encoder and the decoder to learn temporal correlations of a transportation network and predict traffic congestion. Our experimental results on the SATCS dataset show that the proposed DCPN model can efficiently and effectively learn temporal relationships of congestion levels of the transportation network for traffic congestion forecasting. Our method outperforms two other state-of-the-art neural network models in prediction performance, generalization capability, and computation efficiency.
A survey on machine learning for data fusion. • We sum up a group of main challenges that data fusion might face. • We propose a thorough list of requirements to evaluate data fusion methods. • We review the literature of data fusion based on machine learning. • We comment on how a machine learning method can ameliorate fusion performance. • We present significant open issues and valuable future research directions.
Discovering spatio-temporal causal interactions in traffic data streams The detection of outliers in spatio-temporal traffic data is an important research problem in the data mining and knowledge discovery community. However to the best of our knowledge, the discovery of relationships, especially causal interactions, among detected traffic outliers has not been investigated before. In this paper we propose algorithms which construct outlier causality trees based on temporal and spatial properties of detected outliers. Frequent substructures of these causality trees reveal not only recurring interactions among spatio-temporal outliers, but potential flaws in the design of existing traffic networks. The effectiveness and strength of our algorithms are validated by experiments on a very large volume of real taxi trajectories in an urban road network.
A new approach for dynamic fuzzy logic parameter tuning in Ant Colony Optimization and its application in fuzzy control of a mobile robot The central idea is to avoid or slow down full convergence through the dynamic variation of parameters. The performance of different ACO variants was observed to choose one as the basis of the proposed approach. A convergence fuzzy controller with the objective of maintaining diversity to avoid premature convergence was created. Ant Colony Optimization is a population-based meta-heuristic that exploits a form of past performance memory that is inspired by the foraging behavior of real ants. The behavior of the Ant Colony Optimization algorithm is highly dependent on the values defined for its parameters. Adaptation and parameter control are recurring themes in the field of bio-inspired optimization algorithms. The present paper explores a new fuzzy approach for diversity control in Ant Colony Optimization. The main idea is to avoid or slow down full convergence through the dynamic variation of a particular parameter. The performance of different variants of the Ant Colony Optimization algorithm is analyzed to choose one as the basis to the proposed approach. A convergence fuzzy logic controller with the objective of maintaining diversity at some level to avoid premature convergence is created. Encouraging results of the proposed method are presented on several traveling salesman problem instances and on the design of fuzzy controllers, in particular the optimization of membership functions for unicycle mobile robot trajectory control.
Adaptive Navigation Support Adaptive navigation support is a specific group of technologies that support user navigation in hyperspace, by adapting to the goals, preferences and knowledge of the individual user. These technologies, originally developed in the field of adaptive hypermedia, are becoming increasingly important in several adaptive Web applications, ranging from Web-based adaptive hypermedia to adaptive virtual reality. This chapter provides a brief introduction to adaptive navigation support, reviews major adaptive navigation support technologies and mechanisms, and illustrates these with a range of examples.
Learning to Predict Driver Route and Destination Intent For many people, driving is a routine activity where people drive to the same destinations using the same routes on a regular basis. Many drivers, for example, will drive to and from work along a small set of routes, at about the same time every day of the working week. Similarly, although a person may shop on different days or at different times, they will often visit the same grocery store(s). In this paper, we present a novel approach to predicting driver intent that exploits the predictable nature of everyday driving. Our approach predicts a driver's intended route and destination through the use of a probabilistic model learned from observation of their driving habits. We show that by using a low-cost GPS sensor and a map database, it is possible to build a hidden Markov model (HMM) of the routes and destinations used by the driver. Furthermore, we show that this model can be used to make accurate predictions of the driver's destination and route through on-line observation of their GPS position during the trip. We present a thorough evaluation of our approach using a corpus of almost a month of real, everyday driving. Our results demonstrate the effectiveness of the approach, achieving approximately 98% accuracy in most cases. Such high performance suggests that the method can be harnessed for improved safety monitoring, route planning taking into account traffic density, and better trip duration prediction
A Minimal Set Of Coordinates For Describing Humanoid Shoulder Motion The kinematics of the anatomical shoulder are analysed and modelled as a parallel mechanism similar to a Stewart platform. A new method is proposed to describe the shoulder kinematics with minimal coordinates and solve the indeterminacy. The minimal coordinates are defined from bony landmarks and the scapulothoracic kinematic constraints. Independent from one another, they uniquely characterise the shoulder motion. A humanoid mechanism is then proposed with identical kinematic properties. It is then shown how minimal coordinates can be obtained for this mechanism and how the coordinates simplify both the motion-planning task and trajectory-tracking control. Lastly, the coordinates are also shown to have an application in the field of biomechanics where they can be used to model the scapulohumeral rhythm.
Massive MIMO Antenna Selection: Switching Architectures, Capacity Bounds, and Optimal Antenna Selection Algorithms. Antenna selection is a multiple-input multiple-output (MIMO) technology, which uses radio frequency (RF) switches to select a good subset of antennas. Antenna selection can alleviate the requirement on the number of RF transceivers, thus being attractive for massive MIMO systems. In massive MIMO antenna selection systems, RF switching architectures need to be carefully considered. In this paper, w...
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0.028571
0
0
0
0
0
0
A Vehicle-UAV Operation Scheme For Instant Delivery Combining ground vehicles with unmanned aerial vehicles (UAVs) for instant delivery greatly expands its coverage range and broadens its applications. In this paper, a novel operation scheme with vehicles and UAVs for instant delivery is presented. This scheme consists of four key processes: the locations of vehicle stops, the allocation of customers to vehicle stops, the allocation of customers to UAVs, and route planning for vehicles. We formulate a capacitated set covering location model to determine the number and feasible locations of vehicle stops. Moreover, we present a multilevel model to optimize the decisions on the remaining processes and finally determine the locations of vehicle stops while minimizing the number of vehicles dispatched and the total travel time. Furthermore, we propose two advanced ant colony optimization (ACO) algorithms by introducing variable visibility and multilevel feedback pheromones. Experiments demonstrate the effectiveness of the operation scheme with vehicles and UAVs.
On the min-cost Traveling Salesman Problem with Drone • We added a paragraph explaining the waiting costs based on the reviewer's comment. • We fixed all the minor issues. • We have updated the latest references following the editor's comments. • We prepared the paper following the editor's comments.
Algorithms and experiments on routing of unmanned aerial vehicles with mobile recharging stations. We study the problem of planning a tour for an energy-limited Unmanned Aerial Vehicle (UAV) to visit a set of sites in the least amount of time. We envision scenarios where the UAV can be recharged at a site or along an edge either by landing on stationary recharging stations or on Unmanned Ground Vehicles (UGVs) acting as mobile recharging stations. This leads to a new variant of the Traveling Salesperson Problem (TSP) with mobile recharging stations. We present an algorithm that finds not only the order in which to visit the sites but also when and where to land on the charging stations to recharge. Our algorithm plans tours for the UGVs as well as determines the best locations to place stationary charging stations. We study three variants for charging: Multiple stationary charging stations, single mobile charging station, and multiple mobile charging stations. As the problems we study are nondeterministic polynomial time (NP)-Hard, we present a practical solution using Generalized TSP that finds the optimal solution that minimizes the total time, subject to the discretization of battery levels. If the UGVs are slower than the UAVs, then the algorithm also finds the minimum number of UGVs required to support the UAV mission such that the UAV is not required to wait for the UGV. Our simulation results show that the running time is acceptable for reasonably sized instances in practice. We evaluate the performance of our algorithm through simulations and proof-of-concept field experiments with a fully autonomous system of one UAV and UGV.
Cost-Profit Trade-Off for Optimally Locating Automotive Service Firms Under Uncertainty This work investigates the problem of optimally locating an automotive service firm (ASF) subject to stochastic customer demands, varying setup cost and regional constraints. The goal is to minimize the transportation cost while maintaining the specified profit of the ASF. This work studies two variants of the problem: ASF location with known demand probability distributions and with partial demand information, i.e., only the support and mean of the customer demands are known. For the former, a chance-constrained program is formulated that improves an existing model, and then an equivalent deterministic nonlinear program is constructed based on our property analysis results. For the latter, a novel distribution-free model is developed. The proposed models are solved by solver LINGO. Computational results on the benchmark examples show that: i) for the first variant, the proposed approach outperforms the existing one; ii) for the second one, the proposed distribution-free model can effectively handle stochastic customer demands without complete probability distributions; and iii) the results of the distribution-free model are slightly worse than those of the deterministic nonlinear one, but the former is more cost-efficient for the practical ASF location as it is less expensive in obtaining demand information. Moreover, the proposed models and approaches are extended to address a multi-ASF location allocation under demand uncertainty.
Real-Time Adaptive Intelligent Control System for Quadcopter Unmanned Aerial Vehicles With Payload Uncertainties A novel bidirectional fuzzy brain emotional learning (BFBEL) controller is proposed to control a class of uncertain nonlinear systems such as the quadcopter unmanned aerial vehicle (QUAV). The proposed BFBEL controller is nonmodel-based and has a simplified fuzzy neural network structure and adapts with a novel bidirectional brain emotional learning algorithm. It is applied to control all six degr...
Vehicle Routing Problems for Drone Delivery. Unmanned aerial vehicles, or drones, have the potential to significantly reduce the cost and time of making last-mile deliveries and responding to emergencies. Despite this potential, little work has gone into developing vehicle routing problems (VRPs) specifically for drone delivery scenarios. Existing VRPs are insufficient for planning drone deliveries: either multiple trips to the depot are not permitted, leading to solutions with excess drones, or the effect of battery and payload weight on energy consumption is not considered, leading to costly or infeasible routes. We propose two multitrip VRPs for drone delivery that address both issues. One minimizes costs subject to a delivery time limit, while the other minimizes the overall delivery time subject to a budget constraint. We mathematically derive and experimentally validate an energy consumption model for multirotor drones, demonstrating that energy consumption varies approximately linearly with payload and battery weight. We use this approximation to derive mixed integer linear programs for our VRPs. We propose a cost function that considers our energy consumption model and drone reuse, and apply it in a simulated annealing (SA) heuristic for finding suboptimal solutions to practical scenarios. To assist drone delivery practitioners with balancing cost and delivery time, the SA heuristic is used to show that the minimum cost has an inverse exponential relationship with the delivery time limit, and the minimum overall delivery time has an inverse exponential relationship with the budget. Numerical results confirm the importance of reusing drones and optimizing battery size in drone delivery VRPs.
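The abstract above reports that multirotor energy consumption varies approximately linearly with payload and battery weight. A minimal sketch of such an affine energy model follows; the coefficients are placeholders for illustration, not the values derived in the paper.

def drone_energy_joules(flight_time_s, payload_kg, battery_kg,
                        base_power_w=200.0, power_per_kg_w=50.0):
    # Power draw modelled as an affine function of carried weight,
    # mirroring the "approximately linear" finding; coefficients are illustrative.
    power_w = base_power_w + power_per_kg_w * (payload_kg + battery_kg)
    return power_w * flight_time_s

print(drone_energy_joules(600.0, payload_kg=2.0, battery_kg=1.5))  # energy for a 10-minute leg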
Accurate Self-Localization in RFID Tag Information Grids Using FIR Filtering Grid navigation spaces nested with the radio-frequency identification (RFID) tags are promising for industrial and other needs, because each tag can deliver information about a local two-dimensional or three-dimensional surrounding. The approach, however, requires high accuracy in vehicle self-localization. Otherwise, errors may lead to collisions, possibly even fatal ones. We propose a new extended finite impulse response (EFIR) filtering algorithm and show that it meets this need. The EFIR filter requires an optimal averaging interval, but does not involve the noise statistics which are often not well known to the engineer. It is more accurate than the extended Kalman filter (EKF) under real operation conditions and its iterative algorithm has the Kalman form. Better performance of the proposed EFIR filter is demonstrated based on extensive simulations in a comparison to EKF, which is widely used in RFID tag grids. We also show that errors in noise covariances may provoke divergence in EKF, whereas the EFIR filter remains stable and is thus more robust.
Evolutionary computation: comments on the history and current state Evolutionary computation has started to receive significant attention during the last decade, although the origins can be traced back to the late 1950s. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) (with links to genetic programming (GP) and classifier systems (CS)), evolution strategies (ES), and evolutionary programming (EP) by analysis and comparison of their most important constituents (i.e. representations, variation operators, reproduction, and selection mechanism). Finally, we give a brief overview of the manifold of application domains, although this necessarily must remain incomplete.
Supporting social navigation on the World Wide Web This paper discusses a navigation behavior on Internet information services, in particular the World Wide Web, which is characterized by pointing out of information using various communication tools. We call this behavior social navigation as it is based on communication and interaction with other users, be that through email, or any other means of communication. Social navigation phenomena are quite common although most current tools (like Web browsers or email clients) offer very little support for it. We describe why social navigation is useful and how it can be better supported in future systems. We further describe two prototype systems that, although originally not designed explicitly as tools for social navigation, provide features that are typical for social navigation systems. One of these systems, the Juggler system, is a combination of a textual virtual environment and a Web client. The other system is a prototype of a Web hotlist organizer, called Vortex. We use both systems to describe fundamental principles of social navigation systems.
Two Algorithms For Constructing A Delaunay Triangulation This paper provides a unified discussion of the Delaunay triangulation. Its geometric properties are reviewed and several applications are discussed. Two algorithms are presented for constructing the triangulation over a planar set of N points. The first algorithm uses a divide-and-conquer approach. It runs in O(N log N) time, which is asymptotically optimal. The second algorithm is iterative and requires O(N^2) time in the worst case. However, its average case performance is comparable to that of the first algorithm.
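Neither of the two algorithms is spelled out in the abstract above, so as a neutral illustration the sketch below simply constructs a Delaunay triangulation of a planar point set with SciPy's Qhull wrapper. It is a stand-in for experimentation, not the divide-and-conquer or iterative algorithm from the paper.

import numpy as np
from scipy.spatial import Delaunay

points = np.random.default_rng(0).random((20, 2))   # N points in the plane
tri = Delaunay(points)

# Each row of tri.simplices lists the indices of one triangle's three vertices.
print(len(tri.simplices), "triangles")
print(tri.simplices[:5])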
Design, Implementation, and Experimental Results of a Quaternion-Based Kalman Filter for Human Body Motion Tracking Real-time tracking of human body motion is an important technology in synthetic environments, robotics, and other human-computer interaction applications. This paper presents an extended Kalman filter designed for real-time estimation of the orientation of human limb segments. The filter processes data from small inertial/magnetic sensor modules containing triaxial angular rate sensors, accelerometers, and magnetometers. The filter represents rotation using quaternions rather than Euler angles or axis/angle pairs. Preprocessing of the acceleration and magnetometer measurements using the Quest algorithm produces a computed quaternion input for the filter. This preprocessing reduces the dimension of the state vector and makes the measurement equations linear. Real-time implementation and testing results of the quaternion-based Kalman filter are presented. Experimental results validate the filter design, and show the feasibility of using inertial/magnetic sensor modules for real-time human body motion tracking
Wireless Networks with RF Energy Harvesting: A Contemporary Survey Radio frequency (RF) energy transfer and harvesting techniques have recently become alternative methods to power the next generation wireless networks. As this emerging technology enables proactive energy replenishment of wireless devices, it is advantageous in supporting applications with quality of service (QoS) requirements. In this paper, we present a comprehensive literature review on the research progresses in wireless networks with RF energy harvesting capability, referred to as RF energy harvesting networks (RF-EHNs). First, we present an overview of the RF-EHNs including system architecture, RF energy harvesting techniques and existing applications. Then, we present the background in circuit design as well as the state-of-the-art circuitry implementations, and review the communication protocols specially designed for RF-EHNs. We also explore various key design issues in the development of RFEHNs according to the network types, i.e., single-hop networks, multi-antenna networks, relay networks, and cognitive radio networks. Finally, we envision some open research directions.
An indoor localization solution using Bluetooth RSSI and multiple sensors on a smartphone. In this paper, we propose an indoor positioning system using a Bluetooth receiver, an accelerometer, a magnetic field sensor, and a barometer on a smartphone. The Bluetooth receiver is used to estimate distances from beacons. The accelerometer and magnetic field sensor are used to trace the movement of moving people in the given space. The horizontal location of the person is determined by received signal strength indications (RSSIs) and the traced movement. The barometer is used to measure the vertical position where a person is located. By combining RSSIs, the traced movement, and the vertical position, the proposed system estimates the indoor position of moving people. In experiments, the proposed approach showed excellent performance in localization with an overall error of 4.8%.
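The abstract above states that Bluetooth RSSI is used to estimate distances from beacons but does not give the formula. A common choice is the log-distance path-loss model sketched below; treating it as the paper's model is an assumption, and the calibration constants are hypothetical.

def rssi_to_distance(rssi_dbm, rssi_at_1m_dbm=-59.0, path_loss_exponent=2.0):
    # Log-distance path-loss model: rssi = rssi_at_1m - 10 * n * log10(d).
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10 * path_loss_exponent))

print(round(rssi_to_distance(-71.0), 2))  # about 3.98 m under these assumed constants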
Energy harvesting algorithm considering max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with poor energy always have a detrimental effect on the data rate, or max flow. These nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we assume an energy harvesting WSNs environment to investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy harvesting WSNs environments, sensor nodes can obtain energy replenishment by using MCs or collecting energy from nature by themselves. In our research, we use MCs to improve the energy of the sensor nodes by performing multiple rounds of unified scheduling, and finally achieve the purpose of increasing the max flow at sinks. Firstly, we model this problem as a Linear Program (LP) to search for the max flow in a round of charging scheduling and prove that the problem is NP-hard. In order to solve the problem, we propose a heuristic approach: deploying MCs in units of paths with the lowest energy node priority. To reduce the energy consumption of MCs and increase the charging efficiency, we also take the optimization of MCs' moving distance into our consideration. Finally, we extend the method to multiple rounds of scheduling called BottleNeck. Simulation results show that BottleNeck performs well at increasing the max flow.
1.2
0.2
0.2
0.2
0.2
0.04
0
0
0
0
0
0
0
0
An improved sine–cosine algorithm based on orthogonal parallel information for global optimization Many real-life optimization applications are characterized by the presence of difficulties such as discontinuity, mixed continuity–discontinuity, prohibited zones, and non-smooth and non-convex cost functions. In this sense, traditional optimization algorithms may become stuck in local optima when dealing with such characteristics. Recently, the sine–cosine algorithm (SCA) has been introduced as a global optimization technique for solving optimization problems. However, as a new algorithm, it may become trapped in local optima for two reasons. The first is that the diversity of solutions may not be maintained efficiently. The second is that no emphasizing strategy is employed to guide the search toward the promising region. In this paper, a novel SCA based on orthogonal parallel information (SCA-OPI) for solving numerical optimization problems is proposed. In SCA-OPI, multiple-orthogonal parallel information is introduced to provide two advantages: the orthogonal aspect of the information enables the algorithm to maintain diversity and enhances the exploration search, while the parallelized scheme enables the algorithm to reach promising solutions and emphasizes the exploitation search. Further, an experience-based opposition direction strategy is presented to preserve the exploration ability. The proposed SCA-OPI algorithm is evaluated and investigated on different benchmark problems and some engineering applications. The results affirm that the SCA-OPI algorithm can achieve highly competitive performance compared with different algorithms, especially in terms of optimality and reliability.
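For context, the position update of the basic sine-cosine algorithm that SCA-OPI builds on can be sketched as below. This is the standard SCA operator, not the orthogonal parallel variant proposed in the abstract above; the population, bounds, and parameters are illustrative.

import numpy as np

def sca_step(positions, best, t, t_max, a=2.0, rng=None):
    # Basic SCA update: X <- X + r1*sin(r2)*|r3*P - X|, or the cosine counterpart.
    if rng is None:
        rng = np.random.default_rng()
    r1 = a - a * t / t_max                               # shrinks the step over iterations
    r2 = rng.uniform(0.0, 2.0 * np.pi, positions.shape)
    r3 = rng.uniform(0.0, 2.0, positions.shape)
    r4 = rng.uniform(0.0, 1.0, positions.shape)
    step = np.abs(r3 * best - positions)
    return positions + np.where(r4 < 0.5, r1 * np.sin(r2) * step, r1 * np.cos(r2) * step)

pop = np.random.default_rng(1).uniform(-5, 5, (30, 10))  # 30 candidate solutions, 10 variables
pop = sca_step(pop, best=pop[0], t=1, t_max=100)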
Multi-stage genetic programming: A new strategy to nonlinear system modeling This paper presents a new multi-stage genetic programming (MSGP) strategy for modeling nonlinear systems. The proposed strategy is based on incorporating the individual effect of predictor variables and the interactions among them to provide more accurate simulations. According to the MSGP strategy, an efficient formulation for a problem comprises different terms. In the first stage of the MSGP-based analysis, the output variable is formulated in terms of an influencing variable. Thereafter, the error between the actual and the predicted value is formulated in terms of a new variable. Finally, the interaction term is derived by formulating the difference between the actual values and the values predicted by the individually developed terms. The capabilities of MSGP are illustrated by applying it to the formulation of different complex engineering problems. The problems analyzed herein include the following: (i) simulation of pH neutralization process, (ii) prediction of surface roughness in end milling, and (iii) classification of soil liquefaction conditions. The validity of the proposed strategy is confirmed by applying the derived models to the parts of the experimental results that were not included in the analyses. Further, the external validation of the models is verified using several statistical criteria recommended by other researchers. The MSGP-based solutions are capable of effectively simulating the nonlinear behavior of the investigated systems. The results of MSGP are found to be more accurate than those of standard GP and artificial neural network-based models.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb for designing the GA operators and selecting GA parameters; instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probabilistically. Because there are no crossover and mutation rates to be selected, the proposed improved GA can be more easily applied to a problem than conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional one and other methods.
Hybrid Whale Optimization Algorithm with simulated annealing for feature selection. • Four hybrid feature selection methods for the classification task are proposed. • Our hybrid method combines the Whale Optimization Algorithm with simulated annealing. • Eighteen UCI datasets were used in the experiments. • Our approaches achieve higher accuracy while using fewer features.
Solving the dynamic weapon target assignment problem by an improved artificial bee colony algorithm with heuristic factor initialization. • Put forward an improved artificial bee colony algorithm based on ranking selection and elite guidance. • Put forward 4 rule-based heuristic factors: Wc, Rc, TRc and TRcL. • The heuristic factors are used in population initialization to improve the quality of the initial solutions in DWTA solving. • The heuristic factor initialization method is combined with the improved ABC algorithm to solve the DWTA problem.
Self-adaptive mutation differential evolution algorithm based on particle swarm optimization Differential evolution (DE) is an effective evolutionary algorithm for global optimization, and widely applied to solve different optimization problems. However, the convergence speed of DE becomes slower in the later stage of the evolution and it is more likely to get stuck at a local optimum. Moreover, the performance of DE is sensitive to its mutation strategies and control parameters. Therefore, a self-adaptive mutation differential evolution algorithm based on particle swarm optimization (DEPSO) is proposed to improve the optimization performance of DE. DEPSO effectively combines an improved DE/rand/1 mutation strategy with stronger global exploration ability and a PSO-based mutation strategy with higher convergence ability. As a result, the population diversity can be maintained well in the early stage of the evolution, and a faster convergence speed can be obtained in the later stage. The performance of the proposed DEPSO is evaluated on 30-dimensional and 100-dimensional functions. The experimental results indicate that DEPSO can significantly improve the global convergence performance of the conventional DE and thus avoid premature convergence, and its average performance is better than those of the conventional DE, PSO and the compared algorithms. Moreover, DEPSO is applied to arrival flight scheduling, and the optimization results show that it can optimize the sequence and decrease the delay time.
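The classical DE/rand/1 mutation that DEPSO modifies can be written in a few lines; the sketch below is the textbook operator with an assumed scale factor F, not the authors' improved version.

import numpy as np

def de_rand_1(population, i, F=0.5, rng=None):
    # DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct and != i.
    if rng is None:
        rng = np.random.default_rng()
    candidates = [j for j in range(len(population)) if j != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return population[r1] + F * (population[r2] - population[r3])

pop = np.random.default_rng(1).random((10, 5))
print(de_rand_1(pop, i=0))  # mutant vector for the first individual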
An improved artificial bee colony algorithm for balancing local and global search behaviors in continuous optimization The artificial bee colony (ABC) algorithm is a population-based iterative optimization algorithm proposed for solving optimization problems with a continuously-structured solution space. Although ABC is equipped with powerful global search capability, this capability can cause poor intensification on found solutions and slow convergence. These issues originate from the search equations proposed for the employed and onlooker bees, which update only one decision variable at each trial. In order to address these drawbacks of the basic ABC algorithm, we introduce six search equations for the algorithm: three of them are used by employed bees and the rest are used by onlooker bees. Moreover, each onlooker agent can modify three dimensions or decision variables of a food source, which represents a possible solution for the optimization problem, at each attempt. The proposed variant of the ABC algorithm is applied to solve basic, CEC2005, CEC2014 and CEC2015 benchmark functions. The obtained results are compared with results of state-of-the-art variants of the basic ABC algorithm, the artificial algae algorithm, the particle swarm optimization algorithm and its variants, the gravitational search algorithm and its variants, and so on. Comparisons are conducted to measure the solution quality, robustness and convergence characteristics of the algorithms. The obtained results and comparisons experimentally validate the proposed ABC variant and its success in solving the continuous optimization problems dealt with in the study.
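The single-dimension neighbourhood search of the basic ABC, which the abstract above identifies as the source of slow convergence, looks roughly like this. It is the standard textbook equation, shown only to make the limitation concrete; the population sizes are illustrative.

import numpy as np

def abc_neighbour(foods, i, rng=None):
    # Basic ABC search equation: v_ij = x_ij + phi * (x_ij - x_kj),
    # modifying a single randomly chosen dimension j per trial.
    if rng is None:
        rng = np.random.default_rng()
    k = rng.choice([idx for idx in range(len(foods)) if idx != i])
    j = rng.integers(foods.shape[1])
    phi = rng.uniform(-1.0, 1.0)
    candidate = foods[i].copy()
    candidate[j] = foods[i, j] + phi * (foods[i, j] - foods[k, j])
    return candidate

foods = np.random.default_rng(2).random((20, 5))     # 20 food sources, 5 decision variables
print(abc_neighbour(foods, i=0))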
On the security of public key protocols Recently the use of public key encryption to provide secure network communication has received considerable attention. Such public key systems are usually effective against passive eavesdroppers, who merely tap the lines and try to decipher the message. It has been pointed out, however, that an improperly designed protocol could be vulnerable to an active saboteur, one who may impersonate another user or alter the message being transmitted. Several models are formulated in which the security of protocols can be discussed precisely. Algorithms and characterizations that can be used to determine protocol security in these models are given.
QoE-Driven Edge Caching in Vehicle Networks Based on Deep Reinforcement Learning The Internet of vehicles (IoV) is a large information interaction network that collects information on vehicles, roads and pedestrians. One of the important uses of vehicle networks is to meet the entertainment needs of driving users through communication between vehicles and roadside units (RSUs). Due to the limited storage space of RSUs, determining the content cached in each RSU is a key challenge. With the development of 5G and video editing technology, short video systems have become increasingly popular. Current widely used cache update methods, such as partial file precaching and content popularity- and user interest-based determination, are inefficient for such systems. To solve this problem, this paper proposes a QoE-driven edge caching method for the IoV based on deep reinforcement learning. First, a class-based user interest model is established. Compared with the traditional file popularity- and user interest distribution-based cache update methods, the proposed method is more suitable for systems with a large number of small files. Second, a quality of experience (QoE)-driven RSU cache model is established based on the proposed class-based user interest model. Third, a deep reinforcement learning method is designed to address the QoE-driven RSU cache update issue effectively. The experimental results verify the effectiveness of the proposed algorithm.
Image information and visual quality Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Image QA algorithms generally interpret image quality as fidelity or similarity with a "reference" or "perfect" image in some perceptual space. Such "full-reference" QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by signal fidelity measures. In this paper, we approach the image QA problem as an information fidelity problem. Specifically, we propose to quantify the loss of image information to the distortion process and explore the relationship between image information and visual quality. QA systems are invariably involved with judging the visual quality of "natural" images and videos that are meant for "human consumption." Researchers have developed sophisticated models to capture the statistics of such natural signals. Using these models, we previously presented an information fidelity criterion for image QA that related image quality with the amount of information shared between a reference and a distorted image. In this paper, we propose an image information measure that quantifies the information that is present in the reference image and how much of this reference information can be extracted from the distorted image. Combining these two quantities, we propose a visual information fidelity measure for image QA. We validate the performance of our algorithm with an extensive subjective study involving 779 images and show that our method outperforms recent state-of-the-art image QA algorithms by a sizeable margin in our simulations. The code and the data from the subjective study are available at the LIVE website.
Stabilization of switched continuous-time systems with all modes unstable via dwell time switching Stabilization of switched systems composed fully of unstable subsystems is one of the most challenging problems in the field of switched systems. In this brief paper, a sufficient condition ensuring the asymptotic stability of switched continuous-time systems with all modes unstable is proposed. The main idea is to exploit the stabilization property of switching behaviors to compensate the state divergence made by unstable modes. Then, by using a discretized Lyapunov function approach, a computable sufficient condition for switched linear systems is proposed in the framework of dwell time; it is shown that the time intervals between two successive switching instants are required to be confined by a pair of upper and lower bounds to guarantee the asymptotic stability. Based on derived results, an algorithm is proposed to compute the stability region of admissible dwell time. A numerical example is proposed to illustrate our approach.
Software-Defined Networking: A Comprehensive Survey The Internet has led to the creation of a digital society, where (almost) everything is connected and is accessible from anywhere. However, despite their widespread adoption, traditional IP networks are complex and very hard to manage. It is both difficult to configure the network according to predefined policies, and to reconfigure it to respond to faults, load, and changes. To make matters even more difficult, current networks are also vertically integrated: the control and data planes are bundled together. Software-defined networking (SDN) is an emerging paradigm that promises to change this state of affairs, by breaking vertical integration, separating the network's control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network. The separation of concerns, introduced between the definition of network policies, their implementation in switching hardware, and the forwarding of traffic, is key to the desired flexibility: by breaking the network control problem into tractable pieces, SDN makes it easier to create and introduce new abstractions in networking, simplifying network management and facilitating network evolution. In this paper, we present a comprehensive survey on SDN. We start by introducing the motivation for SDN, explain its main concepts and how it differs from traditional networking, its roots, and the standardization activities regarding this novel paradigm. Next, we present the key building blocks of an SDN infrastructure using a bottom-up, layered approach. We provide an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications. We also look at cross-layer problems such as debugging and troubleshooting. In an effort to anticipate the future evolution of this new paradigm, we discuss the main ongoing research efforts and challenges of SDN. In particular, we address the design of switches and control platforms - with a focus on aspects such as resiliency, scalability, performance, security, and dependability - as well as new opportunities for carrier transport networks and cloud providers. Last but not least, we analyze the position of SDN as a key enabler of a software-defined environment.
An ID-Based Linearly Homomorphic Signature Scheme and Its Application in Blockchain. Identity-based cryptosystems mean that public keys can be directly derived from user identifiers, such as telephone numbers, email addresses, and social insurance number, and so on. So they can simplify key management procedures of certificate-based public key infrastructures and can be used to realize authentication in blockchain. Linearly homomorphic signature schemes allow to perform linear computations on authenticated data. And the correctness of the computation can be publicly verified. Although a series of homomorphic signature schemes have been designed recently, there are few homomorphic signature schemes designed in identity-based cryptography. In this paper, we construct a new ID-based linear homomorphic signature scheme, which avoids the shortcomings of the use of public-key certificates. The scheme is proved secure against existential forgery on adaptively chosen message and ID attack under the random oracle model. The ID-based linearly homomorphic signature schemes can be applied in e-business and cloud computing. Finally, we show how to apply it to realize authentication in blockchain.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
Online Detection of Driver Fatigue Using Steering Wheel Angles for Real Driving Conditions. This paper presents a drowsiness on-line detection system for monitoring driver fatigue level under real driving conditions, based on the data of steering wheel angles (SWA) collected from sensors mounted on the steering lever. The proposed system firstly extracts approximate entropy (ApEn) features from fixed sliding windows on real-time steering wheel angles time series. After that, this system linearizes the ApEn features series through an adaptive piecewise linear fitting using a given deviation. Then, the detection system calculates the warping distance between the linear features series of the sample data. Finally, this system uses the warping distance to determine the drowsiness state of the driver according to a designed binary decision classifier. The experimental data were collected from 14.68 h driving under real road conditions, including two fatigue levels: "awake" and "drowsy". The results show that the proposed system is capable of working online with an average 78.01% accuracy, 29.35% false detections of the "awake" state, and 15.15% false detections of the "drowsy" state. The results also confirm that the proposed method based on SWA signal is valuable for applications in preventing traffic accidents caused by driver fatigue.
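A minimal sketch of the approximate-entropy (ApEn) feature extraction over sliding windows of steering-wheel angles described above; the embedding dimension, tolerance, window length, and step size are assumptions rather than the paper's settings.

```python
import numpy as np

def approximate_entropy(u, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D series u (Pincus' definition)."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    if r is None:
        r = 0.2 * u.std()          # common heuristic: tolerance = 0.2 * std

    def phi(m):
        # embed the series into overlapping template vectors of length m
        x = np.array([u[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of template vectors
        dist = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        # fraction of vectors within tolerance r (self-matches included)
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

def sliding_apen(swa, win=200, step=50, m=2):
    """ApEn feature series over fixed sliding windows of steering-wheel angles."""
    return np.array([approximate_entropy(swa[s:s + win], m=m)
                     for s in range(0, len(swa) - win + 1, step)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    swa = np.cumsum(rng.normal(0, 0.5, 2000))   # synthetic SWA-like signal
    print(sliding_apen(swa)[:5])
```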
Analysing user physiological responses for affective video summarisation. Video summarisation techniques aim to abstract the most significant content from a video stream. This is typically achieved by processing low-level image, audio and text features which are still quite disparate from the high-level semantics that end users identify with (the ‘semantic gap’). Physiological responses are potentially rich indicators of memorable or emotionally engaging video content for a given user. Consequently, we investigate whether they may serve as a suitable basis for a video summarisation technique by analysing a range of user physiological response measures, specifically electro-dermal response (EDR), respiration amplitude (RA), respiration rate (RR), blood volume pulse (BVP) and heart rate (HR), in response to a range of video content in a variety of genres including horror, comedy, drama, sci-fi and action. We present an analysis framework for processing the user responses to specific sub-segments within a video stream based on percent rank value normalisation. The application of the analysis framework reveals that users respond significantly to the most entertaining video sub-segments in a range of content domains. Specifically, horror content seems to elicit significant EDR, RA, RR and BVP responses, and comedy content elicits comparatively lower levels of EDR, but does seem to elicit significant RA, RR, BVP and HR responses. Drama content seems to elicit less significant physiological responses in general, and both sci-fi and action content seem to elicit significant EDR responses. We discuss the implications this may have for future affective video summarisation approaches.
On the roles of eye gaze and head dynamics in predicting driver's intent to change lanes Driver behavioral cues may present a rich source of information and feedback for future intelligent advanced driver-assistance systems (ADASs). With the design of a simple and robust ADAS in mind, we are interested in determining the most important driver cues for distinguishing driver intent. Eye gaze may provide a more accurate proxy than head movement for determining driver attention, whereas the measurement of head motion is less cumbersome and more reliable in harsh driving conditions. We use a lane-change intent-prediction system (McCall et al., 2007) to determine the relative usefulness of each cue for determining intent. Various combinations of input data are presented to a discriminative classifier, which is trained to output a prediction of probable lane-change maneuver at a particular point in the future. Quantitative results from a naturalistic driving study are presented and show that head motion, when combined with lane position and vehicle dynamics, is a reliable cue for lane-change intent prediction. The addition of eye gaze does not improve performance as much as simpler head dynamics cues. The advantage of head data over eye data is shown to be statistically significant (p
Detection of Driver Fatigue Caused by Sleep Deprivation This paper aims to provide reliable indications of driver drowsiness based on the characteristics of driver-vehicle interaction. A test bed was built under a simulated driving environment, and a total of 12 subjects participated in two experiment sessions requiring different levels of sleep (partial sleep-deprivation versus no sleep-deprivation) before the experiment. The performance of the subjects was analyzed in a series of stimulus-response and routine driving tasks, which revealed the performance differences of drivers under different sleep-deprivation levels. The experiments further demonstrated that sleep deprivation had greater effect on rule-based than on skill-based cognitive functions: when drivers were sleep-deprived, their performance of responding to unexpected disturbances degraded, while they were robust enough to continue the routine driving tasks such as lane tracking, vehicle following, and lane changing. In addition, we presented both qualitative and quantitative guidelines for designing drowsy-driver detection systems in a probabilistic framework based on the paradigm of Bayesian networks. Temporal aspects of drowsiness and individual differences of subjects were addressed in the framework.
Online Prediction of Driver Distraction Based on Brain Activity Patterns This paper presents a new computational framework for early detection of driver distractions (map viewing) using brain activity measured by electroencephalographic (EEG) signals. Compared with most studies in the literature, which are mainly focused on the classification of distracted and nondistracted periods, this study proposes a new framework to prospectively predict the start and end of a distraction period, defined by map viewing. The proposed prediction algorithm was tested on a data set of continuous EEG signals recorded from 24 subjects. During the EEG recordings, the subjects were asked to drive from an initial position to a destination using a city map in a simulated driving environment. The overall accuracy values for the prediction of the start and the end of map viewing were 81% and 70%, respectively. The experimental results demonstrated that the proposed algorithm can predict the start and end of map viewing with relatively high accuracy and can be generalized to individual subjects. The outcome of this study has a high potential to improve the design of future intelligent navigation systems. Prediction of the start of map viewing can be used to provide route information based on a driver's needs and consequently avoid map-viewing activities. Prediction of the end of map viewing can be used to provide warnings for potential long map-viewing durations. Further development of the proposed framework and its applications in driver-distraction predictions are also discussed.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
A CRNN module for hand pose estimation. •The input is no longer a single frame, but a sequence of several adjacent frames.•A CRNN module is proposed, which is basically the same as the standard RNN, except that it uses convolutional connection.•When the difference in the feature image of a certain layer is large, it is better to add CRNN / RNN after this layer.•Our method has the lowest error of output compared to the current state-of-the-art methods.
Deep convolutional neural network-based Bernoulli heatmap for head pose estimation Head pose estimation is a crucial problem for many tasks, such as driver attention, fatigue detection, and human behaviour analysis. It is well known that neural networks are better at handling classification problems than regression problems. It is an extremely nonlinear process to let the network output the angle value directly for optimization learning, and the weight constraint of the loss function will be relatively weak. This paper proposes a novel Bernoulli heatmap for head pose estimation from a single RGB image. Our method can achieve the positioning of the head area while estimating the angles of the head. The Bernoulli heatmap makes it possible to construct fully convolutional neural networks without fully connected layers and provides a new idea for the output form of head pose estimation. A deep convolutional neural network (CNN) structure with multiscale representations is adopted to maintain high-resolution information and low-resolution information in parallel. This kind of structure can maintain rich, high-resolution representations. In addition, channelwise fusion is adopted to make the fusion weights learnable instead of simple addition with equal weights. As a result, the estimation is spatially more precise and potentially more accurate. The effectiveness of the proposed method is empirically demonstrated by comparing it with other state-of-the-art methods on public datasets.
Reinforcement learning based data fusion method for multi-sensors In order to improve detection system robustness and reliability, multi-sensor fusion is used in modern air combat. In this paper, a data fusion method based on reinforcement learning is developed for multiple sensors. Initially, cubic B-spline interpolation is used to solve the time alignment problems of multisource data. Then, the reinforcement learning based data fusion (RLBDF) method is proposed to obtain the fusion results. When prior knowledge of the target is available, fusion accuracy is reinforced using the error between the fused value and the actual value. If prior knowledge cannot be obtained, the Fisher information is used as the reward instead. Simulation results verify that the developed method is feasible and effective for multi-sensor data fusion in air combat.
Mobile Edge Computing Enabled 5G Health Monitoring for Internet of Medical Things: A Decentralized Game Theoretic Approach The prompt evolution of Internet of Medical Things (IoMT) promotes pervasive in-home health monitoring networks. However, excessive requirements of patients result in insufficient spectrum resources and communication overload. Mobile Edge Computing (MEC) enabled 5G health monitoring is conceived as a favorable paradigm to tackle such an obstacle. In this paper, we construct a cost-efficient in-home health monitoring system for IoMT by dividing it into two sub-networks, i.e., intra-Wireless Body Area Networks (WBANs) and beyond-WBANs. Highlighting the characteristics of IoMT, the cost of patients depends on medical criticality, Age of Information (AoI) and energy consumption. For intra-WBANs, a cooperative game is formulated to allocate the wireless channel resources. While for beyond-WBANs, considering the individual rationality and potential selfishness, a decentralized non-cooperative game is proposed to minimize the system-wide cost in IoMT. We prove that the proposed algorithm can reach a Nash equilibrium. In addition, the upper bound of the algorithm time complexity and the number of patients benefiting from MEC is theoretically derived. Performance evaluations demonstrate the effectiveness of our proposed algorithm with respect to the system-wide cost and the number of patients benefiting from MEC.
Artificial fish swarm algorithm: a survey of the state-of-the-art, hybridization, combinatorial and indicative applications AFSA (artificial fish-swarm algorithm) is one of the best methods of optimization among the swarm intelligence algorithms. This algorithm is inspired by the collective movement of fish and their various social behaviors. Based on a series of instinctive behaviors, the fish always try to maintain their colonies and accordingly demonstrate intelligent behaviors. Searching for food, immigration and dealing with dangers all happen in a social form, and interactions between all fish in a group result in an intelligent social behavior. This algorithm has many advantages including high convergence speed, flexibility, fault tolerance and high accuracy. This paper is a review of the AFSA algorithm and describes the evolution of this algorithm along with all improvements, its combination with various methods as well as its applications. There are many optimization methods which have an affinity with this method, and combining them can improve its performance. Its disadvantages include high time complexity, lack of balance between global and local search, in addition to lack of benefiting from the experiences of group members for the next movements.
Short-Term Traffic Flow Forecasting: An Experimental Comparison of Time-Series Analysis and Supervised Learning The literature on short-term traffic flow forecasting has undergone great development recently. Many works, describing a wide variety of different approaches, which very often share similar features and ideas, have been published. However, publications presenting new prediction algorithms usually employ different settings, data sets, and performance measurements, making it difficult to infer a clear picture of the advantages and limitations of each model. The aim of this paper is twofold. First, we review existing approaches to short-term traffic flow forecasting methods under the common view of probabilistic graphical models, presenting an extensive experimental comparison, which proposes a common baseline for their performance analysis and provides the infrastructure to operate on a publicly available data set. Second, we present two new support vector regression models, which are specifically devised to benefit from typical traffic flow seasonality and are shown to represent an interesting compromise between prediction accuracy and computational efficiency. The SARIMA model coupled with a Kalman filter is the most accurate model; however, the proposed seasonal support vector regressor turns out to be highly competitive when performing forecasts during the most congested periods.
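A minimal sketch of a seasonality-aware support vector regressor of the kind discussed above, using scikit-learn; the lag structure, the daily period of 96 fifteen-minute intervals, and the synthetic flow series are assumptions, not the paper's experimental setup.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def make_features(flow, lags=4, period=96):
    """Recent lagged observations plus the observation one seasonal period back."""
    X, y = [], []
    for t in range(max(lags, period), len(flow)):
        X.append(list(flow[t - lags:t]) + [flow[t - period]])
        y.append(flow[t])
    return np.array(X), np.array(y)

rng = np.random.default_rng(1)
t = np.arange(96 * 30)                             # 30 synthetic days of 15-min counts
flow = 300 + 200 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 20, t.size)

X, y = make_features(flow)
split = len(X) - 96                                # hold out the last day
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("MAE on held-out day:", np.mean(np.abs(pred - y[split:])))
```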
TSCA: A Temporal-Spatial Real-Time Charging Scheduling Algorithm for On-Demand Architecture in Wireless Rechargeable Sensor Networks. The collaborative charging issue in Wireless Rechargeable Sensor Networks (WRSNs) is a popular research problem. With the help of wireless power transfer technology, electrical energy can be transferred from wireless charging vehicles (WCVs) to sensors, providing a new paradigm to prolong network lifetime. Existing techniques on collaborative charging usually take the periodical and deterministic approach, but neglect influences of non-deterministic factors such as topological changes and node failures, making them unsuitable for large-scale WRSNs. In this paper, we develop a temporal-spatial charging scheduling algorithm, namely TSCA, for the on-demand charging architecture. We aim to minimize the number of dead nodes while maximizing energy efficiency to prolong network lifetime. First, after gathering charging requests, a WCV will compute a feasible movement solution. A basic path planning algorithm is then introduced to adjust the charging order for better efficiency. Furthermore, optimizations are made in a global level. Then, a node deletion algorithm is developed to remove low efficient charging nodes. Lastly, a node insertion algorithm is executed to avoid the death of abandoned nodes. Extensive simulations show that, compared with state-of-the-art charging scheduling algorithms, our scheme can achieve promising performance in charging throughput, charging efficiency, and other performance metrics.
A novel adaptive dynamic programming based on tracking error for nonlinear discrete-time systems In this paper, to eliminate the tracking error by using adaptive dynamic programming (ADP) algorithms, a novel formulation of the value function is presented for the optimal tracking problem (TP) of nonlinear discrete-time systems. Unlike existing ADP methods, this formulation introduces the control input into the tracking error and ignores the quadratic form of the control input directly, which makes the boundedness and convergence of the value function independent of the discount factor. Based on the proposed value function, the optimal control policy can be deduced without considering the reference control input. Value iteration (VI) and policy iteration (PI) methods are applied to prove the optimality of the obtained control policy and to derive the monotonicity property and convergence of the iterative value function. Simulation examples realized with neural networks and the actor–critic structure are provided to verify the effectiveness of the proposed ADP algorithm.
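A toy tabular value-iteration sketch for a scalar tracking-error system with a stage cost that omits the control term, in the spirit of the formulation above; the paper uses neural networks and an actor-critic structure, so this only illustrates the VI recursion on an assumed discretized system.

```python
import numpy as np

# toy scalar tracking-error dynamics e_{k+1} = a*e_k + b*u_k with stage cost e^2
a, b = 0.9, 0.5
e_grid = np.linspace(-2.0, 2.0, 81)          # discretized tracking error
u_grid = np.linspace(-1.0, 1.0, 41)          # discretized control

def nearest_idx(vals, grid):
    """Index of the nearest grid point for every value in vals."""
    return np.abs(vals[..., None] - grid).argmin(axis=-1)

V = np.zeros_like(e_grid)                     # V_0 = 0 initial value function
for _ in range(200):
    e_next = np.clip(a * e_grid[:, None] + b * u_grid[None, :], e_grid[0], e_grid[-1])
    Q = e_grid[:, None] ** 2 + V[nearest_idx(e_next, e_grid)]   # Bellman backup
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-9:      # stop once the iteration has converged
        break
    V = V_new

policy = u_grid[Q.argmin(axis=1)]             # greedy control at each error level
i = np.argmin(np.abs(e_grid - 1.0))
print("V(e=1.0) =", round(float(V[i]), 3), " u*(e=1.0) =", round(float(policy[i]), 3))
```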
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
The Challenge Of Using The W Band In Satellite Communication This contribution outlines the scenario of the expected atmospheric impairments, which affect a satellite radio link operating in the W band, as derived by the present theoretical and experimental knowledge. The paper discusses the contributions to signal fade due to gases, clouds, scintillation and rain (with emphasis also on the impact of the hydrometeor size distribution), as well as to the depolarization of the electromagnetic waves. The main objective is to assess the constraints to face in the design of a satellite telecommunication system operating in the W band. Copyright (C) 2013 John Wiley & Sons, Ltd.
Ka-band scintillations: measurements and model predictions New propagation data from a 30/20-GHz propagation experiment at several US sites, including Fairbanks, AK, and Norman, OK, are presented to examine existing models for scintillations. Beacon measurements were collected at one sample per second continuously and at 20 samples per second for selected intervals. The widely separated measurement frequencies and the wide range of measurement elevation a...
Modeling Ka-band scintillation as a fractal process We propose a model that describes the signal fading process due to scintillation in the presence of rain. We analyzed a data set of uplink (30 GHz) and downlink (20 GHz) attenuation values averaged over 1 s intervals. The data are samples relative to ten significant events, for a total of 180 000 s recorded at the Spino d'Adda (North of Italy) station using the Olympus satellite. Our analysis is based on the fact that the plot of attenuation versus time recalls the behavior of a self-similar process. We then make various considerations, and propose, a fractional Brownian motion model for the scintillation process. We describe the model in detail, with pictures showing the apparent self-similarity of the measured data. We then show that the Hurst parameter of the process is a simple function of the rain fade. We describe a method for producing random data that interpolate the measured samples, while preserving some of their interesting statistical properties. This method can be used for simulating fade countermeasure systems. As a possible application of the model, we show how to optimize fade measurement times for fade countermeasure systems
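A small rescaled-range (R/S) sketch for estimating a Hurst parameter from an increment series, the kind of self-similarity measure that the fractional-Brownian-motion model above relies on; the fade-dependent Hurst relation from the paper is not reproduced, and the chunk sizes are assumptions.

```python
import numpy as np

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent of a stationary 1-D series via rescaled-range analysis."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviation from the mean
            r = dev.max() - dev.min()               # range of the cumulative deviation
            s = chunk.std()
            if s > 0:
                rs.append(r / s)
        if rs:
            sizes.append(size)
            rs_vals.append(np.mean(rs))
        size *= 2
    # slope of log(R/S) against log(chunk size) estimates H
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

rng = np.random.default_rng(0)
print(hurst_rs(rng.normal(size=4096)))   # white-noise increments: estimate near 0.5
```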
On the Mitigation of Ionospheric Scintillation in Advanced GNSS Receivers. Ionospheric scintillation is one of the major threats and most challenging propagation scenarios affecting Global Navigation Satellite Systems (GNSS) and related applications. The fact that this phenomenon causes severe degradations only in equatorial and high latitude regions has led to very few contributions dealing with the fundamental scintillation mitigation problem, being of paramount import...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
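A compact sentence-level BLEU sketch with clipped (modified) n-gram precision and the brevity penalty; smoothing and corpus-level aggregation are simplified relative to the original formulation.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU with uniform weights and brevity penalty (no smoothing)."""
    cand = candidate.split()
    refs = [r.split() for r in references]
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = ngrams(cand, n)
        if not cand_counts:
            return 0.0
        # clip each candidate n-gram count by its maximum count in any single reference
        max_ref = Counter()
        for ref in refs:
            for gram, cnt in ngrams(ref, n).items():
                max_ref[gram] = max(max_ref[gram], cnt)
        clipped = sum(min(cnt, max_ref[gram]) for gram, cnt in cand_counts.items())
        if clipped == 0:
            return 0.0
        log_precisions.append(math.log(clipped / sum(cand_counts.values())))
    # brevity penalty against the reference length closest to the candidate length
    ref_len = min((len(r) for r in refs), key=lambda rl: (abs(rl - len(cand)), rl))
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

print(bleu("the cat sat on the mat",
           ["the cat is sitting on the mat", "a cat sat on the mat"]))
```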
On the security of public key protocols Recently the use of public key encryption to provide secure network communication has received considerable attention. Such public key systems are usually effective against passive eavesdroppers, who merely tap the lines and try to decipher the message. It has been pointed out, however, that an improperly designed protocol could be vulnerable to an active saboteur, one who may impersonate another user or alter the message being transmitted. Several models are formulated in which the security of protocols can be discussed precisely. Algorithms and characterizations that can be used to determine protocol security in these models are given.
A Tutorial On Visual Servo Control This article provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed in detail. Since any visual servo system must be capable of tracking image features in a sequence of images, we also include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.
Dynamic Computation Offloading for Mobile-Edge Computing with Energy Harvesting Devices. Mobile-edge computing (MEC) is an emerging paradigm to meet the ever-increasing computation demands from mobile applications. By offloading the computationally intensive workloads to the MEC server, the quality of computation experience, e.g., the execution latency, could be greatly improved. Nevertheless, as the on-device battery capacities are limited, computation would be interrupted when the battery energy runs out. To provide satisfactory computation performance as well as achieving green computing, it is of significant importance to seek renewable energy sources to power mobile devices via energy harvesting (EH) technologies. In this paper, we will investigate a green MEC system with EH devices and develop an effective computation offloading strategy. The execution cost, which addresses both the execution latency and task failure, is adopted as the performance metric. A low-complexity online algorithm is proposed, namely, the Lyapunov optimization-based dynamic computation offloading algorithm, which jointly decides the offloading decision, the CPU-cycle frequencies for mobile execution, and the transmit power for computation offloading. A unique advantage of this algorithm is that the decisions depend only on the current system state without requiring distribution information of the computation task request, wireless channel, and EH processes. The implementation of the algorithm only requires to solve a deterministic problem in each time slot, for which the optimal solution can be obtained either in closed form or by bisection search. Moreover, the proposed algorithm is shown to be asymptotically optimal via rigorous analysis. Sample simulation results shall be presented to corroborate the theoretical analysis as well as validate the effectiveness of the proposed algorithm.
Parameter tuning for configuring and analyzing evolutionary algorithms In this paper we present a conceptual framework for parameter tuning, provide a survey of tuning methods, and discuss related methodological issues. The framework is based on a three-tier hierarchy of a problem, an evolutionary algorithm (EA), and a tuner. Furthermore, we distinguish problem instances, parameters, and EA performance measures as major factors, and discuss how tuning can be directed to algorithm performance and/or robustness. For the survey part we establish different taxonomies to categorize tuning methods and review existing work. Finally, we elaborate on how tuning can improve methodology by facilitating well-founded experimental comparisons and algorithm analysis.
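A minimal sketch of the problem/EA/tuner three-tier hierarchy described above, with plain random search standing in as the tuner; the toy EA, the sphere problem, and the parameter ranges are illustrative assumptions.

```python
import random

def sphere(x):                       # problem layer: minimise the sum of squares
    return sum(v * v for v in x)

def simple_ea(problem, dim, mutation_sigma, pop_size, generations=50, seed=0):
    """A tiny truncation-selection EA; returns the best fitness it found."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    best = min(map(problem, pop))
    for _ in range(generations):
        parents = sorted(pop, key=problem)[: max(1, pop_size // 2)]
        pop = [[g + rng.gauss(0, mutation_sigma) for g in rng.choice(parents)]
               for _ in range(pop_size)]
        best = min(best, min(map(problem, pop)))
    return best

def random_search_tuner(budget=30, seed=1):
    """Tuner layer: sample parameter vectors, judge them by mean EA performance."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(budget):
        params = {"mutation_sigma": rng.uniform(0.01, 1.0),
                  "pop_size": rng.randint(4, 40)}
        # average over a few runs to reduce noise in the utility estimate
        score = sum(simple_ea(sphere, 5, seed=s, **params) for s in range(3)) / 3
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

print(random_search_tuner())
```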
Cyber warfare: steganography vs. steganalysis For every clever method and tool being developed to hide information in multimedia data, an equal number of clever methods and tools are being developed to detect and reveal its secrets.
Efficient and reliable low-power backscatter networks There is a long-standing vision of embedding backscatter nodes like RFIDs into everyday objects to build ultra-low power ubiquitous networks. A major problem that has challenged this vision is that backscatter communication is neither reliable nor efficient. Backscatter nodes cannot sense each other, and hence tend to suffer from colliding transmissions. Further, they are ineffective at adapting the bit rate to channel conditions, and thus miss opportunities to increase throughput, or transmit above capacity causing errors. This paper introduces a new approach to backscatter communication. The key idea is to treat all nodes as if they were a single virtual sender. One can then view collisions as a code across the bits transmitted by the nodes. By ensuring only a few nodes collide at any time, we make collisions act as a sparse code and decode them using a new customized compressive sensing algorithm. Further, we can make these collisions act as a rateless code to automatically adapt the bit rate to channel quality --i.e., nodes can keep colliding until the base station has collected enough collisions to decode. Results from a network of backscatter nodes communicating with a USRP backscatter base station demonstrate that the new design produces a 3.5× throughput gain, and due to its rateless code, reduces message loss rate in challenging scenarios from 50% to zero.
Achievable Rates of Full-Duplex MIMO Radios in Fast Fading Channels With Imperfect Channel Estimation We study the theoretical performance of two full-duplex multiple-input multiple-output (MIMO) radio systems: a full-duplex bi-directional communication system and a full-duplex relay system. We focus on the effect of a (digitally manageable) residual self-interference due to imperfect channel estimation (with independent and identically distributed (i.i.d.) Gaussian channel estimation error) and transmitter noise. We assume that the instantaneous channel state information (CSI) is not available at the transmitters. To maximize the system ergodic mutual information, which is a nonconvex function of power allocation vectors at the nodes, a gradient projection algorithm is developed to optimize the power allocation vectors. This algorithm exploits both spatial and temporal freedoms of the source covariance matrices of the MIMO links between transmitters and receivers to achieve higher sum ergodic mutual information. It is observed through simulations that the full-duplex mode is optimal when the nominal self-interference is low, and the half-duplex mode is optimal when the nominal self-interference is high. In addition to an exact closed-form ergodic mutual information expression, we introduce a much simpler asymptotic closed-form ergodic mutual information expression, which in turn simplifies the computation of the power allocation vectors.
Quaternion polar harmonic Fourier moments for color images. •Quaternion polar harmonic Fourier moments (QPHFM) are proposed.•Complex Chebyshev-Fourier moments (CHFM) are extended to quaternion QCHFM.•Comparison experiments between QPHFM and QZM, QPZM, QOFMM, QCHFM and QRHFM are conducted.•QPHFM performs superbly in image reconstruction and invariant object recognition.•The importance of phase information of QPHFM in image reconstruction is discussed.
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal to develop interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which rise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in specific, in this paper, we enfold some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidate it based on our comprehensive empirical experience with research in kindergarten settings, both laboratory and real-world contexts, and (iii) provide a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
1.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
0
0
Evolutionary Multitasking via Explicit Autoencoding. Evolutionary multitasking (EMT) is an emerging research topic in the field of evolutionary computation. In contrast to the traditional single-task evolutionary search, EMT conducts evolutionary search on multiple tasks simultaneously. It aims to improve convergence characteristics across multiple optimization problems at once by seamlessly transferring knowledge among them. Due to the efficacy of EMT, it has attracted a lot of research attention and several EMT algorithms have been proposed in the literature. However, existing EMT algorithms are usually based on a common mode of knowledge transfer in the form of implicit genetic transfer through chromosomal crossover. This mode cannot make use of multiple biases embedded in different evolutionary search operators, which could give better search performance when properly harnessed. Keeping this in mind, this paper proposes an EMT algorithm with explicit genetic transfer across tasks, namely EMT via autoencoding, which allows the incorporation of multiple search mechanisms with different biases in the EMT paradigm. To confirm the efficacy of the proposed EMT algorithm with explicit autoencoding, comprehensive empirical studies have been conducted on both the single- and multi-objective multitask optimization problems.
Multiobjective Optimization Models for Locating Vehicle Inspection Stations Subject to Stochastic Demand, Varying Velocity and Regional Constraints Deciding an optimal location of a transportation facility and automotive service enterprise is an interesting and important issue in the area of facility location allocation (FLA). In practice, some factors, i.e., customer demands, allocations, and locations of customers and facilities, are changing, and the problem is thus characterized by uncertainty. To account for this uncertainty, some researchers have addressed the stochastic time and cost issues of FLA. A new FLA research issue arises when decision makers want to minimize the transportation time of customers and their transportation cost while ensuring that customers arrive at their desired destination within some specific time and cost. By taking the vehicle inspection station as a typical automotive service enterprise example, this paper presents a novel stochastic multiobjective optimization to address it. This work builds two practical stochastic multiobjective programs subject to stochastic demand, varying velocity, and regional constraints. A hybrid intelligent algorithm integrating stochastic simulation and a multiobjective teaching-learning-based optimization algorithm is proposed to solve the proposed programs. This approach is applied to a real-world location problem of a vehicle inspection station in Fushun, China. The results show that this approach is able to produce satisfactory Pareto solutions for an actual vehicle inspection station location problem.
Intrinsic dimension estimation: Advances and open problems. •The paper reviews state-of-the-art of the methods of Intrinsic Dimension (ID) Estimation.•The paper defines the properties that an ideal ID estimator should have.•The paper reviews, under the above mentioned framework, the major ID estimation methods underlining their advances and the open problems.
Alignment-Supervised Bidimensional Attention-Based Recursive Autoencoders for Bilingual Phrase Representation. Exploiting semantic interactions between the source and target linguistic items at different levels of granularity is crucial for generating compact vector representations for bilingual phrases. To achieve this, we propose alignment-supervised bidimensional attention-based recursive autoencoders (ABattRAE) in this paper. ABattRAE first individually employs two recursive autoencoders to recover hierarchical tree structures of bilingual phrase, and treats the subphrase covered by each node on the tree as a linguistic item. Unlike previous methods, ABattRAE introduces a bidimensional attention network to measure the semantic matching degree between linguistic items of different languages, which enables our model to integrate information from all nodes by dynamically assigning varying weights to their corresponding embeddings. To ensure the accuracy of the generated attention weights in the attention network, ABattRAE incorporates word alignments as supervision signals to guide the learning procedure. Using the general stochastic gradient descent algorithm, we train our model in an end-to-end fashion, where the semantic similarity of translation equivalents is maximized while the semantic similarity of nontranslation pairs is minimized. Finally, we incorporate a semantic feature based on the learned bilingual phrase representations into a machine translation system for better translation selection. Experimental results on NIST Chinese–English and WMT English–German test sets show that our model achieves substantial improvements of up to 2.86 and 1.09 BLEU points over the baseline, respectively. Extensive in-depth analyses demonstrate the superiority of our model in learning bilingual phrase embeddings.
Surrogate-Assisted Evolutionary Framework for Data-Driven Dynamic Optimization Recently, dynamic optimization has received much attention from the swarm and evolutionary computation community. However, few studies have investigated data-driven evolutionary dynamic optimization, and most algorithms for evolutionary dynamic optimization are based on analytical mathematical functions. In this paper, we investigate data-driven evolutionary dynamic optimization. First, we develop a surrogate-assisted evolutionary framework for solving data-driven dynamic optimization problems (DD-DOPs). Second, we employ a benchmark based on the typical dynamic optimization problems set in order to verify the performance of the proposed framework. The experimental results demonstrate that the proposed framework is effective for solving DD-DOPs.
Biobjective Task Scheduling for Distributed Green Data Centers The industry of data centers is the fifth largest energy consumer in the world. Distributed green data centers (DGDCs) consume 300 billion kWh per year to provide different types of heterogeneous services to global users. Users around the world bring revenue to DGDC providers according to actual quality of service (QoS) of their tasks. Their tasks are delivered to DGDCs through multiple Internet service providers (ISPs) with different bandwidth capacities and unit bandwidth price. In addition, prices of power grid, wind, and solar energy in different GDCs vary with their geographical locations. Therefore, it is highly challenging to schedule tasks among DGDCs in a high-profit and high-QoS way. This work designs a multiobjective optimization method for DGDCs to maximize the profit of DGDC providers and minimize the average task loss possibility of all applications by jointly determining the split of tasks among multiple ISPs and task service rates of each GDC. A problem is formulated and solved with a simulated-annealing-based biobjective differential evolution (SBDE) algorithm to obtain an approximate Pareto-optimal set. The method of minimum Manhattan distance is adopted to select a knee solution that specifies the Pareto-optimal task service rates and task split among ISPs for DGDCs in each time slot. Real-life data-based experiments demonstrate that the proposed method achieves lower task loss of all applications and larger profit than several existing scheduling algorithms. Note to Practitioners-This work aims to maximize the profit and minimize the task loss for DGDCs powered by renewable energy and smart grid by jointly determining the split of tasks among multiple ISPs. Existing task scheduling algorithms fail to jointly consider and optimize the profit of DGDC providers and QoS of tasks. Therefore, they fail to intelligently schedule tasks of heterogeneous applications and allocate infrastructure resources within their response time bounds. In this work, a new method that tackles drawbacks of existing algorithms is proposed. It is achieved by adopting the proposed SBDE algorithm that solves a multiobjective optimization problem. Simulation experiments demonstrate that compared with three typical task scheduling approaches, it increases profit and decreases task loss. It can be readily and easily integrated and implemented in real-life industrial DGDCs. The future work needs to investigate the real-time green energy prediction with historical data and further combine prediction and task scheduling together to achieve greener and even net-zero-energy data centers.
Scheduling Dual-Objective Stochastic Hybrid Flow Shop With Deteriorating Jobs via Bi-Population Evolutionary Algorithm Hybrid flow shop scheduling problems have gained an increasing attention in recent years because of its wide applications in real-world production systems. Most of the prior studies assume that the processing time of jobs is deterministic and constant. In practice, jobs' processing time is usually difficult to be exactly known in advance and can be influenced by many factors, e.g., machines' abrasion and jobs' feature, thereby leading to their uncertain and variable processing time. In this paper, a dual-objective stochastic hybrid flow shop deteriorating scheduling problem is presented with the goal to minimize makespan and total tardiness. In the formulated problem, the normal processing time of jobs follows a known stochastic distribution, and their actual processing time is a linear function of their start time. In order to solve it effectively, this paper develops a hybrid multiobjective optimization algorithm that maintains two populations executing the global search in the whole solution space and the local search in promising regions, respectively. An information sharing mechanism and resource allocating method are designed to enhance its exploration and exploitation ability. The simulation experiments are carried out on a set of instances, and several classical algorithms are chosen as its peers for comparison. The results demonstrate that the proposed algorithm has a great advantage in dealing with the investigated problem.
Global optimum-based search differential evolution In this paper, a global optimum-based search strategy is proposed to alleviate the situation in which differential evolution (DE) gets stuck in stagnation, especially on complex problems. It aims to reconstruct the balance between exploration and exploitation, and improve the search efficiency and solution quality of DE. The proposed method is activated by recording the number of recently consecutive unsuccessful global optimum updates. It takes feedback from the global optimum, which makes the search strategy not only refine the current solution quality, but also have a chance to find other promising regions with better individuals. This search strategy is incorporated with various DE mutation strategies and DE variations. The experimental results indicate that the proposed method has remarkable performance in enhancing search efficiency and improving solution quality.
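A DE/rand/1/bin sketch with a counter of consecutive iterations without a global-best improvement that triggers a perturbation around the current best; the stall threshold and the perturbation form are assumptions, not the paper's exact operator.

```python
import numpy as np

def de_with_stagnation_reset(f, dim=10, pop_size=30, max_iter=500,
                             F=0.5, CR=0.9, stall_limit=20, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (pop_size, dim))
    fit = np.apply_along_axis(f, 1, pop)
    best_idx = fit.argmin()
    best, best_fit = pop[best_idx].copy(), fit[best_idx]
    stall = 0
    for _ in range(max_iter):
        improved = False
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([k for k in range(pop_size) if k != i], 3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])          # DE/rand/1 mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                     # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            tf = f(trial)
            if tf < fit[i]:                                     # greedy selection
                pop[i], fit[i] = trial, tf
                if tf < best_fit:
                    best, best_fit, improved = trial.copy(), tf, True
        # feedback from the global best: perturb around it after repeated stagnation
        stall = 0 if improved else stall + 1
        if stall >= stall_limit:
            worst = fit.argmax()
            pop[worst] = best + rng.normal(0, 0.1, dim)
            fit[worst] = f(pop[worst])
            stall = 0
    return best, best_fit

print(de_with_stagnation_reset(lambda x: np.sum(x ** 2))[1])
```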
Constrained Kalman filtering for indoor localization of transport vehicles using floor-installed HF RFID transponders Localization of transport vehicles is an important issue for many intralogistics applications. The paper presents an inexpensive solution for indoor localization of vehicles. Global localization is realized by detection of RFID transponders, which are integrated in the floor. The paper presents a novel algorithm for fusing RFID readings with odometry using Constraint Kalman filtering. The paper presents experimental results with a Mecanum based omnidirectional vehicle on a NaviFloor® installation, which includes passive HF RFID transponders. The experiments show that the proposed Constraint Kalman filter provides a similar localization accuracy compared to a Particle filter but with much lower computational expense.
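A simplified 2-D constant-velocity Kalman filter sketch that fuses a motion-model prediction with absolute position fixes (as an RFID transponder detection would provide), followed by a linear equality-constraint projection step; the noise values and the corridor constraint are assumptions, and the odometry input is folded into the process model rather than modeled explicitly.

```python
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)      # RFID gives absolute (x, y)
Q = np.diag([1e-3, 1e-3, 1e-2, 1e-2])                  # process (odometry drift) noise
R = np.diag([0.02, 0.02])                              # transponder position noise

def kf_step(x, P, z=None):
    # predict with the motion model
    x, P = F @ x, F @ P @ F.T + Q
    if z is not None:                                   # a transponder was detected
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x, P

def project_to_constraint(x, D, d):
    """Project the state onto the linear equality constraint D x = d."""
    Dt = D.T
    return x - Dt @ np.linalg.inv(D @ Dt) @ (D @ x - d)

x, P = np.zeros(4), np.eye(4)
# example constraint: the vehicle moves along the corridor y = 1.5
D, d = np.array([[0.0, 1.0, 0.0, 0.0]]), np.array([1.5])
for k in range(50):
    z = np.array([0.1 * k, 1.5]) + np.random.normal(0, 0.1, 2) if k % 10 == 0 else None
    x, P = kf_step(x, P, z)
    x = project_to_constraint(x, D, d)
print(np.round(x, 2))
```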
Toward a theory of intrinsically motivating instruction First, a number of previous theories of intrinsic motivation are reviewed. Then, several studies of highly motivating computer games are described. These studies focus on what makes the games fun, not on what makes them educational. Finally, with this background, a rudimentary theory of intrinsically motivating instruction is developed, based on three categories: challenge, fantasy, and curiosity. Challenge is hypothesized to depend on goals with uncertain outcomes. Several ways of making outcomes uncertain are discussed, including variable difficulty level, multiple level goals, hidden information, and randomness. Fantasy is claimed to have both cognitive and emotional advantages in designing instructional environments. A distinction is made between extrinsic fantasies that depend only weakly on the skill used in a game, and intrinsic fantasies that are intimately related to the use of the skill. Curiosity is separated into sensory and cognitive components, and it is suggested that cognitive curiosity can be aroused by making learners believe their knowledge structures are incomplete, inconsistent, or unparsimonious.
Constrained Interaction and Coordination in Proximity-Limited Multiagent Systems In this paper, we consider the problem of controlling the interactions of a group of mobile agents, subject to a set of topological constraints. Assuming proximity-limited interagent communication, we leverage mobility, unlike prior work, to enable adjacent agents to interact discriminatively, i.e., to actively retain or reject communication links on the basis of constraint satisfaction. Specifically, we propose a distributed scheme that consists of hybrid controllers with discrete switching for link discrimination, coupled with attractive and repulsive potentials fields for mobility control, where constraint violation predicates form the basis for discernment. We analyze the application of constrained interaction to two canonical coordination objectives, i.e., aggregation and dispersion, with maximum and minimum node degree constraints, respectively. For each task, we propose predicates and control potentials, and examine the dynamical properties of the resulting hybrid systems. Simulation results demonstrate the correctness of our proposed methods and the ability of our framework to generate topology-aware coordinated behavior.
Stochastic Power Adaptation with Multiagent Reinforcement Learning for Cognitive Wireless Mesh Networks As the scarce spectrum resource is becoming overcrowded, cognitive radio indicates great flexibility to improve the spectrum efficiency by opportunistically accessing the authorized frequency bands. One of the critical challenges for operating such radios in a network is how to efficiently allocate transmission powers and frequency resource among the secondary users (SUs) while satisfying the quality-of-service constraints of the primary users. In this paper, we focus on the noncooperative power allocation problem in cognitive wireless mesh networks formed by a number of clusters with the consideration of energy efficiency. Due to the SUs' dynamic and spontaneous properties, the problem is modeled as a stochastic learning process. We first extend single-agent Q-learning to a multiuser context, and then propose a conjecture-based multiagent Q-learning algorithm to achieve the optimal transmission strategies with only private and incomplete information. An intelligent SU performs Q-function updates based on the conjecture over the other SUs' stochastic behaviors. This learning algorithm provably converges given certain restrictions that arise during the learning procedure. Simulation experiments are used to verify the performance of our algorithm and demonstrate its effectiveness in improving the energy efficiency.
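A minimal independent (stateless) Q-learning sketch for discrete transmit-power selection by several secondary users; the conjecture model over other users' behaviors from the paper is not reproduced, only the basic Q-update each agent performs, and the link gains and reward form are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_powers = 3, 4
powers = np.array([0.1, 0.5, 1.0, 2.0])              # candidate transmit powers (W)
gain = rng.uniform(0.5, 1.5, (n_agents, n_agents))   # assumed cross-link gains
noise = 0.1

def reward(a, choice):
    """Energy-efficiency style reward: achievable rate per unit power for agent a."""
    p = powers[choice]
    interference = sum(gain[j, a] * p[j] for j in range(n_agents) if j != a)
    sinr = gain[a, a] * p[a] / (noise + interference)
    return np.log2(1 + sinr) / p[a]

Q = np.zeros((n_agents, n_powers))                   # stateless Q-values per agent
alpha, eps = 0.1, 0.1
for t in range(5000):
    # epsilon-greedy action selection by every agent
    choice = [int(rng.integers(n_powers)) if rng.random() < eps else int(Q[a].argmax())
              for a in range(n_agents)]
    for a in range(n_agents):
        r = reward(a, choice)
        Q[a, choice[a]] += alpha * (r - Q[a, choice[a]])   # stateless Q-learning update
print("greedy power choices:", powers[Q.argmax(axis=1)])
```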
Vehicular Sensing Networks in a Smart City: Principles, Technologies and Applications. Given the escalating population across the globe, it has become paramount to construct smart cities, aiming for improving the management of urban flows relying on efficient information and communication technologies (ICT). Vehicular sensing networks (VSNs) play a critical role in maintaining the efficient operation of smart cities. Naturally, there are numerous challenges to be solved before the w...
Learning Feature Recovery Transformer for Occluded Person Re-Identification One major issue that challenges person re-identification (Re-ID) is the ubiquitous occlusion over the captured persons. There are two main challenges for the occluded person Re-ID problem, i.e., the interference of noise during feature matching and the loss of pedestrian information brought by the occlusions. In this paper, we propose a new approach called Feature Recovery Transformer (FRT) to address the two challenges simultaneously, which mainly consists of visibility graph matching and feature recovery transformer. To reduce the interference of the noise during feature matching, we mainly focus on visible regions that appear in both images and develop a visibility graph to calculate the similarity. In terms of the second challenge, based on the developed graph similarity, for each query image, we propose a recovery transformer that exploits the feature sets of its k-nearest neighbors in the gallery to recover the complete features. Extensive experiments across different person Re-ID datasets, including occluded, partial and holistic datasets, demonstrate the effectiveness of FRT. Specifically, FRT significantly outperforms state-of-the-art results by at least 6.2% Rank-1 accuracy and 7.2% mAP scores on the challenging Occluded-Duke dataset.
1.11
0.1
0.1
0.1
0.1
0.1
0.06
0.02
0
0
0
0
0
0
Age of Information in Multihop Connections With Tributary Traffic and No Preemption Age of Information (AoI) has gained significant attention from the research community because of its applications to Internet of Things (IoT) monitoring and control. In this work, we treat multihop connections over queuing networks with tributary flows and non-preemptive service: packets cannot be discarded because they are utilized for other system objectives, such as data analytics. Without preemption, the key tool for optimizing AoI is then the scheduling policy between the different data flows at each intermediate node. This is the subject of our analysis, along with the impact of packet erasure on the age. We derive upper and lower bounds for the average AoI considering several queuing policies in arbitrary network topologies, and present the results in different scenarios. Network topology, tributary traffic load, and link characteristics such as packet erasure generate complex trade-offs, which affect the optimal operation point and the age performance. The scheduling strategy at each node can also affect performance and fairness among users, particularly at critical bottleneck links, which have a significant impact on the overall performance of the whole network.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers—all of them capable of stabilizing a specific LTI process—in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
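A forward-pass-only NumPy sketch of a bidirectional RNN layer: one vanilla tanh RNN run over the sequence in forward time, one in reverse, with the two hidden states concatenated at every step; the sizes and initialization are assumptions, and no training is shown.

```python
import numpy as np

def rnn_pass(x_seq, Wx, Wh, b, reverse=False):
    """Vanilla tanh RNN over a sequence; optionally processed in reverse time order."""
    steps = range(len(x_seq) - 1, -1, -1) if reverse else range(len(x_seq))
    h = np.zeros(Wh.shape[0])
    outputs = [None] * len(x_seq)
    for t in steps:
        h = np.tanh(Wx @ x_seq[t] + Wh @ h + b)
        outputs[t] = h
    return np.stack(outputs)

def birnn_forward(x_seq, params_f, params_b):
    """Concatenate forward and backward hidden states at every time step."""
    hf = rnn_pass(x_seq, *params_f, reverse=False)
    hb = rnn_pass(x_seq, *params_b, reverse=True)
    return np.concatenate([hf, hb], axis=1)

rng = np.random.default_rng(0)
T, d_in, d_h = 6, 8, 5
x = rng.normal(size=(T, d_in))
make = lambda: (rng.normal(scale=0.1, size=(d_h, d_in)),   # input-to-hidden weights
                rng.normal(scale=0.1, size=(d_h, d_h)),    # hidden-to-hidden weights
                np.zeros(d_h))                             # bias
print(birnn_forward(x, make(), make()).shape)   # (6, 10): forward + backward states
```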
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidences intended to Bob, and non-repudiation of receipt evidences destined to Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with trusted third party (TTP). For the later ones we discuss the evolution of the TTP's involvement and, between others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidences.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb for designing the GA operators and selecting GA parameters; instead, trial and error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally rather than with fixed probabilities. Because there is no crossover rate or mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional GA and other methods.
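The abstract does not spell out the exact conditions that replace the crossover and mutation rates, so the sketch below is only a hypothetical illustration of the idea of conditional operators on a toy set-covering instance: crossover is applied only when the parents differ enough, and mutation only when an offspring fails to improve on its parent. Both conditions, the instance, and all parameter values are assumptions, not the paper's design.

```python
import random

# Toy set-covering instance: universe {0..9}, candidate subsets (assumed for illustration).
SUBSETS = [{0, 1, 2}, {2, 3, 4}, {4, 5}, {5, 6, 7}, {7, 8, 9}, {1, 3, 8}, {0, 9}]
UNIVERSE = set(range(10))

def fitness(chrom):
    # Penalise uncovered elements heavily, then prefer fewer chosen subsets.
    covered = set()
    for i, gene in enumerate(chrom):
        if gene:
            covered |= SUBSETS[i]
    return -(10 * len(UNIVERSE - covered) + sum(chrom))

def conditional_crossover(p1, p2):
    # Assumed condition: recombine only if the parents differ in at least two genes.
    if sum(a != b for a, b in zip(p1, p2)) < 2:
        return p1[:], p2[:]
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def conditional_mutation(child, parent_fit):
    # Assumed condition: flip one gene only if the child failed to improve on its parent.
    if fitness(child) <= parent_fit:
        i = random.randrange(len(child))
        child[i] ^= 1
    return child

def evolve(pop_size=20, generations=100):
    pop = [[random.randint(0, 1) for _ in SUBSETS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:2]                      # elitism
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop[:10], 2)
            c1, c2 = conditional_crossover(p1, p2)
            nxt += [conditional_mutation(c1, fitness(p1)),
                    conditional_mutation(c2, fitness(p2))]
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

print(evolve())   # best cover found, as a 0/1 selection over SUBSETS
```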
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Ethical Considerations Of Applying Robots In Kindergarten Settings: Towards An Approach From A Macroperspective In child-robot interaction (cHRI) research, many studies pursue the goal of developing interactive systems that can be applied in everyday settings. For early education, increasingly, the setting of a kindergarten is targeted. However, when cHRI and research are brought into a kindergarten, a range of ethical and related procedural aspects have to be considered and dealt with. While ethical models elaborated within other human-robot interaction settings, e.g., assisted living contexts, can provide some important indicators for relevant issues, we argue that it is important to start developing a systematic approach to identify and tackle those ethical issues which arise with cHRI in kindergarten settings on a more global level and address the impact of the technology from a macroperspective beyond the effects on the individual. Based on our experience in conducting studies with children in general and pedagogical considerations on the role of the institution of kindergarten in particular, in this paper, we lay out some relevant aspects that have barely been addressed in an explicit way in current cHRI research. Four areas are analyzed and key ethical issues are identified in each area: (1) the institutional setting of a kindergarten, (2) children as a vulnerable group, (3) the caregivers' role, and (4) pedagogical concepts. With our considerations, we aim at (i) broadening the methodology of the current studies within the area of cHRI, (ii) revalidating it based on our comprehensive empirical experience with research in kindergarten settings, both laboratory and real-world contexts, and (iii) providing a framework for the development of a more systematic approach to address the ethical issues in cHRI research within kindergarten settings.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Energy-Efficient Relay-Selection-Based Dynamic Routing Algorithm for IoT-Oriented Software-Defined WSNs In this article, a dynamic routing algorithm based on energy-efficient relay selection (RS), referred to as DRA-EERS, is proposed to adapt to the higher dynamics in time-varying software-defined wireless sensor networks (SDWSNs) for the Internet-of-Things (IoT) applications. First, the time-varying features of SDWSNs are investigated from which the state-transition probability (STP) of the node is calculated based on a Markov chain. Second, a dynamic link weight is designed for DRA-EERS by incorporating both the link reward and the link cost, where the link reward is related to the link energy efficiency (EE) and the node STP, while the link cost is affected by the locations of nodes. Moreover, one adjustable coefficient is used to balance the link reward and the link cost. Finally, the energy-efficient routing problem can be formulated as an optimization problem, and DRA-EERS is performed to find the best relay according to the energy-efficient RS criteria derived from the designed link weight. The simulation results demonstrate that the path EE obtained by DRA-EERS through an available coefficient adjustment outperforms that by Dijkstra's shortest path algorithm. Again, a tradeoff between the EE and the throughput can be achieved by adjusting the coefficient of the link weight, i.e., increasing the impact of the link reward to improve the EE, and otherwise, to improve the throughput.
Artificial fish swarm algorithm: a survey of the state-of-the-art, hybridization, combinatorial and indicative applications AFSA (artificial fish-swarm algorithm) is one of the best optimization methods among the swarm intelligence algorithms. This algorithm is inspired by the collective movement of fish and their various social behaviors. Based on a series of instinctive behaviors, the fish always try to maintain their colonies and accordingly demonstrate intelligent behaviors. Searching for food, immigration and dealing with dangers all happen in a social form, and interactions between all fish in a group result in an intelligent social behavior. This algorithm has many advantages, including high convergence speed, flexibility, fault tolerance and high accuracy. This paper is a review of the AFSA algorithm and describes the evolution of this algorithm along with all improvements, its combination with various methods as well as its applications. There are many optimization methods which have an affinity with this method, and combining them with AFSA can improve its performance. Its disadvantages include high time complexity, lack of balance between global and local search, and lack of benefiting from the experiences of group members for subsequent movements.
A dynamic N threshold prolong lifetime method for wireless sensor nodes. Ubiquitous computing is a technology to assist many computers available around the physical environment at any place and anytime. This service tends to be invisible from users in everyday life. Ubiquitous computing uses sensors extensively to provide important information such that applications can adjust their behavior. A Wireless Sensor Network (WSN) has been applied to implement such an architecture. To ensure continuous service, a dynamic N threshold power saving method for WSN is developed. A threshold N has been derived to obtain minimum power consumption for the sensor node while considering each different data arrival rate. We proposed a theoretical analysis regarding the probability variation for each state considering different arrival rate, service rate and collision probability. Several experiments have been conducted to demonstrate the effectiveness of our research. Our method can be applied to prolong the service time of a ubiquitous computing network to cope with the network disconnection issue.
Fuzzy Mathematical Programming and Self-Adaptive Artificial Fish Swarm Algorithm for Just-in-Time Energy-Aware Flow Shop Scheduling Problem With Outsourcing Option Flow shop scheduling (FSS) problem constitutes a major part of production planning in every manufacturing organization. It aims at determining the optimal sequence of processing jobs on available machines within a given customer order. In this article, a novel biobjective mixed-integer linear programming (MILP) model is proposed for FSS with an outsourcing option and just-in-time delivery in order to simultaneously minimize the total cost of the production system and total energy consumption. Each job is considered to be either scheduled in-house or to be outsourced to one of the possible subcontractors. To efficiently solve the problem, a hybrid technique is proposed based on an interactive fuzzy solution technique and a self-adaptive artificial fish swarm algorithm (SAAFSA). The proposed model is treated as a single objective MILP using a multiobjective fuzzy mathematical programming technique based on the ε-constraint, and SAAFSA is then applied to provide Pareto optimal solutions. The obtained results demonstrate the usefulness of the suggested methodology and high efficiency of the algorithm in comparison with CPLEX solver in different problem instances. Finally, a sensitivity analysis is implemented on the main parameters to study the behavior of the objectives according to the real-world conditions.
Deep Reinforcement Learning for Energy-Efficient Federated Learning in UAV-Enabled Wireless Powered Networks Federated learning (FL) is a promising solution to privacy preservation for data-driven deep learning approaches. However, enabling FL in unmanned aerial vehicle (UAV)-assisted wireless networks is still challenging due to limited resources and battery capacity in the UAV and user devices. In this regard, we propose a deep reinforcement learning (DRL)-based framework for joint UAV placement and re...
Energy-Efficient Optimization for Wireless Information and Power Transfer in Large-Scale MIMO Systems Employing Energy Beamforming In this letter, we consider a large-scale multiple-input multiple-output (MIMO) system where the receiver should harvest energy from the transmitter by wireless power transfer to support its wireless information transmission. The energy beamforming in the large-scale MIMO system is utilized to address the challenging problem of long-distance wireless power transfer. Furthermore, considering the limitation of the power in such a system, this letter focuses on the maximization of the energy efficiency of information transmission (bit per Joule) while satisfying the quality-of-service (QoS) requirement, i.e. delay constraint, by jointly optimizing transfer duration and transmit power. By solving the optimization problem, we derive an energy-efficient resource allocation scheme. Numerical results validate the effectiveness of the proposed scheme.
Accurate Self-Localization in RFID Tag Information Grids Using FIR Filtering Grid navigation spaces nested with the radio-frequency identification (RFID) tags are promising for industrial and other needs, because each tag can deliver information about a local two-dimensional or three-dimensional surrounding. The approach, however, requires high accuracy in vehicle self-localization. Otherwise, errors may lead to collisions, possibly even fatal ones. We propose a new extended finite impulse response (EFIR) filtering algorithm and show that it meets this need. The EFIR filter requires an optimal averaging interval, but does not involve the noise statistics which are often not well known to the engineer. It is more accurate than the extended Kalman filter (EKF) under real operation conditions and its iterative algorithm has the Kalman form. Better performance of the proposed EFIR filter is demonstrated based on extensive simulations in a comparison to EKF, which is widely used in RFID tag grids. We also show that errors in noise covariances may provoke divergence in EKF, whereas the EFIR filter remains stable and is thus more robust.
Evolutionary computation: comments on the history and current state Evolutionary computation has started to receive significant attention during the last decade, although the origins can be traced back to the late 1950's. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) (with links to genetic programming (GP) and classifier systems (CS)), evolution strategies (ES), and evolutionary programming (EP) by analysis and comparison of their most important constituents (i.e. representations, variation operators, reproduction, and selection mechanism). Finally, we give a brief overview on the manifold of application domains, although this necessarily must remain incomplete
Supporting social navigation on the World Wide Web This paper discusses a navigation behavior on Internet information services, in particular the World Wide Web, which is characterized by pointing out of information using various communication tools. We call this behavior social navigation as it is based on communication and interaction with other users, be that through email, or any other means of communication. Social navigation phenomena are quite common although most current tools (like Web browsers or email clients) offer very little support for it. We describe why social navigation is useful and how it can be better supported in future systems. We further describe two prototype systems that, although originally not designed explicitly as tools for social navigation, provide features that are typical for social navigation systems. One of these systems, the Juggler system, is a combination of a textual virtual environment and a Web client. The other system is a prototype of a Web hotlist organizer, called Vortex. We use both systems to describe fundamental principles of social navigation systems.
Proofs of Storage from Homomorphic Identification Protocols Proofs of storage (PoS) are interactive protocols allowing a client to verify that a server faithfully stores a file. Previous work has shown that proofs of storage can be constructed from any homomorphic linear authenticator (HLA). The latter, roughly speaking, are signature/message authentication schemes where `tags' on multiple messages can be homomorphically combined to yield a `tag' on any linear combination of these messages. We provide a framework for building public-key HLAs from any identification protocol satisfying certain homomorphic properties. We then show how to turn any public-key HLA into a publicly-verifiable PoS with communication complexity independent of the file length and supporting an unbounded number of verifications. We illustrate the use of our transformations by applying them to a variant of an identification protocol by Shoup, thus obtaining the first unbounded-use PoS based on factoring (in the random oracle model).
Design, Implementation, and Experimental Results of a Quaternion-Based Kalman Filter for Human Body Motion Tracking Real-time tracking of human body motion is an important technology in synthetic environments, robotics, and other human-computer interaction applications. This paper presents an extended Kalman filter designed for real-time estimation of the orientation of human limb segments. The filter processes data from small inertial/magnetic sensor modules containing triaxial angular rate sensors, accelerometers, and magnetometers. The filter represents rotation using quaternions rather than Euler angles or axis/angle pairs. Preprocessing of the acceleration and magnetometer measurements using the Quest algorithm produces a computed quaternion input for the filter. This preprocessing reduces the dimension of the state vector and makes the measurement equations linear. Real-time implementation and testing results of the quaternion-based Kalman filter are presented. Experimental results validate the filter design, and show the feasibility of using inertial/magnetic sensor modules for real-time human body motion tracking
Reinforcement Q-learning for optimal tracking control of linear discrete-time systems with unknown dynamics. In this paper, a novel approach based on the Q-learning algorithm is proposed to solve the infinite-horizon linear quadratic tracker (LQT) for unknown discrete-time systems in a causal manner. It is assumed that the reference trajectory is generated by a linear command generator system. An augmented system composed of the original system and the command generator is constructed and it is shown that the value function for the LQT is quadratic in terms of the state of the augmented system. Using the quadratic structure of the value function, a Bellman equation and an augmented algebraic Riccati equation (ARE) for solving the LQT are derived. In contrast to the standard solution of the LQT, which requires the solution of an ARE and a noncausal difference equation simultaneously, in the proposed method the optimal control input is obtained by only solving an augmented ARE. A Q-learning algorithm is developed to solve online the augmented ARE without any knowledge about the system dynamics or the command generator. Convergence to the optimal solution is shown. A simulation example is used to verify the effectiveness of the proposed control scheme.
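For readers unfamiliar with the quadratic structure mentioned above, a hedged sketch of the discounted Q-function on the augmented state is given below; here X_k stacks the system state and the reference, r(·,·) is the one-step utility, and the discount factor γ and the block partition of the kernel matrix H are notational assumptions rather than quotes from the paper.

```latex
% Assumed quadratic Q-function on the augmented state X_k = [x_k^T  r_k^T]^T,
% with discount factor gamma and kernel matrix H partitioned into blocks.
Q(X_k,u_k) \;=\;
  \begin{bmatrix} X_k \\ u_k \end{bmatrix}^{\top}
  \begin{bmatrix} H_{XX} & H_{Xu} \\ H_{uX} & H_{uu} \end{bmatrix}
  \begin{bmatrix} X_k \\ u_k \end{bmatrix},
\qquad
Q(X_k,u_k) \;=\; r(X_k,u_k) + \gamma\, Q(X_{k+1},u_{k+1}),
\qquad
u_k \;=\; -\,H_{uu}^{-1} H_{uX}\, X_k .
```

Q-learning estimates H directly from measured trajectories by enforcing the Bellman identity along the data, which is why neither the system dynamics nor the command generator needs to be known.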
Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network. In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (including data from 63 patients with 34,281 events) and testing (including data from 19 patients with 8571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F-score of 0.99 were attained with the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
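A minimal PyTorch sketch of a six-block 1-D CNN of the kind described (channel widths, kernel sizes, dropout rate and the input segment length are assumptions for illustration, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

class OSANet(nn.Module):
    """Six 1-D convolution blocks (conv -> ReLU -> max-pool -> dropout), then a classifier.
    All sizes are illustrative assumptions."""
    def __init__(self, in_len=3000):
        super().__init__()
        blocks, ch_in = [], 1
        for ch_out in (16, 32, 64, 64, 128, 128):
            blocks += [nn.Conv1d(ch_in, ch_out, kernel_size=5, padding=2),
                       nn.ReLU(),
                       nn.MaxPool1d(2),
                       nn.Dropout(0.25)]
            ch_in = ch_out
        self.features = nn.Sequential(*blocks)
        self.classifier = nn.Linear(128 * (in_len // 2**6), 2)   # apnea vs. normal

    def forward(self, x):            # x: (batch, 1, in_len) single-lead ECG segment
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = OSANet()
print(model(torch.randn(4, 1, 3000)).shape)    # torch.Size([4, 2])
```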
Energy harvesting algorithm considering max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with poor energy always have a bad effect on the data rate or max flow. These nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we assume an energy harvesting WSN environment to investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy harvesting WSN environments, sensor nodes can obtain energy replenishment by using MCs or by collecting energy from nature themselves. In our research, we use MCs to improve the energy of the sensor nodes by performing multiple rounds of unified scheduling, and finally achieve the purpose of increasing the max flow at sinks. Firstly, we model this problem as a Linear Program (LP) to search the max flow in a round of charging scheduling and prove that the problem is NP-hard. In order to solve the problem, we propose a heuristic approach: deploying MCs in units of paths with the lowest-energy node given priority. To reduce the energy consumption of MCs and increase the charging efficiency, we also take the optimization of MCs’ moving distance into consideration. Finally, we extend the method to multiple rounds of scheduling, called Bottleneck. Simulation results show that Bottleneck performs well at increasing max flow.
1.2
0.2
0.2
0.2
0.2
0.04
0
0
0
0
0
0
0
0
Stochastic Geometry for Modeling, Analysis, and Design of Multi-Tier and Cognitive Cellular Wireless Networks: A Survey. For more than three decades, stochastic geometry has been used to model large-scale ad hoc wireless networks, and it has succeeded to develop tractable models to characterize and better understand the performance of these networks. Recently, stochastic geometry models have been shown to provide tractable yet accurate performance bounds for multi-tier and cognitive cellular wireless networks. Given...
On distances in uniformly random networks The distribution of Euclidean distances in Poisson point processes is determined. The main result is the density function of the distance to the n-th nearest neighbor of a homogeneous process in ℝ^m, which is shown to be governed by a generalized Gamma distribution. The result has many implications for large wireless networks of randomly distributed nodes.
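For reference, the generalized Gamma form of this result can be written as follows, where λ is the process intensity and c_m denotes the volume of the m-dimensional unit ball:

```latex
% Density of the distance R_n from the origin to the n-th nearest point of a
% homogeneous Poisson point process of intensity \lambda in R^m.
f_{R_n}(r) \;=\; e^{-\lambda c_m r^m}\,
                 \frac{m\,\bigl(\lambda c_m r^m\bigr)^{n}}{r\,\Gamma(n)},
\qquad r \ge 0 .
```

This follows from the fact that the number of points within distance r of the origin is Poisson with mean λ c_m r^m.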
Artificial Intelligence Enabled Wireless Networking for 5G and Beyond: Recent Advances and Future Challenges 5G wireless communication networks are currently being deployed, and B5G networks are expected to be developed over the next decade. AI technologies and, in particular, ML have the potential to efficiently solve the unstructured and seemingly intractable problems by involving large amounts of data that need to be dealt with in B5G. This article studies how AI and ML can be leveraged for the design and operation of B5G networks. We first provide a comprehensive survey of recent advances and future challenges that result from bringing AI/ML technologies into B5G wireless networks. Our survey touches on different aspects of wireless network design and optimization, including channel measurements, modeling, and estimation, physical layer research, and network management and optimization. Then ML algorithms and applications to B5G networks are reviewed, followed by an overview of standard developments of applying AI/ML algorithms to B5G networks. We conclude this study with future challenges on applying AI/ML to B5G networks.
Federated Machine Learning: Concept and Applications. Today’s artificial intelligence still faces two major challenges. One is that, in most industries, data exists in the form of isolated islands. The other is the strengthening of data privacy and security. We propose a possible solution to these challenges: secure federated learning. Beyond the federated-learning framework first proposed by Google in 2016, we introduce a comprehensive secure federated-learning framework, which includes horizontal federated learning, vertical federated learning, and federated transfer learning. We provide definitions, architectures, and applications for the federated-learning framework, and provide a comprehensive survey of existing works on this subject. In addition, we propose building data networks among organizations based on federated mechanisms as an effective solution to allowing knowledge to be shared without compromising user privacy.
Federated Learning over Wireless Fading Channels We study federated machine learning at the wireless network edge, where limited power wireless devices, each with its own dataset, build a joint model with the help of a remote parameter server (PS). We consider a bandwidth-limited fading multiple access channel (MAC) from the wireless devices to the PS, and propose various techniques to implement distributed stochastic gradient descent (DSGD) ove...
Fuzzy logic in control systems: fuzzy logic controller. I.
Robust Indoor Positioning Provided by Real-Time RSSI Values in Unmodified WLAN Networks The positioning methods based on received signal strength (RSS) measurements link the RSS values to the position of the mobile station (MS) to be located. Their accuracy depends on the suitability of the propagation models used for the actual propagation conditions. In indoor wireless networks, these propagation conditions are very difficult to predict due to the unwieldy and dynamic nature of the RSS. In this paper, we present a novel method which dynamically estimates the propagation models that best fit the propagation environments, by using only RSS measurements obtained in real time. This method is based on maximizing compatibility of the MS to access points (AP) distance estimates. Once the propagation models are estimated in real time, it is possible to accurately determine the distance between the MS and each AP. By means of these distance estimates, the location of the MS can be obtained by trilateration. The method proposed coupled with simulations and measurements in a real indoor environment, demonstrates its feasibility and suitability, since it outperforms conventional RSS-based indoor location methods without using any radio map information nor a calibration stage.
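As a rough sketch of the final trilateration step only (not the paper's adaptive model-estimation method), the snippet below converts RSS values to distances with an assumed log-distance path-loss model and solves the linearized circle equations by least squares; the reference power RSS0, the path-loss exponent and the AP layout are all illustrative assumptions.

```python
import numpy as np

# Assumed log-distance path-loss model: RSS(d) = RSS0 - 10*n*log10(d/d0), with d0 = 1 m.
def rss_to_distance(rss, rss0=-40.0, n=3.0):
    return 10 ** ((rss0 - rss) / (10 * n))

def trilaterate(aps, dists):
    """Least-squares position from AP coordinates (k x 2) and distance estimates (k,).
    Linearised by subtracting the last AP's circle equation from the others."""
    aps, dists = np.asarray(aps, float), np.asarray(dists, float)
    xk, yk, dk = aps[-1, 0], aps[-1, 1], dists[-1]
    A = 2 * (aps[:-1] - aps[-1])
    b = (dk**2 - dists[:-1]**2
         + aps[:-1, 0]**2 - xk**2 + aps[:-1, 1]**2 - yk**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

aps = [(0, 0), (10, 0), (0, 10), (10, 10)]
true = np.array([3.0, 4.0])
rss = [-40 - 30 * np.log10(np.linalg.norm(true - np.array(p))) for p in aps]
print(trilaterate(aps, [rss_to_distance(r) for r in rss]))   # ~ [3. 4.]
```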
New approach using ant colony optimization with ant set partition for fuzzy control design applied to the ball and beam system. In this paper we describe the design of a fuzzy logic controller for the ball and beam system using a modified Ant Colony Optimization (ACO) method for optimizing the type of membership functions, the parameters of the membership functions and the fuzzy rules. This is achieved by applying a systematic and hierarchical optimization approach modifying the conventional ACO algorithm using an ant set partition strategy. The simulation results show that the proposed algorithm achieves better results than the classical ACO algorithm for the design of the fuzzy controller.
Integrating structured biological data by Kernel Maximum Mean Discrepancy Motivation: Many problems in data integration in bioinformatics can be posed as one common question: Are two sets of observations generated by the same distribution? We propose a kernel-based statistical test for this problem, based on the fact that two distributions are different if and only if there exists at least one function having different expectation on the two distributions. Consequently we use the maximum discrepancy between function means as the basis of a test statistic. The Maximum Mean Discrepancy (MMD) can take advantage of the kernel trick, which allows us to apply it not only to vectors, but strings, sequences, graphs, and other common structured data types arising in molecular biology. Results: We study the practical feasibility of an MMD-based test on three central data integration tasks: Testing cross-platform comparability of microarray data, cancer diagnosis, and data-content based schema matching for two different protein function classification schemas. In all of these experiments, including high-dimensional ones, MMD is very accurate in finding samples that were generated from the same distribution, and outperforms its best competitors. Conclusions: We have defined a novel statistical test of whether two samples are from the same distribution, compatible with both multivariate and structured data, that is fast, easy to implement, and works well, as confirmed by our experiments. Contact: kb@dbs.ifi.lmu.de
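A minimal NumPy sketch of the (biased) squared-MMD estimate with a Gaussian RBF kernel, the quantity the test statistic is built on; the kernel bandwidth and sample sizes are arbitrary, and the actual test also requires a null-distribution threshold, which is omitted here.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Gaussian RBF kernel matrix between sample sets a (n x d) and b (m x d).
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def mmd2_biased(x, y, sigma=1.0):
    """Biased estimate of MMD^2: mean k(x,x') - 2 mean k(x,y) + mean k(y,y')."""
    return (rbf_kernel(x, x, sigma).mean()
            - 2 * rbf_kernel(x, y, sigma).mean()
            + rbf_kernel(y, y, sigma).mean())

rng = np.random.default_rng(0)
same = mmd2_biased(rng.normal(0, 1, (200, 5)), rng.normal(0, 1, (200, 5)))
diff = mmd2_biased(rng.normal(0, 1, (200, 5)), rng.normal(1, 1, (200, 5)))
print(f"same distribution: {same:.4f}, shifted distribution: {diff:.4f}")
```

With structured data (strings, graphs), the only change is swapping the RBF kernel for an appropriate structured kernel, which is the kernel-trick advantage the abstract emphasizes.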
Noninterference for a Practical DIFC-Based Operating System The Flume system is an implementation of decentralized information flow control (DIFC) at the operating system level. Prior work has shown Flume can be implemented as a practical extension to the Linux operating system, allowing real Web applications to achieve useful security guarantees. However, the question remains if the Flume system is actually secure. This paper compares Flume with other recent DIFC systems like Asbestos, arguing that the latter is inherently susceptible to certain wide-bandwidth covert channels, and proving their absence in Flume by means of a noninterference proof in the communicating sequential processes formalism.
A three-network architecture for on-line learning and optimization based on adaptive dynamic programming In this paper, we propose a novel adaptive dynamic programming (ADP) architecture with three networks, an action network, a critic network, and a reference network, to develop internal goal-representation for online learning and optimization. Unlike the traditional ADP design normally with an action network and a critic network, our approach integrates the third network, a reference network, into the actor-critic design framework to automatically and adaptively build an internal reinforcement signal to facilitate learning and optimization over time to accomplish goals. We present the detailed design architecture and its associated learning algorithm to explain how effective learning and optimization can be achieved in this new ADP architecture. Furthermore, we test the performance of our architecture both on the cart-pole balancing task and the triple-link inverted pendulum balancing task, which are popular benchmarks in the community, to demonstrate its learning and control performance over time.
Internet of Things for Smart Cities The Internet of Things (IoT) shall be able to incorporate transparently and seamlessly a large number of different and heterogeneous end systems, while providing open access to selected subsets of data for the development of a plethora of digital services. Building a general architecture for the IoT is hence a very complex task, mainly because of the extremely large variety of devices, link layer technologies, and services that may be involved in such a system. In this paper, we focus specifically on urban IoT systems that, while still being quite a broad category, are characterized by their specific application domain. Urban IoTs, in fact, are designed to support the Smart City vision, which aims at exploiting the most advanced communication technologies to support added-value services for the administration of the city and for the citizens. This paper hence provides a comprehensive survey of the enabling technologies, protocols, and architecture for an urban IoT. Furthermore, the paper will present and discuss the technical solutions and best-practice guidelines adopted in the Padova Smart City project, a proof-of-concept deployment of an IoT island in the city of Padova, Italy, performed in collaboration with the city municipality.
Inter-class sparsity based discriminative least square regression Least square regression is a very popular supervised classification method. However, two main issues greatly limit its performance. The first one is that it only focuses on fitting the input features to the corresponding output labels while ignoring the correlations among samples. The second one is that the used label matrix, i.e., zero–one label matrix is inappropriate for classification. To solve these problems and improve the performance, this paper presents a novel method, i.e., inter-class sparsity based discriminative least square regression (ICS_DLSR), for multi-class classification. Different from other methods, the proposed method pursues that the transformed samples have a common sparsity structure in each class. For this goal, an inter-class sparsity constraint is introduced to the least square regression model such that the margins of samples from the same class can be greatly reduced while those of samples from different classes can be enlarged. In addition, an error term with row-sparsity constraint is introduced to relax the strict zero–one label matrix, which allows the method to be more flexible in learning the discriminative transformation matrix. These factors encourage the method to learn a more compact and discriminative transformation for regression and thus has the potential to perform better than other methods. Extensive experimental results show that the proposed method achieves the best performance in comparison with other methods for multi-class classification.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.2
0.2
0.1
0.022222
0.004878
0
0
0
0
0
0
0
0
0
Generating Sentences from a Continuous Space. The standard recurrent neural network language model (RNNLM) generates sentences one word at a time and does not work from an explicit global sentence representation. In this work, we introduce and study an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences. This factorization allows it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features. Samples from the prior over these sentence representations remarkably produce diverse and well-formed sentences through simple deterministic decoding. By examining paths through this latent space, we are able to generate coherent novel sentences that interpolate between known sentences. We present techniques for solving the difficult learning problem presented by this model, demonstrate its effectiveness in imputing missing words, explore many interesting properties of the model's latent sentence space, and present negative results on the use of the model in language modeling.
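The training objective behind such a model is the standard variational lower bound (ELBO); writing x for a sentence and z for its latent code, with q_φ the RNN encoder and p_θ the RNN decoder:

```latex
% Variational lower bound (ELBO) optimised by a sentence VAE.
\log p_\theta(x) \;\ge\;
  \mathbb{E}_{q_\phi(z\mid x)}\bigl[\log p_\theta(x\mid z)\bigr]
  \;-\; \mathrm{KL}\bigl(q_\phi(z\mid x)\,\big\|\,p(z)\bigr).
```

The learning difficulties the abstract alludes to stem largely from the KL term collapsing toward zero, which is commonly countered by annealing the weight on the KL term during training.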
An intelligent analyzer and understander of English The paper describes a working analysis and generation program for natural language, which handles paragraph-length input. Its core is a system of preferential choice between deep semantic patterns, based on what we call “semantic density.” The system is contrasted with syntax-oriented linguistic approaches, and with theorem-proving approaches to the understanding problem.
Hitting the right paraphrases in good time We present a random-walk-based approach to learning paraphrases from bilingual parallel corpora. The corpora are represented as a graph in which a node corresponds to a phrase, and an edge exists between two nodes if their corresponding phrases are aligned in a phrase table. We sample random walks to compute the average number of steps it takes to reach a ranking of paraphrases with better ones being "closer" to a phrase of interest. This approach allows "feature" nodes that represent domain knowledge to be built into the graph, and incorporates truncation techniques to prevent the graph from growing too large for efficiency. Current approaches, by contrast, implicitly presuppose the graph to be bipartite, are limited to finding paraphrases that are of length two away from a phrase, and do not generally permit easy incorporation of domain knowledge. Manual evaluation of generated output shows that our approach outperforms the state-of-the-art system of Callison-Burch (2008).
Re-examining machine translation metrics for paraphrase identification We propose to re-examine the hypothesis that automated metrics developed for MT evaluation can prove useful for paraphrase identification in light of the significant work on the development of new MT metrics over the last 4 years. We show that a meta-classifier trained using nothing but recent MT metrics outperforms all previous paraphrase identification approaches on the Microsoft Research Paraphrase corpus. In addition, we apply our system to a second corpus developed for the task of plagiarism detection and obtain extremely positive results. Finally, we conduct extensive error analysis and uncover the top systematic sources of error for a paraphrase identification approach relying solely on MT metrics. We release both the new dataset and the error analysis annotations for use by the community.
QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension. Current end-to-end machine reading and question answering (Q&A) models are primarily based on recurrent neural networks (RNNs) with attention. Despite their success, these models are often slow for both training and inference due to the sequential nature of RNNs. We propose a new Q&A model that does not require recurrent networks: It consists exclusively of attention and convolutions, yet achieves equivalent or better performance than existing models. On the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference. The speed-up gain allows us to train the model with much more data. We hence combine our model with data generated by backtranslation from a neural machine translation model. This data augmentation technique not only enhances the training examples but also diversifies the phrasing of the sentences, which results in immediate accuracy improvements. Our single model achieves 84.6 F1 score on the test set, which is significantly better than the best published F1 score of 81.8.
Integrating Transformer and Paraphrase Rules for Sentence Simplification. Sentence simplification aims to reduce the complexity of a sentence while retaining its original meaning. Current models for sentence simplification adopted ideas from machine translation studies and implicitly learned simplification mapping rules from normal-simple sentence pairs. In this paper, we explore a novel model based on a multi-layer and multi-head attention architecture and we propose two innovative approaches to integrate the Simple PPDB (A Paraphrase Database for Simplification), an external paraphrase knowledge base for simplification that covers a wide range of real-world simplification rules. The experiments show that the integration provides two major benefits: (1) the integrated model outperforms multiple state-of-the-art baseline models for sentence simplification in the literature; (2) through analysis of the rule utilization, the model seeks to select more accurate simplification rules. The code and models used in the paper are available at this https URL Sanqiang/text_simplification.
Get To The Point: Summarization With Pointer-Generator Networks Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
Deep contextualized word representations. We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pretrained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pretrained network is crucial, allowing downstream models to mix different types of semi-supervision signals.
A tutorial on support vector regression In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective.
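For orientation, the ε-insensitive soft-margin primal at the core of SV regression (the standard formulation the tutorial builds on) is:

```latex
% epsilon-insensitive soft-margin SVR primal; C > 0 and epsilon >= 0 are user-chosen.
\min_{w,\,b,\,\xi,\,\xi^{*}}\;\; \tfrac{1}{2}\lVert w\rVert^{2}
    + C\sum_{i=1}^{\ell}\bigl(\xi_i + \xi_i^{*}\bigr)
\quad\text{s.t.}\quad
\begin{cases}
  y_i - \langle w, x_i\rangle - b \le \varepsilon + \xi_i,\\
  \langle w, x_i\rangle + b - y_i \le \varepsilon + \xi_i^{*},\\
  \xi_i,\ \xi_i^{*} \ge 0.
\end{cases}
```

Dualizing this problem yields the kernelized form that the quadratic-programming training algorithms in the tutorial solve.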
An effective implementation of the Lin–Kernighan traveling salesman heuristic This paper describes an implementation of the Lin–Kernighan heuristic, one of the most successful methods for generating optimal or near-optimal solutions for the symmetric traveling salesman problem (TSP). Computational tests show that the implementation is highly effective. It has found optimal solutions for all solved problem instances we have been able to obtain, including a 13,509-city problem (the largest non-trivial problem instance solved to optimality today).
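Lin–Kernighan itself performs variable-depth sequential edge exchanges; the sketch below shows only the simplest member of that move family, a plain 2-opt local search on random points, as a hedged illustration of the edge-exchange idea (instance size and tolerance are arbitrary, and this is not the implementation the paper describes).

```python
import math, random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(pts):
    """Repeatedly reverse a tour segment whenever doing so shortens the tour (2-opt)."""
    tour = list(range(len(pts)))
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(30)]
t = two_opt(pts)
print(round(tour_length(t, pts), 3))   # length of the locally optimal tour
```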
Node Reclamation and Replacement for Long-Lived Sensor Networks When deployed for long-term tasks, the energy required to support sensor nodes' activities is far more than the energy that can be preloaded in their batteries. No matter how the battery energy is conserved, once the energy is used up, the network life terminates. Therefore, guaranteeing long-term energy supply has persisted as a big challenge. To address this problem, we propose a node reclamation and replacement (NRR) strategy, with which a mobile robot or human labor called mobile repairman (MR) periodically traverses the sensor network, reclaims nodes with low or no power supply, replaces them with fully charged ones, and brings the reclaimed nodes back to an energy station for recharging. To effectively and efficiently realize the strategy, we develop an adaptive rendezvous-based two-tier scheduling scheme (ARTS) to schedule the replacement/reclamation activities of the MR and the duty cycles of nodes. Extensive simulations have been conducted to verify the effectiveness and efficiency of the ARTS scheme.
Internet of Things: A Survey on Enabling Technologies, Protocols and Applications This paper provides an overview of the Internet of Things (IoT) with emphasis on enabling technologies, protocols and application issues. The IoT is enabled by the latest developments in RFID, smart sensors, communication technologies and Internet protocols. The basic premise is to have smart sensors collaborate directly without human involvement to deliver a new class of applications. The current revolution in Internet, mobile and machine-to-machine (M2M) technologies can be seen as the first phase of the IoT. In the coming years, the IoT is expected to bridge diverse technologies to enable new applications by connecting physical objects together in support of intelligent decision making. This paper starts by providing a horizontal overview of the IoT. Then, we give an overview of some technical details that pertain to the IoT enabling technologies, protocols and applications. Compared to other survey papers in the field, our objective is to provide a more thorough summary of the most relevant protocols and application issues to enable researchers and application developers to get up to speed quickly on how the different protocols fit together to deliver desired functionalities without having to go through RFCs and the standards specifications. We also provide an overview of some of the key IoT challenges presented in the recent literature and provide a summary of related research work. Moreover, we explore the relation between the IoT and other emerging technologies including big data analytics and cloud and fog computing. We also present the need for better horizontal integration among IoT services. Finally, we present detailed service use-cases to illustrate how the different protocols presented in the paper fit together to deliver desired IoT services.
Applications of Deep Reinforcement Learning in Communications and Networking: A Survey. This paper presents a comprehensive literature review on applications of deep reinforcement learning (DRL) in communications and networking. Modern networks, e.g., Internet of Things (IoT) and unmanned aerial vehicle (UAV) networks, become more decentralized and autonomous. In such networks, network entities need to make decisions locally to maximize the network performance under uncertainty of network environment. Reinforcement learning has been efficiently used to enable the network entities to obtain the optimal policy including, e.g., decisions or actions, given their states when the state and action spaces are small. However, in complex and large-scale networks, the state and action spaces are usually large, and the reinforcement learning may not be able to find the optimal policy in reasonable time. Therefore, DRL, a combination of reinforcement learning with deep learning, has been developed to overcome the shortcomings. In this survey, we first give a tutorial of DRL from fundamental concepts to advanced models. Then, we review DRL approaches proposed to address emerging issues in communications and networking. The issues include dynamic network access, data rate control, wireless caching, data offloading, network security, and connectivity preservation which are all important to next generation networks, such as 5G and beyond. Furthermore, we present applications of DRL for traffic routing, resource sharing, and data collection. Finally, we highlight important challenges, open issues, and future research directions of applying DRL.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Above-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1.056027
0.05
0.05
0.05
0.05
0.05
0.020107
0.001188
0
0
0
0
0
0
A Novel Scheme for Tour Planning of Mobile Sink in Wireless Sensor Networks Exploiting a mobile sink (MS) for data gathering in wireless sensor networks has been extensively studied in recent research to address energy-hole issues, thereby facilitating balanced energy consumption among nodes and so prolonging network lifetime. However, such approaches suffer from an extended data collection delay, causing a buffer overflow problem. In this regard, finding the optimal number of locations (i.e. rendezvous points (RPs) where the MS sojourns for data collection) is not only of utmost importance but also a challenging task. A novel scheme for trajectory design of the MS for data collection is presented in this study. The authors' primary goal is to optimise the number of RPs and their locations to minimise the travelling length of the MS. First, they reduce the problem size by using a combination of breadth-first search and Tarjan's algorithm, and then apply spectral clustering to find the optimal set of RPs to plan the tour for the MS. They have performed extensive simulations, and the results are compared with relevant existing schemes. The comparative results confirm the effectiveness of their approach in terms of the number of RPs, path length, the variance of RPs, and energy consumption per round.
Artificial fish swarm algorithm: a survey of the state-of-the-art, hybridization, combinatorial and indicative applications AFSA (artificial fish-swarm algorithm) is one of the best optimization methods among the swarm intelligence algorithms. This algorithm is inspired by the collective movement of fish and their various social behaviors. Based on a series of instinctive behaviors, the fish always try to maintain their colonies and accordingly demonstrate intelligent behaviors. Searching for food, immigration and dealing with dangers all happen in a social form, and interactions between all fish in a group result in an intelligent social behavior. This algorithm has many advantages, including high convergence speed, flexibility, fault tolerance and high accuracy. This paper is a review of the AFSA algorithm and describes the evolution of this algorithm along with all improvements, its combination with various methods as well as its applications. There are many optimization methods which have an affinity with this method, and combining them with AFSA can improve its performance. Its disadvantages include high time complexity, lack of balance between global and local search, and lack of benefiting from the experiences of group members for subsequent movements.
A dynamic N threshold prolong lifetime method for wireless sensor nodes. Ubiquitous computing is a technology to assist many computers available around the physical environment at any place and anytime. This service tends to be invisible from users in everyday life. Ubiquitous computing uses sensors extensively to provide important information such that applications can adjust their behavior. A Wireless Sensor Network (WSN) has been applied to implement such an architecture. To ensure continuous service, a dynamic N threshold power saving method for WSN is developed. A threshold N has been derived to obtain minimum power consumption for the sensor node while considering each different data arrival rate. We proposed a theoretical analysis regarding the probability variation for each state considering different arrival rate, service rate and collision probability. Several experiments have been conducted to demonstrate the effectiveness of our research. Our method can be applied to prolong the service time of a ubiquitous computing network to cope with the network disconnection issue.
On Theoretical Modeling of Sensor Cloud: A Paradigm Shift From Wireless Sensor Network. This paper focuses on the theoretical modeling of sensor cloud, which is one of the first attempts in this direction. We endeavor to theoretically characterize virtualization, which is a fundamental mechanism for operations within the sensor-cloud architecture. Existing related research works on sensor cloud have primarily focused on the ideology and the challenges that wireless sensor network (WS...
Big Data Cleaning Based on Mobile Edge Computing in Industrial Sensor-Cloud. With the advent of 5G, the industrial Internet of Things has developed rapidly. The industrial sensor-cloud system (SCS) has also received widespread attention. In the future, a large number of integrated sensors that simultaneously collect multifeature data will be added to industrial SCS. However, the collected big data are not trustworthy due to the harsh environment of the sensor. If the data ...
Trajectory Design for UAV-Enabled Multiuser Wireless Power Transfer With Nonlinear Energy Harvesting In this paper, we study an unmanned aerial vehicle (UAV)-enabled multiuser wireless power transfer (WPT) network, where a UAV is responsible for providing wireless energy for a set of ground devices (GDs) deployed in an area. We focus on the design of UAV trajectory subject to the maximum flight speed limit, in order to maximize the minimum harvested energy among GDs over a particular charging duration. Different from prior works that considered simplified linear energy harvesting models, this paper for the first time takes into account the realistic nonlinear energy harvesting model for the UAV trajectory design. However, the formulated trajectory design problem is highly non-convex and has infinite number of variables, thus making it be challenging to be solved optimally. To tackle this difficulty, we adopt the following three-step approach to obtain an efficient solution. First, we rigorously characterize that the optimal trajectory follows a new successive-hover-and-fly (SHF) structure, where the UAV hovers at a certain set of points for efficiently transferring energy, and flies among these hovering points with the maximum speed following certain arcs (not necessarily straight lines). Next, based on this SHF structure, we transform the original problem to a new one for finding a set of turning point variables during the maximum-speed flight, at which the UAV changes the flight direction without hovering. Finally, we use the techniques of convex approximation to solve the transformed problem. According to the convexity of the nonlinear energy harvesting model, we iteratively solve a series of convex optimization problems to update the UAV trajectory towards a high-quality solution. Numerical results show the convergence of the proposed approach, and validate its performance gain over conventional designs.
Delay-Aware Green Routing for Mobile-Sink-Based Wireless Sensor Networks Mobile sinks were introduced in wireless sensor networks (WSNs) to mitigate the infamous hotspot problem. However, routing in mobile-sink-based WSNs requires frequent updation of sink location information to all the sensor nodes; which is an energy-expensive process for resource-constrained WSNs. Therefore, it is required to develop a green routing protocol that can minimize the energy overhead in sink location updation as well as reduce the data delivery delay. This article proposes a virtual-infrastructure-based delay-aware green routing protocol (DGRP) that creates multiple rings in the sensor field and limits the updation of mobile sink location information to the nodes belonging to the rings only. Simulation results show that DGRP outperforms existing routing protocols in terms of energy consumption and throughput. In addition to this, DGRP results in ≈26%, ≈39%, and ≈35% improvement in data delivery delay for a varying number of sensor nodes, sink speeds, and network sizes, respectively, when compared with the state of the art.
JPEG Error Analysis and Its Applications to Digital Image Forensics JPEG is one of the most extensively used image formats. Understanding the inherent characteristics of JPEG may play a useful role in digital image forensics. In this paper, we introduce JPEG error analysis to the study of image forensics. The main errors of JPEG include quantization, rounding, and truncation errors. Through theoretically analyzing the effects of these errors on single and double JPEG compression, we have developed three novel schemes for image forensics including identifying whether a bitmap image has previously been JPEG compressed, estimating the quantization steps of a JPEG image, and detecting the quantization table of a JPEG image. Extensive experimental results show that our new methods significantly outperform existing techniques especially for the images of small sizes. We also show that the new method can reliably detect JPEG image blocks which are as small as 8 × 8 pixels and compressed with quality factors as high as 98. This performance is important for analyzing and locating small tampered regions within a composite image.
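To make the three error sources concrete, the hedged sketch below runs one 8x8 block through a simplified JPEG-like pipeline (block DCT, quantization, dequantization, inverse DCT, rounding, truncation) and measures each error separately. The flat quantization table and the pipeline itself are illustrative simplifications, not the forensic schemes proposed in the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)   # one 8x8 pixel block
q_table = np.full((8, 8), 16.0)                            # toy quantization table (illustrative)

# Forward: level shift + 2-D DCT, as in baseline JPEG
coeffs = dctn(block - 128.0, norm="ortho")

# Quantization error: exact coefficients vs. quantized/dequantized coefficients
quantized = np.round(coeffs / q_table) * q_table
quant_err = coeffs - quantized

# Rounding and truncation errors appear when mapping back to 8-bit pixels
recon = idctn(quantized, norm="ortho") + 128.0
round_err = recon - np.round(recon)                              # rounding to integers
trunc_err = np.round(recon) - np.clip(np.round(recon), 0, 255)   # truncation to [0, 255]

print("max |quantization error|:", np.abs(quant_err).max())
print("max |rounding error|    :", np.abs(round_err).max())
print("max |truncation error|  :", np.abs(trunc_err).max())
```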
Efficient Multi-User Computation Offloading for Mobile-Edge Cloud Computing Mobile-edge cloud computing is a new paradigm to provide cloud computing capabilities at the edge of pervasive radio access networks in close proximity to mobile users. In this paper, we first study the multi-user computation offloading problem for mobile-edge cloud computing in a multi-channel wireless interference environment. We show that it is NP-hard to compute a centralized optimal solution, and hence adopt a game theoretic approach for achieving efficient computation offloading in a distributed manner. We formulate the distributed computation offloading decision making problem among mobile device users as a multi-user computation offloading game. We analyze the structural property of the game and show that the game admits a Nash equilibrium and possesses the finite improvement property. We then design a distributed computation offloading algorithm that can achieve a Nash equilibrium, derive the upper bound of the convergence time, and quantify its efficiency ratio over the centralized optimal solutions in terms of two important performance metrics. We further extend our study to the scenario of multi-user computation offloading in the multi-channel wireless contention environment. Numerical results corroborate that the proposed algorithm can achieve superior computation offloading performance and scale well as the user size increases.
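As an illustration of the improvement dynamics described above, the hedged sketch below lets each user iteratively switch between local computing and offloading on one of several channels, where the offloading cost grows with the number of users sharing a channel. The cost values are arbitrary and this is a simplification of the paper's game model, not its exact formulation.

```python
import random

random.seed(1)
NUM_USERS, NUM_CHANNELS = 8, 3
local_cost = [random.uniform(2.0, 6.0) for _ in range(NUM_USERS)]    # cost of computing locally
base_offload = [random.uniform(0.5, 2.0) for _ in range(NUM_USERS)]  # offload cost with no interference

# decision[i] = -1 means local computing, otherwise the chosen channel index
decision = [-1] * NUM_USERS

def cost(i, choice, decision):
    if choice == -1:
        return local_cost[i]
    sharing = sum(1 for j, c in enumerate(decision) if j != i and c == choice)
    return base_offload[i] * (1 + sharing)   # interference penalty grows with channel sharing

# Best-response sweeps; in the paper's game the finite improvement property guarantees
# convergence to a Nash equilibrium, so a bounded number of sweeps is used here.
for sweep in range(100):
    changed = False
    for i in range(NUM_USERS):
        options = [-1] + list(range(NUM_CHANNELS))
        best = min(options, key=lambda c: cost(i, c, decision))
        if cost(i, best, decision) < cost(i, decision[i], decision) - 1e-12:
            decision[i] = best
            changed = True
    if not changed:
        break

print("equilibrium decisions (-1 = local):", decision)
print("per-user costs:", [round(cost(i, decision[i], decision), 3) for i in range(NUM_USERS)])
```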
Statistical tools for digital forensics A digitally altered photograph, often leaving no visual clues of having been tampered with, can be indistinguishable from an authentic photograph. As a result, photographs no longer hold the unique stature as a definitive recording of events. We describe several statistical techniques for detecting traces of digital tampering in the absence of any digital watermark or signature. In particular, we quantify statistical correlations that result from specific forms of digital tampering, and devise detection schemes to reveal these correlations.
End-user programming architecture facilitates the uptake of robots in social therapies. This paper proposes an architecture that makes programming of robot behavior of arbitrary complexity possible for end-users, and presents the technical solutions in a way that is easy to understand and generalize to different situations. It aims to facilitate the uptake and actual use of robot technologies in therapies for training social skills in autistic children. The framework, however, generalizes readily to arbitrary human–robot interaction applications in which users with no technical background need to program robots, i.e. various assistive robotics applications. We identified the main needs of end-user programming of robots as a basic prerequisite for the uptake of robots in assistive applications: reusability, modularity, affordances for natural interaction, and ease of use. After reviewing the shortcomings of existing architectures, we developed an initial architecture according to these principles and embedded it in a robot platform. Further, we used a co-creation process to develop and concretize the architecture to facilitate solutions and create affordances for robot specialists and therapists. Several pilot tests showed that different user groups, including therapists with general computer skills and adolescents with autism, could create simple training or general behavioral scenarios within one hour by connecting existing behavioral blocks and by typing textual robot commands for fine-tuning the behaviors. In addition, this paper explains the basic concepts behind the TiViPE-based robot control platform, and gives guidelines for choosing a robot programming tool and designing end-user platforms for robots.
A review on interval type-2 fuzzy logic applications in intelligent control. A review of the applications of interval type-2 fuzzy logic in intelligent control is presented in this paper. The fundamental focus of the paper is on the basic reasons for using type-2 fuzzy controllers in different areas of application. Recently, bio-inspired methods have emerged as powerful optimization algorithms for solving complex problems. In the case of designing type-2 fuzzy controllers for particular applications, the use of bio-inspired optimization methods has helped in the complex task of finding the appropriate parameter values and structure of the fuzzy systems. In this review, we consider the application of genetic algorithms, particle swarm optimization and ant colony optimization as three different paradigms that help in the design of optimal type-2 fuzzy controllers. We also mention alternative approaches to designing type-2 fuzzy controllers without optimization techniques.
Flymap: Interacting With Maps Projected From A Drone Interactive maps have become ubiquitous in our daily lives, helping us reach destinations and discover our surroundings. Yet, designing map interactions is not straightforward and depends on the device being used. As mobile devices evolve and become independent from users, as with robots and drones, how will we interact with the maps they provide? We propose FlyMap as a novel user experience for drone-based interactive maps. We designed and developed three interaction techniques for FlyMap's usage scenarios. In a comprehensive indoor study (N = 16), we show the strengths and weaknesses of two techniques with respect to users' cognition, task load, and satisfaction. FlyMap was then pilot tested with the third technique outdoors, in real-world conditions, with four groups of participants (N = 13). We show that FlyMap's interactivity is exciting to users and opens the space for more direct interactions with drones.
Intention-detection strategies for upper limb exosuits: model-based myoelectric vs dynamic-based control The cognitive human-robot interaction between an exosuit and its wearer plays a key role in determining both the biomechanical effects of the device on movements and its perceived effectiveness. There is a lack of evidence, however, on the comparative performance of different control methods implemented on the same device. Here, we compare two different control approaches on the same robotic suit: a model-based myoelectric control (myoprocessor), which estimates the joint torque from the activation of target muscles, and a dynamic-based control that provides support against gravity using an inverse dynamic model. Tested on a cohort of four healthy participants, assistance from the exosuit results in a marked reduction in the effort of muscles working against gravity with both control approaches (peak reduction of 68.6±18.8% for the dynamic arm model and 62.4±25.1% for the myoprocessor), when compared to an unpowered condition. Neither of the two controllers had an effect on the performance of their users in a joint-angle tracking task (peak errors of 15.4° and 16.4° for the dynamic arm model and myoprocessor, respectively, compared to 13.1° in the unpowered condition). However, our results highlight the remarkable ability of the myoprocessor to seamlessly adapt to changing external dynamics.
Scores: 1.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.05, 0, 0, 0, 0, 0, 0, 0
Input-to-state stability of switched systems and switching adaptive control In this paper we prove that a switched nonlinear system has several useful input-to-state stable (ISS)-type properties under average dwell-time switching signals if each constituent dynamical system is ISS. This extends available results for switched linear systems. We apply our result to stabilization of uncertain nonlinear systems via switching supervisory control, and show that the plant states can be kept bounded in the presence of bounded disturbances when the candidate controllers provide ISS properties with respect to the estimation errors. Detailed illustrative examples are included.
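For reference, the standard definitions that underpin this result can be written as follows; these are the textbook forms of input-to-state stability and average dwell time, not equations quoted from the paper.

```latex
% Input-to-state stability (ISS): for some class-KL function beta and class-K function gamma,
\[
  |x(t)| \;\le\; \beta\bigl(|x(t_0)|,\, t - t_0\bigr) \;+\; \gamma\Bigl(\sup_{t_0 \le s \le t} |u(s)|\Bigr),
  \qquad \forall\, t \ge t_0 .
\]
% Average dwell time: the number of switches N_sigma on the interval (tau, t) satisfies
\[
  N_\sigma(t,\tau) \;\le\; N_0 + \frac{t-\tau}{\tau_a},
\]
where \(N_0\) is the chatter bound and \(\tau_a\) the average dwell time.
```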
Discrete-Time Switched Linear Systems State Feedback Design With Application to Networked Control This technical note addresses the state feedback switched control design problem for discrete-time switched linear systems. More specifically, the control goal is to jointly design a set of state feedback gains and a state dependent switching function, ensuring H2 and H∞ guaranteed performance. The conditions are based on Lyapunov or Riccati-Metzler inequalities, which allow the derivation of simpler alternative conditions that are expressed as LMIs whenever a scalar variable is fixed. The theoretical results are well adapted to deal with the self-triggered control design problem, where the switching rule is responsible for the scheduling of multiple sampling periods, to be considered in the communication channel in order to improve performance. This method is compared to others from the literature. Examples show the validity of the proposed technique in both contexts, switched and networked control systems.
Dynamic output feedback for switched linear systems based on a LQG design The aim of this paper is to extend the LQG design for linear system to the case of switched linear systems in continuous time. The main result provides a control Lyapunov function and a dynamic output feedback law leading to sub-optimal solutions. Practically, the dynamic output feedback is easy to apply and the design procedure is effective if there exists at least one controllable and observable convex combination of the subsystems. Practical applications concern the large class of power converters.
Dissipativity-Based Filtering for Fuzzy Switched Systems With Stochastic Perturbation. In this technical note, the dissipativity-based filtering problem is considered for a class of T-S fuzzy switched systems with stochastic perturbation. First, a sufficient condition for strict dissipativity performance is given to guarantee the mean-square exponential stability of the concerned T-S fuzzy switched system. Then, our attention is focused on the design of a filter for the T-S fuzzy switched system with Brownian motion. By combining the average dwell time technique with the piecewise Lyapunov function technique, the desired fuzzy filters are designed to guarantee that the filter error dynamics are mean-square exponentially stable with a strictly dissipative performance, and the corresponding solvability condition for the fuzzy filter is also presented based on a linearization procedure. Finally, an example is provided to illustrate the effectiveness of the proposed dissipativity-based filtering technique.
A simple approach for switched control design with control bumps limitation. By its nature, control of switched systems generally leads to significant discontinuities in the control signal at switching times. Hence, this class of dynamic systems needs additional care as far as implementation constraints, such as control amplitude limitation, are concerned. This paper aims at providing numerically tractable conditions to be incorporated in the control design procedure in order to reduce control bumps. The switching strategy and continuous control laws are jointly determined, and an H∞ guaranteed cost is minimized. Due to its theoretical and practical importance, special attention is given to the dynamic output feedback control design problem. The results are illustrated by means of examples borrowed from the literature, which are also used for comparisons that put in evidence the efficiency of the proposed strategy.
Stability and Stabilization of Switched Linear Systems With Mode-Dependent Average Dwell Time. In this paper, the stability and stabilization problems for a class of switched linear systems with mode-dependent average dwell time (MDADT) are investigated in both continuous-time and discrete-time contexts. The proposed switching law is more applicable in practice than average dwell time (ADT) switching, since each mode in the underlying system has its own ADT. The stability criteria for switched systems with MDADT in a nonlinear setting are first derived, from which the conditions for stability and stabilization of linear systems are also presented. A numerical example is given to show the validity and potential of the developed techniques.
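The mode-dependent average dwell time constraint can be stated in the following standard form, written here from the common definition in the switched-systems literature rather than copied from the paper: each mode carries its own chatter bound and dwell-time parameter.

```latex
% Mode-dependent average dwell time (MDADT): for every mode p and any T >= t >= 0,
\[
  N_{\sigma p}(T, t) \;\le\; N_{0p} + \frac{T_p(T, t)}{\tau_{a p}},
\]
where \(N_{\sigma p}(T,t)\) is the number of activations of mode \(p\) on \([t, T]\),
\(T_p(T,t)\) is the total running time of mode \(p\) on \([t, T]\),
\(N_{0p}\) is the mode-dependent chatter bound, and \(\tau_{ap}\) the MDADT of mode \(p\).
```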
A survey on industrial applications of fuzzy control Fuzzy control has long been applied in industry, with several important theoretical results and successful applications. Originally introduced as a model-free control design approach, model-based fuzzy control has gained widespread significance in the past decade. This paper presents a survey of recent developments in the analysis and design of fuzzy control systems, focused on industrial applications reported after 2000.
A single network adaptive critic (SNAC) architecture for optimal control synthesis for a class of nonlinear systems. Even though dynamic programming offers an optimal control solution in a state feedback form, the method is overwhelmed by computational and storage requirements. Approximate dynamic programming implemented with an Adaptive Critic (AC) neural network structure has evolved as a powerful alternative technique that obviates the need for excessive computations and storage requirements in solving optimal control problems. In this paper, an improvement to the AC architecture, called the "Single Network Adaptive Critic (SNAC)" is presented. This approach is applicable to a wide class of nonlinear systems where the optimal control (stationary) equation can be explicitly expressed in terms of the state and costate variables. The selection of this terminology is guided by the fact that it eliminates the use of one neural network (namely the action network) that is part of a typical dual network AC setup. As a consequence, the SNAC architecture offers three potential advantages: a simpler architecture, lesser computational load and elimination of the approximation error associated with the eliminated network. In order to demonstrate these benefits and the control synthesis technique using SNAC, two problems have been solved with the AC and SNAC approaches and their computational performances are compared. One of these problems is a real-life Micro-Electro-Mechanical-system (MEMS) problem, which demonstrates that the SNAC technique is applicable to complex engineering systems.
Semantic Image Synthesis With Spatially-Adaptive Normalization We propose spatially-adaptive normalization, a simple but effective layer for synthesizing photorealistic images given an input semantic layout. Previous methods directly feed the semantic layout as input to the deep network, which is then processed through stacks of convolution, normalization, and nonlinearity layers. We show that this is suboptimal as the normalization layers tend to "wash away" semantic information. To address the issue, we propose using the input layout for modulating the activations in normalization layers through a spatially-adaptive, learned transformation. Experiments on several challenging datasets demonstrate the advantage of the proposed method over existing approaches, regarding both visual fidelity and alignment with input layouts. Finally, our model allows user control over both semantics and style when synthesizing images.
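A minimal PyTorch sketch of a spatially-adaptive normalization layer is shown below: the activations are normalized with parameter-free batch normalization and then modulated element-wise by a scale and bias predicted from the (resized) semantic layout. The layer sizes and hidden width are illustrative choices, not the configuration used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    """Spatially-adaptive normalization: gamma/beta maps are predicted from the layout."""
    def __init__(self, num_features: int, label_channels: int, hidden: int = 64):
        super().__init__()
        self.norm = nn.BatchNorm2d(num_features, affine=False)   # parameter-free normalization
        self.shared = nn.Sequential(nn.Conv2d(label_channels, hidden, 3, padding=1), nn.ReLU())
        self.to_gamma = nn.Conv2d(hidden, num_features, 3, padding=1)
        self.to_beta = nn.Conv2d(hidden, num_features, 3, padding=1)

    def forward(self, x: torch.Tensor, segmap: torch.Tensor) -> torch.Tensor:
        # Resize the semantic layout to the spatial size of the activations
        segmap = F.interpolate(segmap, size=x.shape[2:], mode="nearest")
        actv = self.shared(segmap)
        gamma, beta = self.to_gamma(actv), self.to_beta(actv)
        return self.norm(x) * (1 + gamma) + beta                  # spatially varying modulation

# Toy usage: 4 feature channels, a 3-channel layout, 16x16 activations
layer = SPADE(num_features=4, label_channels=3)
x = torch.randn(2, 4, 16, 16)
segmap = torch.randn(2, 3, 32, 32)
print(layer(x, segmap).shape)   # torch.Size([2, 4, 16, 16])
```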
Social navigation support in a course recommendation system The volume of course-related information available to students is rapidly increasing. This abundance of information has created the need to help students find, organize, and use resources that match their individual goals, interests, and current knowledge. Our system, CourseAgent, presented in this paper, is an adaptive community-based hypermedia system, which provides social navigation course recommendations based on students’ assessment of course relevance to their career goals. CourseAgent obtains students’ explicit feedback as part of their natural interactivity with the system. This work presents our approach to eliciting explicit student feedback and then evaluates this approach.
Adaptive Learning in Tracking Control Based on the Dual Critic Network Design. In this paper, we present a new adaptive dynamic programming approach by integrating a reference network that provides an internal goal representation to help the system's learning and optimization. Specifically, we build the reference network on top of the critic network to form a dual critic network design that contains a detailed internal goal representation to help approximate the value funct...
Adaptive dynamic surface control of a class of nonlinear systems with unknown direction control gains and input saturation. In this paper, adaptive neural network based dynamic surface control (DSC) is developed for a class of nonlinear strict-feedback systems with unknown direction control gains and input saturation. A Gaussian error function based saturation model is employed so that the backstepping technique can be used in the control design. The explosion of complexity in traditional backstepping design is avoided by utilizing DSC. Based on backstepping combined with DSC, adaptive radial basis function neural network control is developed to guarantee that all the signals in the closed-loop system are globally bounded and that the tracking error converges to a small neighborhood of the origin by appropriately choosing the design parameters. Simulation results demonstrate the effectiveness of the proposed approach, and good performance is guaranteed even when both saturation constraints and unknown control directions are present.
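A Gaussian-error-function based smooth saturation model of the kind mentioned above is commonly written as follows; this particular parameterization is offered as an illustrative assumption rather than the exact model used in the paper.

```latex
% Smooth approximation of the saturation nonlinearity with limit u_M:
\[
  u = g(v) \;=\; u_M \,\operatorname{erf}\!\left(\frac{\sqrt{\pi}\, v}{2\, u_M}\right),
\]
which is smooth, satisfies \(g'(0) = 1\), and approaches \(\pm u_M\) as \(v \to \pm\infty\),
so backstepping-based designs can differentiate it where the hard saturation
\(\operatorname{sat}(v)=\operatorname{sign}(v)\min\{|v|,u_M\}\) cannot.
```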
Collective feature selection to identify crucial epistatic variants. In this study, we show through simulation studies that selecting variables using a collective feature selection approach helps select true positive epistatic variables more frequently than applying any single feature selection method. We demonstrate the effectiveness of collective feature selection along with a comparison of many methods in our simulation analysis. We also applied our method to identify non-linear networks associated with obesity.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores: 1.200862, 0.200862, 0.200862, 0.100857, 0.066954, 0.008537, 0.000144, 0.000001, 0, 0, 0, 0, 0, 0
Towards Detection of Morphed Face Images in Electronic Travel Documents The vulnerability of face recognition systems to attacks based on morphed biometric samples has been established in the recent past. Such attacks pose a severe security threat to a biometric recognition system in particular within the widely deployed border control applications. However, so far a reliable detection of morphed images has remained an unsolved research challenge. In this work, automated morph detection algorithms based on general purpose pattern recognition algorithms are benchmarked for two scenarios relevant in the context of fraud detection for electronic travel documents, i.e. single image (no-reference) and image pair (differential) morph detection. In the latter scenario a trusted live capture from an authentication attempt serves as additional source of information and, hence, the difference between features obtained from this face image and a potential morph can be estimated. A dataset of 2,206 ICAO compliant bona fide face images of the FRGCv2 face database is used to automatically generate 4,808 morphs. It is shown that in a differential scenario morph detectors which utilize a score level-based fusion of detection scores obtained from a single image and differences between image pairs generally outperform no-reference morph detectors with regard to the employed algorithms and used parameters. On average a relative improvement of more than 25% in terms of detection equal error rate is achieved.
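The differential scenario can be illustrated with a short, hedged sketch: a feature vector extracted from the suspected document image and one from the trusted live capture are compared, and their difference is fed to a binary classifier. The feature extractor, the synthetic data, and the linear SVM below are placeholders for whichever general-purpose descriptors and classifiers are actually benchmarked.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
DIM = 64  # dimensionality of whatever face descriptor is used (placeholder)

def extract_features(image_batch):
    """Placeholder for a real face descriptor (e.g., texture or deep features)."""
    return image_batch.reshape(len(image_batch), -1)[:, :DIM]

# Synthetic stand-ins for (document image, live capture) pairs; 1 = morph, 0 = bona fide
n = 200
doc = rng.normal(size=(n, 8, 8))
live = rng.normal(size=(n, 8, 8))
labels = rng.integers(0, 2, size=n)
doc[labels == 1] += 0.5          # pretend morphs shift the descriptor statistics

# Differential morph detection: classify the feature *difference* of each pair
diff = extract_features(doc) - extract_features(live)
clf = LinearSVC().fit(diff, labels)
print("training accuracy on synthetic data:", clf.score(diff, labels))
```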
Novel image fusion scheme based on dependency measure for robust multispectral palmprint recognition Multispectral palmprint is considered an effective biometric modality for accurately recognizing a subject with high confidence. This paper presents a novel multispectral palmprint recognition system consisting of three functional blocks, namely: (1) a novel technique to extract the Region of Interest (ROI) from hand images acquired with a contactless sensor, (2) a novel image fusion scheme based on a dependency measure, and (3) a new scheme for feature extraction and classification. The proposed ROI extraction scheme is based on locating the valley regions between fingers irrespective of the hand pose. We then propose a novel image fusion scheme that combines information from different spectral bands using wavelet-transform sub-bands. We perform a statistical dependency analysis between these sub-bands to carry out fusion either by selection or by weighted fusion. To effectively process the information in the fused image, we extract features using the Log-Gabor transform, reduce their dimension using Kernel Discriminant Analysis (KDA), and perform classification with a Sparse Representation Classifier (SRC). Extensive experiments carried out on the CASIA multispectral palmprint database show the strong superiority of our proposed fusion scheme when benchmarked against contemporary state-of-the-art image fusion schemes.
Robust Morph-Detection at Automated Border Control Gate Using Deep Decomposed 3D Shape & Diffuse Reflectance Face recognition is widely employed in Automated Border Control (ABC) gates, which verify the face image on the passport or electronic Machine Readable Travel Document (eMRTD) against the captured image to confirm the identity of the passport holder. In this paper, we present a robust morph detection algorithm that is based on differential morph detection. The proposed method decomposes the bona fide image captured from the ABC gate and the digital face image extracted from the eMRTD into a diffuse reconstructed image and a quantized normal map. The extracted features are further used to learn a linear classifier (SVM) to detect a morphing attack based on the assessment of differences between the bona fide image from the ABC gate and the digital face image extracted from the passport. Owing to the availability of multiple cameras within an ABC gate, we extend the proposed method to fuse the classification scores to generate the final decision on morph-attack detection. To validate our proposed algorithm, we create a morph attack database with 588 images overall, where bona fide images are captured in an indoor lighting environment with a Canon DSLR camera with one sample per subject, together with corresponding images from ABC gates. We benchmark our proposed method against the existing state of the art and can state that the new approach significantly outperforms previous approaches in the ABC gate scenario.
MIPGAN—Generating Strong and High Quality Morphing Attacks Using Identity Prior Driven GAN Face morphing attacks target to circumvent Face Recognition Systems (FRS) by employing face images derived from multiple data subjects (e.g., accomplices and malicious actors). Morphed images can be verified against contributing data subjects with a reasonable success rate, given they have a high degree of facial resemblance. The success of morphing attacks is directly dependent on the quality of ...
Footprints: history-rich tools for information foraging Inspired by Hill and Hollan's original work [7], we have been developing a theory of interaction history and building tools to apply this theory to navigation in a complex information space. We have built a series of tools - map, paths, annotations and signposts - based on a physical-world navigation metaphor. These tools have been in use for over a year. Our user study involved a controlled browse task and showed that users were able to get the same amount of work done with significantly less effort.
Wireless sensor networks: a survey This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
Constrained Kalman filtering for indoor localization of transport vehicles using floor-installed HF RFID transponders Localization of transport vehicles is an important issue for many intralogistics applications. The paper presents an inexpensive solution for indoor localization of vehicles. Global localization is realized by detection of RFID transponders, which are integrated in the floor. The paper presents a novel algorithm for fusing RFID readings with odometry using Constraint Kalman filtering. The paper presents experimental results with a Mecanum based omnidirectional vehicle on a NaviFloor® installation, which includes passive HF RFID transponders. The experiments show that the proposed Constraint Kalman filter provides a similar localization accuracy compared to a Particle filter but with much lower computational expense.
General Inner Approximation Algorithm For Non-Convex Mathematical Programs Inner approximation algorithms have had two major roles in the mathematical programming literature. Their first role was in the construction of algorithms for the decomposition of large-scale mathe...
Protecting privacy using the decentralized label model Stronger protection is needed for the confidentiality and integrity of data, because programs containing untrusted code are the rule rather than the exception. Information flow control allows the enforcement of end-to-end security policies, but has been difficult to put into practice. This article describes the decentralized label model, a new label model for control of information flow in systems with mutual distrust and decentralized authority. The model improves on existing multilevel security models by allowing users to declassify information in a decentralized way, and by improving support for fine-grained data sharing. It supports static program analysis of information flow, so that programs can be certified to permit only acceptable information flows, while largely avoiding the overhead of run-time checking. The article introduces the language Jif, an extension to Java that provides static checking of information flow using the decentralized label model.
Hitting the right paraphrases in good time We present a random-walk-based approach to learning paraphrases from bilingual parallel corpora. The corpora are represented as a graph in which a node corresponds to a phrase, and an edge exists between two nodes if their corresponding phrases are aligned in a phrase table. We sample random walks to compute the average number of steps it takes to reach a ranking of paraphrases with better ones being "closer" to a phrase of interest. This approach allows "feature" nodes that represent domain knowledge to be built into the graph, and incorporates truncation techniques to prevent the graph from growing too large for efficiency. Current approaches, by contrast, implicitly presuppose the graph to be bipartite, are limited to finding paraphrases that are of length two away from a phrase, and do not generally permit easy incorporation of domain knowledge. Manual evaluation of generated output shows that our approach outperforms the state-of-the-art system of Callison-Burch (2008).
High delivery rate position-based routing algorithms for 3D ad hoc networks Position-based routing algorithms use the geographic position of the nodes in a network to make the forwarding decisions. Recent research in this field primarily addresses such routing algorithms in two dimensional (2D) space. However, in real applications, nodes may be distributed in three dimensional (3D) environments. In this paper, we propose several randomized position-based routing algorithms and their combination with restricted directional flooding-based algorithms for routing in 3D environments. The first group of algorithms AB3D are extensions of previous randomized routing algorithms from 2D space to 3D space. The second group ABLAR chooses m neighbors according to a space-partition heuristic and forwards the message to all these nodes. The third group T-ABLAR-T uses progress-based routing until a local minimum is reached. The algorithm then switches to ABLAR for one step after which the algorithm switches back to the progress-based algorithm again. The fourth group AB3D-ABLAR uses an algorithm from the AB3D group until a threshold is passed in terms of number of hops. The algorithm then switches to an ABLAR algorithm. The algorithms are evaluated and compared with current routing algorithms. The simulation results on unit disk graphs (UDG) show a significant improvement in delivery rate (up to 99%) and a large reduction of the traffic.
Global Adaptive Dynamic Programming for Continuous-Time Nonlinear Systems This paper presents a novel method of global adaptive dynamic programming (ADP) for the adaptive optimal control of nonlinear polynomial systems. The strategy consists of relaxing the problem of solving the Hamilton-Jacobi-Bellman (HJB) equation to an optimization problem, which is solved via a new policy iteration method. The proposed method is distinguished from previously known nonlinear ADP methods in that the neural network approximation is avoided, giving rise to significant computational improvement. Instead of being semiglobally or locally stabilizing, the resultant control policy is globally stabilizing for a general class of nonlinear polynomial systems. Furthermore, in the absence of a priori knowledge of the system dynamics, an online learning method is devised to implement the proposed policy iteration technique by generalizing the current ADP theory. Finally, three numerical examples are provided to validate the effectiveness of the proposed method.
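The policy-iteration backbone that such ADP methods build on can be summarized by the following standard equations for a system \(\dot{x}=f(x)+g(x)u\) with cost \(\int_0^\infty (q(x)+u^\top R u)\,dt\); this is the generic continuous-time formulation, not the relaxed optimization-based variant proposed in the paper.

```latex
% Policy evaluation: given a stabilizing policy u_i, solve for V_i
\[
  \nabla V_i(x)^{\!\top}\bigl(f(x) + g(x)\,u_i(x)\bigr) + q(x) + u_i(x)^{\!\top} R\, u_i(x) = 0,
  \qquad V_i(0)=0 .
\]
% Policy improvement: update the control policy
\[
  u_{i+1}(x) = -\tfrac{1}{2}\, R^{-1} g(x)^{\!\top} \nabla V_i(x).
\]
```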
Quaternion polar harmonic Fourier moments for color images. • Quaternion polar harmonic Fourier moments (QPHFM) are proposed. • Complex Chebyshev-Fourier moments (CHFM) are extended to quaternion QCHFM. • Comparison experiments between QPHFM and QZM, QPZM, QOFMM, QCHFM and QRHFM are conducted. • QPHFM performs superbly in image reconstruction and invariant object recognition. • The importance of the phase information of QPHFM in image reconstruction is discussed.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores: 1.2, 0.2, 0.2, 0.04, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Multimodal biometric recognition using human ear and palmprint. Combining multiple human trait features is a proven and effective strategy for biometric-based personal identification. In this study, the authors investigate the fusion of two biometric modalities, i.e. ear and palmprint, at feature-level. Ear and palmprint patterns are characterised by a rich and stable structure, which provides a large amount of information to discriminate individuals. Local te...
On ear-based human identification in the mid-wave infrared spectrum In this paper the problem of human ear recognition in the Mid-wave infrared (MWIR) spectrum is studied in order to illustrate the advantages and limitations of the ear-based biometrics that can operate in day and night time environments. The main contributions of this work are two-fold: First, a dual-band database is assembled that consists of visible (baseline) and mid-wave IR left and right profile face images. Profile face images were collected using a high definition mid-wave IR camera that is capable of acquiring thermal imprints of human skin. Second, a fully automated, thermal imaging based, ear recognition system is proposed that is designed and developed to perform real-time human identification. The proposed system tests several feature extraction methods, namely: (i) intensity-based such as independent component analysis (ICA), principal component analysis (PCA), and linear discriminant analysis (LDA); (ii) shape-based such as scale invariant feature transform (SIFT); as well as (iii) texture-based such as local binary patterns (LBP), and local ternary patterns (LTP). Experimental results suggest that LTP (followed by LBP) yields the best performance (Rank1 = 80.68%) on manually segmented ears and (Rank1 = 68.18%) on ear images that are automatically detected and segmented. By fusing the matching scores obtained by LBP and LTP, the identification performance increases by about 5%. Although these results are promising, the outcomes of our study suggest that the design and development of automated ear-based recognition systems that can operate efficiently in the lower part of the passive IR spectrum are very challenging tasks.
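As a pointer to how texture-based descriptors of the kind mentioned above are typically computed, the hedged sketch below extracts a uniform LBP histogram from a synthetic grayscale ear crop and compares two crops with a chi-square distance. Parameters such as the radius and number of neighbors are illustrative, not the settings used in the study.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(img, P=8, R=1):
    """Uniform LBP histogram of a grayscale image."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)  # P+2 uniform bins
    return hist

def chi_square(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

rng = np.random.default_rng(0)
ear_a = (rng.random((64, 48)) * 255).astype(np.uint8)   # stand-ins for segmented ear crops
ear_b = (rng.random((64, 48)) * 255).astype(np.uint8)

d = chi_square(lbp_histogram(ear_a), lbp_histogram(ear_b))
print("chi-square distance between LBP histograms:", round(float(d), 4))
```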
A novel geometric feature extraction method for ear recognition. We propose a novel geometric feature extraction approach for ear images. Both the maximum and the minimum ear height lines are used to characterize the contour of the outer helix. Our method achieves recognition rates of 98.33% on the USTB subset1 and 99.6% on the IIT Delhi database. Our geometric method can be combined with appearance-based approaches to improve recognition performance. The discriminative ability of geometric features is well supported by empirical studies in ear recognition. Recently, a number of methods have been suggested for geometric feature extraction from ear images. However, these methods usually have relatively high feature dimension or are sensitive to rotation and scale variations. In this paper, we propose a novel geometric feature extraction method to address these issues. First, our studies show that the minimum Ear Height Line (EHL) is also helpful for characterizing the contour of the outer helix, and the combination of the maximum EHL and minimum EHL can achieve better recognition performance. Second, we further extract three ratio-based features which are robust to scale variation. Our method has a feature dimension of six, and is thus efficient in matching for real-time ear recognition. Experimental results on two popular databases, i.e. USTB subset1 and IIT Delhi, show that the proposed approach can achieve promising recognition rates of 98.33% and 99.60%, respectively.
Sparse Representation Based Fisher Discrimination Dictionary Learning for Image Classification The employed dictionary plays an important role in sparse representation or sparse coding based image reconstruction and classification, while learning dictionaries from the training data has led to state-of-the-art results in image classification tasks. However, many dictionary learning models exploit only the discriminative information in either the representation coefficients or the representation residual, which limits their performance. In this paper we present a novel dictionary learning method based on the Fisher discrimination criterion. A structured dictionary, whose atoms have correspondences to the subject class labels, is learned, with which not only the representation residual can be used to distinguish different classes, but also the representation coefficients have small within-class scatter and big between-class scatter. The classification scheme associated with the proposed Fisher discrimination dictionary learning (FDDL) model is consequently presented by exploiting the discriminative information in both the representation residual and the representation coefficients. The proposed FDDL model is extensively evaluated on various image datasets, and it shows superior performance to many state-of-the-art dictionary learning methods in a variety of classification tasks.
Inter-class sparsity based discriminative least square regression Least square regression is a very popular supervised classification method. However, two main issues greatly limit its performance. The first one is that it only focuses on fitting the input features to the corresponding output labels while ignoring the correlations among samples. The second one is that the used label matrix, i.e., zero–one label matrix is inappropriate for classification. To solve these problems and improve the performance, this paper presents a novel method, i.e., inter-class sparsity based discriminative least square regression (ICS_DLSR), for multi-class classification. Different from other methods, the proposed method pursues that the transformed samples have a common sparsity structure in each class. For this goal, an inter-class sparsity constraint is introduced to the least square regression model such that the margins of samples from the same class can be greatly reduced while those of samples from different classes can be enlarged. In addition, an error term with row-sparsity constraint is introduced to relax the strict zero–one label matrix, which allows the method to be more flexible in learning the discriminative transformation matrix. These factors encourage the method to learn a more compact and discriminative transformation for regression and thus has the potential to perform better than other methods. Extensive experimental results show that the proposed method achieves the best performance in comparison with other methods for multi-class classification.
Multimodal biometric system for ECG, ear and iris recognition based on local descriptors Combining multiple pieces of information extracted from different biometric modalities in a multimodal biometric recognition system aims to overcome the drawbacks encountered in unimodal biometric systems. Fusion of many biometric traits has been proposed, such as face, fingerprint, iris, etc. Recently, electrocardiograms (ECG) have been used as a new biometric technology in unimodal and multimodal biometric recognition systems. ECG inherently carries the liveness characteristic of a person, making it hard to spoof compared to other biometric techniques. Ear biometrics provide a rich and stable source of information over an acceptable period of human life. Iris biometrics have been combined with different biometric modalities such as fingerprint, face and palm print, because of their higher accuracy and reliability. In this paper, a new multimodal biometric system based on ECG, ear and iris biometrics fused at feature level is proposed. Preprocessing techniques, including normalization and segmentation, are applied to the ECG, ear and iris biometrics. Then, local texture descriptors, namely 1D-LBP (One-Dimensional Local Binary Patterns), Shifted-1D-LBP and 1D-MR-LBP (Multi-Resolution), are used to extract the important features from the ECG signal and to convert the ear and iris images into 1D signals. KNN and RBF are used for matching, to classify an unknown user as genuine or impostor. The developed system is validated using the benchmark ID-ECG and USTB1, USTB2 and AMI ear and CASIA v1 iris databases. The experimental results demonstrate that the proposed approach outperforms unimodal biometric systems. A Correct Recognition Rate (CRR) of 100% is achieved with an Equal Error Rate (EER) of 0.5%.
A human ear recognition method using nonlinear curvelet feature subspace The ear is a relatively new biometric compared with others. Many methods have been used for ear recognition to improve the performance of ear recognition systems. In continuation of these efforts, we propose a new ear recognition method based on the curvelet transform. Features of the ear are computed by applying the Fast Discrete Curvelet Transform via the wrapping technique. The feature vector of each image is composed of an approximate curvelet coefficient and the second-coarsest-level curvelet coefficients at eight different angles. k-NN (k-nearest neighbour) is utilized as the classifier. The proposed method is evaluated on two ear databases from IIT Delhi. Recognition rates achieved using the proposed method on these publicly available ear databases are up to 97.77%, which shows encouraging performance.
A Skin-Color and Template Based Technique for Automatic Ear Detection This paper proposes an efficient skin-color and template based technique for automatic ear detection in a side face image. The technique first separates skin regions from non skin regions and then searches for the ear within skin regions. Ear detection process involves three major steps. First, Skin Segmentation to eliminate all non-skin pixels from the image, second Ear Localization to perform ear detection using template matching approach, and third Ear Verification to validate the ear detection using the Zernike moments based shape descriptor. To handle the detection of ears of various shapes and sizes, an ear template is created considering the ears of various shapes (triangular, round, oval and rectangular) and resized automatically to a size suitable for the detection. Proposed technique is tested on the IIT Kanpur ear database consisting of 150 side face images and gives 94% accuracy.
Factual and Counterfactual Explanations for Black Box Decision Making. The rise of sophisticated machine learning models has brought accurate but obscure decision systems, which hide their logic, thus undermining transparency, trust, and the adoption of artificial intelligence (AI) in socially sensitive and safety-critical contexts. We introduce a local rule-based explanation method, providing faithful explanations of the decision made by a black box classifier on a ...
A vector-perturbation technique for near-capacity multiantenna multiuser communication-part I: channel inversion and regularization Recent theoretical results describing the sum capacity when using multiple antennas to communicate with multiple users in a known rich scattering environment have not yet been followed with practical transmission schemes that achieve this capacity. We introduce a simple encoding algorithm that achieves near-capacity at sum rates of tens of bits/channel use. The algorithm is a variation on channel inversion that regularizes the inverse and uses a "sphere encoder" to perturb the data to reduce the power of the transmitted signal. This work is comprised of two parts. In this first part, we show that while the sum capacity grows linearly with the minimum of the number of antennas and users, the sum rate of channel inversion does not. This poor performance is due to the large spread in the singular values of the channel matrix. We introduce regularization to improve the condition of the inverse and maximize the signal-to-interference-plus-noise ratio at the receivers. Regularization enables linear growth and works especially well at low signal-to-noise ratios (SNRs), but as we show in the second part, an additional step is needed to achieve near-capacity performance at all SNRs.
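A compact numpy sketch of the regularized channel inversion step is shown below: the transmit vector is formed as H^H (H H^H + alpha I)^{-1} s and scaled to satisfy the power constraint, with alpha = K/rho as the regularization discussed in the paper. The dimensions and SNR value are arbitrary, and the sphere-encoder perturbation from the second part of the work is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
K, M = 4, 4                  # users, transmit antennas
rho = 10.0                   # SNR (linear scale)

H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)  # Rayleigh channel
s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=K) / np.sqrt(2)    # QPSK user symbols

# Regularized channel inversion: x proportional to H^H (H H^H + alpha I)^{-1} s, alpha = K / rho
alpha = K / rho
x_unnorm = H.conj().T @ np.linalg.solve(H @ H.conj().T + alpha * np.eye(K), s)
x = x_unnorm / np.linalg.norm(x_unnorm)       # normalize to unit transmit power

received = H @ x              # noiseless received signal (up to the common scaling factor)
print("per-user received values:", np.round(received, 3))
```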
A Nonconservative LMI Condition for Stability of Switched Systems With Guaranteed Dwell Time. Ensuring stability of switched linear systems with a guaranteed dwell time is an important problem in control systems. Several methods have been proposed in the literature to address this problem, but unfortunately they provide sufficient conditions only. This technical note proposes the use of homogeneous polynomial Lyapunov functions in the non-restrictive case where all the subsystems are Hurwitz, showing that a sufficient condition can be provided in terms of an LMI feasibility test by exploiting a key representation of polynomials. Several properties are proved for this condition, in particular that it is also necessary for a sufficiently large degree of these functions. As a result, the proposed condition provides a sequence of upper bounds of the minimum dwell time that approximate it arbitrarily well. Some examples illustrate the proposed approach.
Stable fuzzy logic control of a general class of chaotic systems This paper proposes a new approach to the stable design of fuzzy logic control systems that deal with a general class of chaotic processes. The stable design is carried out on the basis of a stability analysis theorem, which employs Lyapunov's direct method and the separate stability analysis of each rule in the fuzzy logic controller (FLC). The stability analysis theorem offers sufficient conditions for the stability of a general class of chaotic processes controlled by Takagi---Sugeno---Kang FLCs. The approach suggested in this paper is advantageous because inserting a new rule requires the fulfillment of only one of the conditions of the stability analysis theorem. Two case studies concerning the fuzzy logic control of representative chaotic systems that belong to the general class of chaotic systems are included in order to illustrate our stable design approach. A set of simulation results is given to validate the theoretical results.
A blind medical image watermarking: DWT-SVD based robust and secure approach for telemedicine applications. In this paper, a blind image watermarking scheme based on the discrete wavelet transform (DWT) and singular value decomposition (SVD) is proposed. In this scheme, DWT is applied to the ROI (region of interest) of the medical image to obtain the frequency subbands of its wavelet decomposition. On the low-frequency subband LL of the ROI, block-SVD is applied to obtain the singular matrices. A pair of elements with similar values is identified from the left singular matrix of these selected blocks. The values of these pairs are modified using a certain threshold to embed a bit of watermark content. An appropriate threshold is chosen to achieve imperceptibility of the medical image and robustness of the watermark contents, respectively. For authentication and identification of the original medical image, one image watermark (logo) and one text watermark have been used. The watermark image provides authentication, whereas the text data represents the electronic patient record (EPR) for identification. At the receiving end, blind recovery of both watermark contents is performed by a comparison scheme similar to that used during embedding. The proposed algorithm is applied to various groups of medical images such as X-ray, CT scan and mammography. This scheme offers better visibility of the watermarked image and recovery of the watermark content due to the DWT-SVD combination. Moreover, the use of a Hamming error correcting code (ECC) on the EPR text bits reduces the BER and thus provides better recovery of the EPR. The performance of the proposed algorithm with EPR data coded by the Hamming code is compared with the BCH error correcting code, and it is found that the latter performs better. A result analysis shows that the imperceptibility of the watermarked image is good, as the PSNR is above 43 dB and the WPSNR is above 52 dB for all sets of images. In addition, the robustness of the scheme is better than that of an existing scheme for a similar set of medical images in terms of normalized correlation coefficient (NCC) and bit-error-rate (BER). An analysis is also carried out to verify the performance of the proposed scheme for different sizes of watermark content (image and EPR data). It is observed from the analysis that the proposed scheme is also appropriate for watermarking of color images. Using the proposed scheme, watermark contents are extracted successfully under various attacks such as JPEG compression, filtering, Gaussian noise, salt-and-pepper noise, cropping, and rotation. A performance comparison with existing schemes shows that the proposed scheme has better robustness against different types of attacks. Moreover, the proposed scheme is also robust under the set of benchmark attacks known as checkmark attacks.
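A heavily simplified sketch of the DWT-SVD embedding idea follows: the image's LL subband is split into blocks and a watermark bit is hidden in each block's singular value decomposition. For simplicity the sketch quantizes the largest singular value of each block (a quantization-index-modulation variant), whereas the paper modifies a pair of entries of the left singular matrix; the block size, quantization step, ROI selection, EPR text, and error-correction coding are all omitted or replaced by illustrative placeholders.

```python
import numpy as np
import pywt

STEP = 8.0  # quantization step for the largest singular value (illustrative)

def embed_bit(block, bit):
    """Embed one bit by forcing the parity of the quantization index of S[0]."""
    U, S, Vt = np.linalg.svd(block, full_matrices=False)
    q = int(np.floor(S[0] / STEP))
    if q % 2 != bit:
        q += 1
    S[0] = (q + 0.5) * STEP            # move S[0] to the centre of the chosen cell
    return U @ np.diag(S) @ Vt

def extract_bit(block):
    S = np.linalg.svd(block, compute_uv=False)
    return int(np.floor(S[0] / STEP)) % 2

rng = np.random.default_rng(0)
image = rng.random((64, 64)) * 255.0
LL, (LH, HL, HH) = pywt.dwt2(image, "haar")           # one-level 2-D DWT; LL is 32x32

bits = [1, 0, 1, 1]                                   # toy watermark payload
bs = 8                                                # block size inside the LL subband
for i, bit in enumerate(bits):
    r = i * bs
    LL[r:r + bs, :bs] = embed_bit(LL[r:r + bs, :bs], bit)

watermarked = pywt.idwt2((LL, (LH, HL, HH)), "haar")  # back to the pixel domain
LL2, _ = pywt.dwt2(watermarked, "haar")               # blind extraction: re-decompose
recovered = [extract_bit(LL2[i * bs:(i + 1) * bs, :bs]) for i in range(len(bits))]
print("embedded:", bits, "recovered:", recovered)
```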
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
Scores: 1.11, 0.11, 0.11, 0.1, 0.1, 0.1, 0.04375, 0.001667, 0, 0, 0, 0, 0, 0
Fuzzy Logic Based Reliable and Real-Time Routing Protocol for Mobile Ad hoc Networks. A MANET (mobile ad hoc network) consists of a set of wireless mobile nodes that communicate with one another without any central control or infrastructure, and it can be deployed quickly in an operational environment. One of the most significant issues in MANETs is finding a secure, safe and short route over which data can be transmitted. Although several routing protocols have been introduced for such networks, the majority of them consider only the shortest path with the fewest number of hops. The hop-count criterion is simple to implement and reliable in dynamic environments; however, it does not take the queuing and connection delays at intermediate nodes into consideration when selecting a route. In this paper, a fuzzy logic-based reliable routing protocol (FRRP) is proposed for MANETs, which selects stable routes using fuzzy logic and is able to improve system efficiency. The score allocated to a route is based on four criteria: available bandwidth, remaining battery energy, the number of hops, and the degree of node mobility. Simulation results obtained with OPNET simulator version 10.5 indicate that the proposed protocol, in comparison with ad hoc on-demand distance vector (AODV) and the fuzzy-based on-demand routing protocol (FBORP), improves packet delivery rate, average end-to-end delay and throughput.
High delivery rate position-based routing algorithms for 3D ad hoc networks Position-based routing algorithms use the geographic position of the nodes in a network to make the forwarding decisions. Recent research in this field primarily addresses such routing algorithms in two dimensional (2D) space. However, in real applications, nodes may be distributed in three dimensional (3D) environments. In this paper, we propose several randomized position-based routing algorithms and their combination with restricted directional flooding-based algorithms for routing in 3D environments. The first group of algorithms AB3D are extensions of previous randomized routing algorithms from 2D space to 3D space. The second group ABLAR chooses m neighbors according to a space-partition heuristic and forwards the message to all these nodes. The third group T-ABLAR-T uses progress-based routing until a local minimum is reached. The algorithm then switches to ABLAR for one step after which the algorithm switches back to the progress-based algorithm again. The fourth group AB3D-ABLAR uses an algorithm from the AB3D group until a threshold is passed in terms of number of hops. The algorithm then switches to an ABLAR algorithm. The algorithms are evaluated and compared with current routing algorithms. The simulation results on unit disk graphs (UDG) show a significant improvement in delivery rate (up to 99%) and a large reduction of the traffic.
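The progress-based forwarding step that these algorithms build on can be sketched in a few lines: each node forwards the packet to the neighbor closest to the destination in 3D, and forwarding stops when no neighbor is closer than the current node (a local minimum, where the randomized or flooding-based variants above would take over). The topology below is randomly generated and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, RADIO_RANGE = 60, 0.35
nodes = rng.random((N, 3))                     # random node positions in the unit cube
src, dst = 0, N - 1

def neighbors(i):
    d = np.linalg.norm(nodes - nodes[i], axis=1)
    return [j for j in range(N) if j != i and d[j] <= RADIO_RANGE]

# Greedy (progress-based) forwarding in 3D
path, current = [src], src
while current != dst:
    nbrs = neighbors(current)
    if not nbrs:
        print("no neighbors: routing failed at node", current)
        break
    best = min(nbrs, key=lambda j: np.linalg.norm(nodes[j] - nodes[dst]))
    if np.linalg.norm(nodes[best] - nodes[dst]) >= np.linalg.norm(nodes[current] - nodes[dst]):
        print("local minimum at node", current, "- a recovery strategy would take over here")
        break
    path.append(best)
    current = best

print("path:", path, "delivered:", current == dst)
```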
Three-Dimensional Position-Based Adaptive Real-Time Routing Protocol for wireless sensor networks Devices for wireless sensor networks (WSN) are power-limited, and thus routing protocols should be designed with this constraint in mind. WSNs are also used in three-dimensional (3D) scenarios such as the sea surface or terrain with varying elevation. This paper presents and evaluates the Three-Dimensional Position-Based Adaptive Real-Time Routing Protocol (3DPBARP) as a novel, real-time, position-based and energy-efficient routing protocol for WSNs. 3DPBARP is a lightweight protocol that reduces the number of nodes which receive the radio frequency (RF) signal using a novel parent forwarding region (PFR) algorithm. 3DPBARP, as a Geographical Routing Protocol (GRP), reduces the number of forwarding nodes and thus the traffic and packet collisions in the network. A series of performance evaluations through MATLAB and OMNeT++ simulations shows significant improvements in network performance parameters and total energy consumption over the 3D Position-Based Routing Protocol (3DPBRP) and the Directed Flooding Routing Protocol (DFRP).
Novel unequal clustering routing protocol considering energy balancing based on network partition & distance for mobile education. In Wireless Sensor Networks (WSN) for mobile education (such as mobile learning), in order to lower energy consumption, mitigate the energy hole problem and prolong the network life cycle, we propose a novel unequal clustering routing protocol considering energy balancing based on network partition & distance (UCNPD, i.e., Unequal Clustering based on Network Partition & Distance) for WSN in this paper. In the design of this protocol, we note that all network node data reaches the base station (BS) through the nodes near the BS, so the nodes in this area consume more energy. We therefore define a ring area centered on the BS and partition the network area based on the distance from each node to the BS. The nodes in this ring connect directly to the BS, while the other nodes follow an optimized clustering routing protocol which uses a timing mechanism to elect cluster heads, reducing the energy consumption of cluster reconstruction. Furthermore, we build unequal clusters by setting different competition radii, which helps balance the network energy consumption. For route selection, we consider the residual energy of cluster heads, their distances to the BS and the node degrees in order to reduce and balance energy consumption. Simulation results demonstrate that the protocol can efficiently slow down node death, prolong the network lifetime, and balance the energy dissipation of all nodes.
Adaptive Communication Protocols in Flying Ad Hoc Network. The flying ad hoc network (FANET) is a new paradigm of wireless communication that governs the autonomous movement of UAVs and supports UAV-to-UAV communication. A FANET can provide an effective real-time communication solution for the multiple UAV systems considering each flying UAV as a router. However, existing mobile ad hoc protocols cannot meet the needs of FANETs due to high-speed mobility a...
3D Transformative Routing for UAV Swarming Networks: A Skeleton-Guided, GPS-Free Approach A challenging issue for a three-dimensional (3D) unmanned aerial vehicle (UAV) network is addressed in this paper - how do we efficiently establish and maintain one or multiple routes among swarm regions (i.e., groups of UAVs), during the dynamic swarming process? Inspired by the human nervous system which can efficiently send brain signals to any tissue, we propose a 3D transformative routing (3D...
Footprints: history-rich tools for information foraging Inspired by Hill and Hollan's original work [7], we have been developing a theory of interaction history and building tools to apply this theory to navigation in a complex information space. We have built a series of tools - map, paths, annotations and signposts - based on a physical-world navigation metaphor. These tools have been in use for over a year. Our user study involved a controlled browse task and showed that users were able to get the same amount of work done with significantly less effort.
Very Deep Convolutional Networks for Large-Scale Image Recognition. In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
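The pretrained VGG models described above are distributed with common deep-learning frameworks; a minimal usage sketch with torchvision is given below. This is an editor-added example assuming a recent PyTorch/torchvision installation, not code from the paper; the dummy input stands in for a real preprocessed image.

```python
import torch
from torchvision import models, transforms

# Load VGG-16 with ImageNet-pretrained weights (recent torchvision API).
vgg16 = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg16.eval()

# Standard ImageNet preprocessing; for a real image, apply this to a PIL image.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# A random tensor stands in here for a preprocessed image batch.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = vgg16(x)            # (1, 1000) class scores
print(logits.argmax(dim=1))
```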
Chimp optimization algorithm. •A novel optimizer called Chimp Optimization Algorithm (ChOA) is proposed.•ChOA is inspired by individual intelligence and sexual motivation of chimps.•ChOA alleviates the problems of slow convergence rate and trapping in local optima.•The four main steps of Chimp hunting are implemented.
Space-time modeling of traffic flow. This paper discusses the application of space-time autoregressive integrated moving average (STARIMA) methodology for representing traffic flow patterns. Traffic flow data are in the form of spatial time series and are collected at specific locations at constant intervals of time. Important spatial characteristics of the space-time process are incorporated in the STARIMA model through the use of weighting matrices estimated on the basis of the distances among the various locations where data are collected. These matrices distinguish the space-time approach from the vector autoregressive moving average (VARMA) methodology and enable the model builders to control the number of the parameters that have to be estimated. The proposed models can be used for short-term forecasting of space-time stationary traffic-flow processes and for assessing the impact of traffic-flow changes on other parts of the network. The three-stage iterative space-time model building procedure is illustrated using 7.5min average traffic flow data for a set of 25 loop-detectors located at roads that direct to the centre of the city of Athens, Greece. Data for two months with different traffic-flow characteristics are modelled in order to determine the stability of the parameter estimation.
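To make the role of the weighting matrices concrete, a one-step spatially lagged autoregressive forecast can be sketched as follows. This is a simplified, editor-added illustration of the space-time idea only, not the paper's full three-stage STARIMA procedure; the weight matrix and coefficients are invented.

```python
import numpy as np

# z_prev holds the flow at each of the N detector locations at time t-1.
N = 4
rng = np.random.default_rng(0)
z_prev = rng.uniform(100, 300, size=N)        # vehicles per 7.5-min interval

# Row-normalized spatial weight matrix: entry (i, j) > 0 if detector j is a
# neighbor of detector i, e.g., with weights inversely related to distance.
W = np.array([[0.0, 0.6, 0.4, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.3, 0.3, 0.0, 0.4],
              [0.0, 0.0, 1.0, 0.0]])

phi_10, phi_11 = 0.7, 0.2    # temporal and spatio-temporal AR coefficients (illustrative)

# One-step forecast: own past value plus the spatially lagged past values.
z_hat = phi_10 * z_prev + phi_11 * (W @ z_prev)
print(z_hat)
```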
A Comparative Study of Distributed Learning Environments on Learning Outcomes Advances in information and communication technologies have fueled rapid growth in the popularity of technology-supported distributed learning (DL). Many educational institutions, both academic and corporate, have undertaken initiatives that leverage the myriad of available DL technologies. Despite their rapid growth in popularity, however, alternative technologies for DL are seldom systematically evaluated for learning efficacy. Considering the increasing range of information and communication technologies available for the development of DL environments, we believe it is paramount for studies to compare the relative learning outcomes of various technologies. In this research, we employed a quasi-experimental field study approach to investigate the relative learning effectiveness of two collaborative DL environments in the context of an executive development program. We also adopted a framework of hierarchical characteristics of group support system (GSS) technologies, outlined by DeSanctis and Gallupe (1987), as the basis for characterizing the two DL environments. One DL environment employed a simple e-mail and listserv capability while the other used a sophisticated GSS (herein referred to as Beta system). Interestingly, the learning outcome of the e-mail environment was higher than the learning outcome of the more sophisticated GSS environment. The post-hoc analysis of the electronic messages indicated that the students in groups using the e-mail system exchanged a higher percentage of messages related to the learning task. The Beta system users exchanged a higher level of technology sense-making messages. No significant difference was observed in the students' satisfaction with the learning process under the two DL environments.
A Framework of Joint Mobile Energy Replenishment and Data Gathering in Wireless Rechargeable Sensor Networks Recent years have witnessed the rapid development and proliferation of techniques on improving energy efficiency for wireless sensor networks. Although these techniques can relieve the energy constraint on wireless sensors to some extent, the lifetime of wireless sensor networks is still limited by sensor batteries. Recent studies have shown that energy rechargeable sensors have the potential to provide perpetual network operations by capturing renewable energy from external environments. However, the low output of energy capturing devices can only provide intermittent recharging opportunities to support low-rate data services due to spatial-temporal, geographical or environmental factors. To provide steady and high recharging rates and achieve energy efficient data gathering from sensors, in this paper, we propose to utilize mobility for joint energy replenishment and data gathering. In particular, a multi-functional mobile entity, called SenCar in this paper, is employed, which serves not only as a mobile data collector that roams over the field to gather data via short-range communication but also as an energy transporter that charges static sensors on its migration tour via wireless energy transmissions. Taking advantage of SenCar's controlled mobility, we focus on the joint optimization of effective energy charging and high-performance data collections. We first study this problem in general networks with random topologies. We give a two-step approach for the joint design. In the first step, the locations of a subset of sensors are periodically selected as anchor points, where the SenCar will sequentially visit to charge the sensors at these locations and gather data from nearby sensors in a multi-hop fashion. To achieve a desirable balance between energy replenishment amount and data gathering latency, we provide a selection algorithm to search for a maximum number of anchor points where sensors hold the least battery energy, and meanwhile, by visiting them, the tour length of the SenCar is no more than a threshold. In the second step, we consider data gathering performance when the SenCar migrates among these anchor points. We formulate the problem into a network utility maximization problem and propose a distributed algorithm to adjust data rates at which sensors send buffered data to the SenCar, link scheduling and flow routing so as to adapt to the up-to-date energy replenishing status of sensors. Besides general networks, we also study a special scenario where sensors are regularly deployed. For this case we can provide a simplified solution of lower complexity by exploiting the symmetry of the topology. Finally, we validate the effectiveness of our approaches by extensive numerical results, which show that our solutions can achieve perpetual network operations and provide high network utility.
Adaptive fuzzy tracking control for switched uncertain strict-feedback nonlinear systems. •Adaptive tracking control for switched strict-feedback nonlinear systems is proposed.•The generalized fuzzy hyperbolic model is used to approximate nonlinear functions.•The designed controller has fewer design parameters comparing with existing methods.
Higher Order Tensor Decomposition For Proportional Myoelectric Control Based On Muscle Synergies Muscle synergies have recently been utilised in myoelectric control systems. Thus far, all proposed synergy-based systems rely on matrix factorisation methods. However, this is limited in terms of task-dimensionality. Here, the potential application of higher-order tensor decomposition as a framework for proportional myoelectric control is demonstrated. A novel constrained Tucker decomposition (consTD) technique of synergy extraction is proposed for a synergy-based myoelectric control model and compared with state-of-the-art matrix factorisation models. The extracted synergies were used to estimate control signals for the wrist's Degree of Freedom (DoF) through direct projection. The consTD model was able to estimate the control signals for each DoF by utilising all data in one 3rd-order tensor. This is in contrast with matrix factorisation models where data are segmented for each DoF and then the synergies often have to be realigned. Moreover, the consTD method offers more information by providing additional shared synergies, unlike matrix factorisation methods. The extracted control signals were fed to a ridge regression to estimate the wrist's kinematics based on real glove data. The Coefficient of Determination (R²) for the reconstructed wrist position showed that the proposed consTD was higher than matrix factorisation methods. In sum, this study provides the first proof of concept for the use of higher-order tensor decomposition in proportional myoelectric control and it highlights the potential of tensors to provide an objective and direct approach to identify synergies.
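The constrained Tucker model (consTD) is specific to this paper, but the unconstrained decomposition it builds on is available off the shelf. The sketch below is an editor-added example using the tensorly library (assuming a recent version); the tensor shape and ranks are invented for illustration and are not the paper's settings.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

# Hypothetical EMG tensor: trials x channels x time samples.
X = tl.tensor(np.random.rand(20, 8, 500))

# Tucker decomposition into a core tensor and one factor matrix per mode.
core, factors = tucker(X, rank=[5, 4, 10])

trial_factors, channel_factors, time_factors = factors
print(core.shape)             # (5, 4, 10)
print(channel_factors.shape)  # (8, 4): channel-mode, synergy-like components
```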
1.2
0.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
A detailed and real-time performance monitoring framework for blockchain systems. Blockchain systems, with the characteristics of decentralization, irreversibility and traceability, have attracted a lot of attention recently. However, the current performance of blockchain is poor, which has become a major constraint on its applications. Additionally, different blockchain systems lack a standard performance monitoring approach which can automatically adapt to different systems and provide detailed and real-time performance information. To solve this problem, we propose overall performance metrics and detailed performance metrics for the users to know the exact performance in different stages of the blockchain. Then we propose a performance monitoring framework with a log-based method. It has advantages of lower overhead, more details, and better scalability than the previous performance monitoring approaches. Finally we implement the framework to monitor four well-known blockchain systems, using a set of 1,000 open-source smart contracts. The experimental results show that our framework can perform detailed and real-time performance monitoring of blockchain systems. We also provide some suggestions for the future development of blockchain systems.
Fork Rate-Based Analysis of the Longest Chain Growth Time Interval of a PoW Blockchain Nakamoto's consensus protocol, which is well known for its resistance to sybil attacks by using PoW (Proof of Work), enables us to build public blockchains, such as Bitcoin. In this protocol, miners seek to extend the longest chain by solving blockhash-based cryptographic puzzles and the required time is probabilistically determined. Therefore, the distribution of the time interval affects security, performance and applications which utilize the block height information. Some researchers assumed that the time follows an exponential distribution but this assumption requires that the blockchain network is fully synchronized. To overcome this unreal scenario, the bounded delay model, in which there is an upper bound for block propagation delay on the network, was proposed. However, it is difficult to calculate the upper bound without observing delay and bandwidth on real-world network links. To solve this problem, we proposed another method to analyze the distribution of the longest chain growth time interval by using the observed fork rate. We derived a closed-form lower bound for the CDF (Cumulative Distribution Function) of the time to update the global block height. We also obtained the Pearson distance which can be used as the metric to judge whether the network is approximately synchronous or not. Finally, we conducted network simulations for comparing our lower bound with the lower bound that is based on the bounded delay model. In numerical examples, we show how the block size affects these lower bounds.
PeloPartition: Improving Blockchain Resilience to Network Partitioning Blockchain has gained considerable traction over the last few years and plays a critical role in realizing decentralized and cryptocurrency applications. A challenge that has been overlooked in prior blockchain algorithms is that they do not consider large-scale network outages and rely on the assumption of reliable global network connectivity. In the event of a large-scale network partition, forks may occur between partitioned regions. After the partition ends they will be discarded, leading to the loss of many blocks and a considerable amount of wasted work. This paper presents PeloPartition, which provides a sharding mechanism to improve blockchain's resilience to the possibility of a global internet outage. In PeloPartition we form consensus groups dynamically and consider the partitioning of the group as a hint to split the blockchain into branches and guarantee that all of them will be merged after the network is recovered. We indicate different methodologies to ensure blockchain security while partitioning occurs. Our experiments use simulations to show how this approach can improve the performance of blockchain algorithms and prevent wasted computational power during partitioning.
Identifying Impacts of Protocol and Internet Development on the Bitcoin Network Improving transaction throughput is an important challenge for Bitcoin. However, shortening the block generation interval or increasing the block size to improve throughput makes sharing blocks within the network slower and increases the number of orphan blocks. Consequently, the security of the blockchain is sacrificed. To mitigate this, it is necessary to reduce the block propagation delay. Because of the contribution of new Bitcoin protocols and the improvements of the Internet, the block propagation delay in the Bitcoin network has been shortened in recent years. In this study, we identify the impacts of compact block relay (an up-to-date Bitcoin protocol) and Internet improvement on the block propagation delay and fork rate in the Bitcoin network from 2015 to 2019. Existing measurement studies could not identify them but our simulation enables it. The experimental results reveal that compact block relay contributes to shortening the block propagation delay more than Internet improvements. The block propagation delay is reduced by 64.5% for the 50th percentile and 63.7% for the 90th percentile due to Internet improvements, and by 90.1% for the 50th percentile and by 87.6% for the 90th percentile due to compact block relay.
A Secure Sharding Protocol For Open Blockchains. Cryptocurrencies, such as Bitcoin and 250 similar alt-coins, embody at their core a blockchain protocol --- a mechanism for a distributed network of computational nodes to periodically agree on a set of new transactions. Designing a secure blockchain protocol relies on an open challenge in security, that of designing a highly-scalable agreement protocol open to manipulation by byzantine or arbitrarily malicious nodes. Bitcoin's blockchain agreement protocol exhibits security, but does not scale: it processes 3--7 transactions per second at present, irrespective of the available computation capacity at hand. In this paper, we propose a new distributed agreement protocol for permission-less blockchains called ELASTICO. ELASTICO scales transaction rates almost linearly with available computation for mining: the more the computation power in the network, the higher the number of transaction blocks selected per unit time. ELASTICO is efficient in its network messages and tolerates byzantine adversaries of up to one-fourth of the total computational power. Technically, ELASTICO uniformly partitions or parallelizes the mining network (securely) into smaller committees, each of which processes a disjoint set of transactions (or "shards"). While sharding is common in non-byzantine settings, ELASTICO is the first candidate for a secure sharding protocol in the presence of byzantine adversaries. Our scalability experiments on Amazon EC2 with up to 1,600 nodes confirm ELASTICO's theoretical scaling properties.
Completely derandomized self-adaptation in evolution strategies. This paper puts forward two useful methods for self-adaptation of the mutation distribution - the concepts of derandomization and cumulation. Principle shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary, normal mutation distributions is equivalent to applying a general, linear problem encoding. The underlying objective of mutative strategy parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigorously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation - utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions usually a speed up factor of several orders of magnitude is observed. On moderately mis-scaled functions a speed up factor of three to ten can be expected.
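In practice CMA-ES is rarely re-implemented from scratch; the reference Python package exposes the ask-and-tell loop directly. The sketch below is an editor-added usage example assuming the `cma` package is installed; the Rosenbrock objective is a standard test function, not one of the paper's experiments.

```python
import cma

# Start from an 8-dimensional initial point with step size sigma0 = 0.5.
es = cma.CMAEvolutionStrategy(8 * [0.1], 0.5)

# Ask-and-tell loop: sample a population, evaluate it, then update the mean,
# the step size and the covariance matrix (the "CMA" part, with cumulation).
while not es.stop():
    solutions = es.ask()
    es.tell(solutions, [cma.ff.rosen(x) for x in solutions])

print(es.result.xbest)   # best solution found
print(es.result.fbest)   # its objective value
```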
Hiding Traces of Resampling in Digital Images Resampling detection has become a standard tool for forensic analyses of digital images. This paper presents new variants of image transformation operations which are undetectable by resampling detectors based on periodic variations in the residual signal of local linear predictors in the spatial domain. The effectiveness of the proposed method is supported with evidence from experiments on a large image database for various parameter settings. We benchmark detectability as well as the resulting image quality against conventional linear and bicubic interpolation and interpolation with a sinc kernel. These early findings on "counter-forensic" techniques put into question the reliability of known forensic tools against smart counterfeiters in general, and might serve as benchmarks and motivation for the development of much improved forensic techniques.
Fog computing and its role in the internet of things Fog Computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) Low latency and location awareness; b) Wide-spread geographical distribution; c) Mobility; d) Very large number of nodes, e) Predominant role of wireless access, f) Strong presence of streaming and real time applications, g) Heterogeneity. In this paper we argue that the above characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely, Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs).
GameFlow: a model for evaluating player enjoyment in games Although player enjoyment is central to computer games, there is currently no accepted model of player enjoyment in games. There are many heuristics in the literature, based on elements such as the game interface, mechanics, gameplay, and narrative. However, there is a need to integrate these heuristics into a validated model that can be used to design, evaluate, and understand enjoyment in games. We have drawn together the various heuristics into a concise model of enjoyment in games that is structured by flow. Flow, a widely accepted model of enjoyment, includes eight elements that, we found, encompass the various heuristics from the literature. Our new model, GameFlow, consists of eight elements -- concentration, challenge, skills, control, clear goals, feedback, immersion, and social interaction. Each element includes a set of criteria for achieving enjoyment in games. An initial investigation and validation of the GameFlow model was carried out by conducting expert reviews of two real-time strategy games, one high-rating and one low-rating, using the GameFlow criteria. The result was a deeper understanding of enjoyment in real-time strategy games and the identification of the strengths and weaknesses of the GameFlow model as an evaluation tool. The GameFlow criteria were able to successfully distinguish between the high-rated and low-rated games and identify why one succeeded and the other failed. We concluded that the GameFlow model can be used in its current form to review games; further work will provide tools for designing and evaluating enjoyment in games.
Adapting visual category models to new domains Domain adaptation is an important emerging topic in computer vision. In this paper, we present one of the first studies of domain shift in the context of object recognition. We introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. While we focus our evaluation on object recognition tasks, the transform-based adaptation technique we develop is general and could be applied to nonimage data. Another contribution is a new multi-domain object database, freely available for download. We experimentally demonstrate the ability of our method to improve recognition on categories with few or no target domain labels and moderate to large changes in the imaging conditions.
A Web-Based Tool For Control Engineering Teaching In this article a new tool for control engineering teaching is presented. The tool was implemented using Java applets and is freely accessible through Web. It allows the analysis and simulation of linear control systems and was created to complement the theoretical lectures in basic control engineering courses. The article is not only centered in the description of the tool but also in the methodology to use it and its evaluation in an electrical engineering degree. Two practical problems are included in the manuscript to illustrate the use of the main functions implemented. The developed web-based tool can be accessed through the link http://www.controlweb.cyc.ull.es. (C) 2006 Wiley Periodicals, Inc.
Adaptive Consensus Control for a Class of Nonlinear Multiagent Time-Delay Systems Using Neural Networks Because of the complexity of consensus control of nonlinear multiagent systems with state time-delay, most previous works focused only on linear systems with input time-delay. An adaptive neural network (NN) consensus control method for a class of nonlinear multiagent systems with state time-delay is proposed in this paper. The approximation property of radial basis function neural networks (RBFNNs) is used to neutralize the uncertain nonlinear dynamics in agents. An appropriate Lyapunov-Krasovskii functional, which is obtained from the derivative of an appropriate Lyapunov function, is used to compensate for the uncertainties of unknown time delays. It is proved that our proposed approach guarantees convergence on the basis of Lyapunov stability theory. The simulation results of a nonlinear multiagent time-delay system and a multiple collaborative manipulators system show the effectiveness of the proposed consensus control algorithm.
Inter-class sparsity based discriminative least square regression Least square regression is a very popular supervised classification method. However, two main issues greatly limit its performance. The first one is that it only focuses on fitting the input features to the corresponding output labels while ignoring the correlations among samples. The second one is that the used label matrix, i.e., zero–one label matrix is inappropriate for classification. To solve these problems and improve the performance, this paper presents a novel method, i.e., inter-class sparsity based discriminative least square regression (ICS_DLSR), for multi-class classification. Different from other methods, the proposed method pursues that the transformed samples have a common sparsity structure in each class. For this goal, an inter-class sparsity constraint is introduced to the least square regression model such that the margins of samples from the same class can be greatly reduced while those of samples from different classes can be enlarged. In addition, an error term with row-sparsity constraint is introduced to relax the strict zero–one label matrix, which allows the method to be more flexible in learning the discriminative transformation matrix. These factors encourage the method to learn a more compact and discriminative transformation for regression and thus has the potential to perform better than other methods. Extensive experimental results show that the proposed method achieves the best performance in comparison with other methods for multi-class classification.
Attitudes Towards Social Robots In Education: Enthusiast, Practical, Troubled, Sceptic, And Mindfully Positive While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies regarding these perspectives however mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders on the use of social robots in primary education, using a novel questionnaire that covers various aspects of moral issues mentioned in earlier studies. Furthermore, we also group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by such characteristics as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although there are multiple moral issues that need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented.
1.2496
0.2496
0.2496
0.1248
0.008267
0
0
0
0
0
0
0
0
0
A Strictly Predefined-Time Convergent Neural Solution to Equality- and Inequality-Constrained Time-Variant Quadratic Programming Aiming at solving time-variant problems, a special type of recurrent neural network, termed the zeroing neural network (ZNN), has been proposed, developed, and validated since 2001. Although equality-constrained time-variant quadratic programming (TVQP) has been well solved using the ZNN approach, TVQP problems with inequality constraints involved have not been satisfactorily handled by the existing ZNN models. To overcome this issue, this paper designs a ZNN model with exponential convergence for solving equality- and inequality-constrained TVQP problems. Considering that a fast convergence is preferred in some time-critical applications in practice, a predefined-time stabilizer is for the first time utilized to endow the ZNN model with predefined-time convergence, leading to a predefined-time convergent ZNN (PTCZNN) model that exhibits an antecedently- and explicitly-defined convergence time. Theoretical analysis is performed, with the convergence of the two ZNN models, including the predefined-time convergence of the PTCZNN model, rigorously proved. Validations are comparatively conducted to verify the effectiveness and superiority of the PTCZNN model in terms of convergence performance. To demonstrate the potential applications, the PTCZNN model is applied to image fusion and kinematic control of two robotic arms with joint limits considered. The efficacy and applicability of the PTCZNN model are validated by the illustrative examples. This is the first time since the emergence of ZNNs that a ZNN model has been developed as a quadratic programming solver applicable to the kinematic control of robotic arms with joint constraints.
On NCP-Functions In this paper we reformulate several NCP-functions for the nonlinear complementarity problem (NCP) from their merit function forms and study some important properties of these NCP-functions. We point out that some of these NCP-functions have all the nice properties investigated by Chen, Chen and Kanzow [2] for a modified Fischer-Burmeister function, while some other NCP-functions may lose one or several of these properties. We also provide a modified normal map and a smoothing technique to overcome the limitation of these NCP-functions. A numerical comparison for the behaviour of various NCP-functions is provided.
Two k-winners-take-all networks with discontinuous activation functions. This paper presents two k-winners-take-all (k-WTA) networks with discontinuous activation functions. The k-WTA operation is first converted equivalently into linear and quadratic programming problems. Then two k-winners-take-all networks are designed based on the linear and quadratic programming formulations. The networks are theoretically guaranteed to be capable of performing the k-WTA operation in real time. Simulation results show the effectiveness and performance of the networks.
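The k-WTA operation itself is easy to state outside the neural-network formulation; the sketch below, added for illustration, shows only the target operation (select the k largest of n inputs), not the recurrent dynamics or the linear/quadratic programming reformulations proposed in the paper.

```python
import numpy as np

def k_winners_take_all(u, k):
    """Return a 0/1 vector marking the k largest entries of the input u."""
    u = np.asarray(u, dtype=float)
    winners = np.argsort(u)[-k:]      # indices of the k largest inputs
    out = np.zeros_like(u)
    out[winners] = 1.0
    return out

print(k_winners_take_all([0.3, 1.2, -0.5, 0.9, 0.7], k=2))
# -> [0. 1. 0. 1. 0.]
```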
Distributed Task Allocation of Multiple Robots: A Control Perspective. The problem of dynamic task allocation in a distributed network of redundant robot manipulators for path tracking with limited communications is investigated in this paper, where the k fittest ones in a group of n redundant robot manipulators with n > k are allocated to execute an object tracking task. The problem is essentially challenging in view of the interplay of manipulator kinematics and the dynam...
Coarse-to-Fine UAV Target Tracking With Deep Reinforcement Learning The aspect ratio of a target changes frequently during an unmanned aerial vehicle (UAV) tracking task, which makes the aerial tracking very challenging. Traditional trackers struggle from such a problem as they mainly focus on the scale variation issue by maintaining a certain aspect ratio. In this paper, we propose a coarse-to-fine deep scheme to address the aspect ratio variation in UAV tracking. The coarse-tracker first produces an initial estimate for the target object, then a sequence of actions are learned to fine-tune the four boundaries of the bounding box. The coarse-tracker and the fine-tracker are designed to have different action spaces and operating target. The former dominates the entire bounding box and the latter focuses on the refinement of each boundary. They are trained jointly by sharing the perception network with an end-to-end reinforcement learning architecture. Experimental results on benchmark aerial data set prove that the proposed approach outperforms existing trackers and produces significant accuracy gains in dealing with the aspect ratio variation in UAV tracking. Note to Practitioners: During the past years, unmanned aerial vehicles (UAVs) have gained much attention for both industrial and consumer uses. It is in urgent demand to endow the UAV with intelligent vision-based techniques, and the automatic target following via visual tracking methods as one of the most fundamental intelligent features could promote various applications of UAVs, such as surveillance, augmented reality, and behavior modeling. Nonetheless, the primary issue of a UAV-based tracking method is the platform itself: it is not stable, it tends to have sudden movements, it generates nonhomogeneous data (scale, angle, rotation, depth, and so on), all of them tend to change the aspect ratio of the target frequently and further increase the difficulty of object tracking. This paper aims to address the aspect ratio change (ARC) problem in UAV tracking. We present a coarse-to-fine strategy for UAV tracking. Specifically, the coarse bounding box is obtained to locate the target firstly. Then, a refinement scheme is performed on each boundary to further improve the position estimate. The tracker is proved to be effective to increase the resistance to the ARC. Such a method can be implemented on UAVs to improve the target-following performance.
Design and Experimental Validation of a Distributed Cooperative Transportation Scheme Leveraging explicit communication and cooperation of multiple robots brings about multiple advantages in the solution of tasks with autonomous robotic agents. For this reason, to the end of transporting polygonal objects with a group of mobile robots, the aim of this article is to develop a fully distributed decision-making and control scheme that lets the robots cooperate as equals, without any k...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
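BLEU is implemented in common NLP toolkits; the sentence-level sketch below is an editor-added example using NLTK (the candidate and reference sentences are made up). Note that corpus-level BLEU, as used in most evaluations, aggregates n-gram counts over the whole test set rather than averaging sentence scores.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "sat", "on", "the", "mat"]]
candidate = ["the", "cat", "is", "on", "the", "mat"]

# Default weights: uniform over 1- to 4-gram modified precisions, combined
# geometrically and multiplied by the brevity penalty; smoothing avoids a
# zero score when a short sentence has no higher-order n-gram matches.
score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(round(score, 3))
```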
Sequence to Sequence Learning with Neural Networks. Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.
Chimp optimization algorithm. •A novel optimizer called Chimp Optimization Algorithm (ChOA) is proposed.•ChOA is inspired by individual intelligence and sexual motivation of chimps.•ChOA alleviates the problems of slow convergence rate and trapping in local optima.•The four main steps of Chimp hunting are implemented.
Space-time modeling of traffic flow. This paper discusses the application of space-time autoregressive integrated moving average (STARIMA) methodology for representing traffic flow patterns. Traffic flow data are in the form of spatial time series and are collected at specific locations at constant intervals of time. Important spatial characteristics of the space-time process are incorporated in the STARIMA model through the use of weighting matrices estimated on the basis of the distances among the various locations where data are collected. These matrices distinguish the space-time approach from the vector autoregressive moving average (VARMA) methodology and enable the model builders to control the number of the parameters that have to be estimated. The proposed models can be used for short-term forecasting of space-time stationary traffic-flow processes and for assessing the impact of traffic-flow changes on other parts of the network. The three-stage iterative space-time model building procedure is illustrated using 7.5min average traffic flow data for a set of 25 loop-detectors located at roads that direct to the centre of the city of Athens, Greece. Data for two months with different traffic-flow characteristics are modelled in order to determine the stability of the parameter estimation.
State resetting for bumpless switching in supervisory control In this paper the realization and implementation of a multi-controller scheme made of a finite set of linear single-input-single-output controllers, possibly having different state dimensions, is studied. The supervisory control framework is considered, namely a minimal parameter dependent realization of the set of controllers such that all controllers share the same state space is used. A specific state resetting strategy based on the behavioral approach to system theory is developed in order to master the transient upon controller switching.
An Automatic Screening Approach for Obstructive Sleep Apnea Diagnosis Based on Single-Lead Electrocardiogram Traditional approaches for obstructive sleep apnea (OSA) diagnosis tend to use multiple channels of physiological signals to detect apnea events by dividing the signals into equal-length segments, which may lead to incorrect apnea event detection and weaken the performance of OSA diagnosis. This paper proposes an automatic-segmentation-based screening approach with a single channel of electrocardiogram (ECG) signal for OSA subject diagnosis, and the main work of the proposed approach lies in three aspects: (i) an automatic signal segmentation algorithm is adopted for signal segmentation instead of the equal-length segmentation rule; (ii) a local median filter is improved for reduction of the unexpected RR intervals before signal segmentation; (iii) the designed OSA severity index and additional admission information of OSA suspects are plugged into a support vector machine (SVM) for OSA subject diagnosis. A real clinical example from the PhysioNet database is provided to validate the proposed approach, and an average accuracy of 97.41% for subject diagnosis is obtained, which demonstrates the effectiveness for OSA diagnosis.
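The final classification stage described here is a standard SVM; the sketch below is an editor-added example of that stage using scikit-learn. The feature values and column meanings are placeholders for illustration, not the paper's OSA severity index or admission variables.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row: [severity_index, age, bmi] for one suspect (placeholder values).
X = np.array([[0.12, 45, 24.0],
              [0.71, 60, 31.5],
              [0.05, 38, 22.1],
              [0.64, 55, 29.8]])
y = np.array([0, 1, 0, 1])          # 0 = non-OSA, 1 = OSA subject

# Standardize features, then fit an RBF-kernel SVM classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print(clf.predict([[0.58, 50, 28.0]]))
```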
Adaptive fuzzy tracking control for switched uncertain strict-feedback nonlinear systems. •Adaptive tracking control for switched strict-feedback nonlinear systems is proposed.•The generalized fuzzy hyperbolic model is used to approximate nonlinear functions.•The designed controller has fewer design parameters comparing with existing methods.
Energy harvesting algorithm considering max flow problem in wireless sensor networks. In Wireless Sensor Networks (WSNs), sensor nodes with poor energy always have a bad effect on the data rate or max flow. These nodes are called bottleneck nodes. In this paper, in order to increase the max flow, we assume an energy-harvesting WSN environment to investigate the cooperation of multiple Mobile Chargers (MCs). MCs are mobile robots that use wireless charging technology to charge sensor nodes in WSNs. This means that in energy-harvesting WSN environments, sensor nodes can obtain energy replenishment from MCs or collect energy from nature by themselves. In our research, we use MCs to improve the energy of the sensor nodes by performing multiple rounds of unified scheduling, and finally achieve the purpose of increasing the max flow at sinks. Firstly, we model this problem as a Linear Program (LP) to search for the max flow in a round of charging scheduling and prove that the problem is NP-hard. In order to solve the problem, we propose a heuristic approach: deploying MCs in units of paths, with the lowest-energy node given priority. To reduce the energy consumption of MCs and increase the charging efficiency, we also take the optimization of the MCs' moving distance into consideration. Finally, we extend the method to multiple rounds of scheduling, called BottleNeck. Simulation results show that BottleNeck performs well at increasing max flow.
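The max-flow quantity being optimized can be computed with a standard graph library once node energies have been mapped to link capacities; the sketch below is an editor-added example with networkx on an invented topology. The charging-scheduling LP and the heuristic themselves are not reproduced here.

```python
import networkx as nx

# Directed sensor graph: the capacity of each link is bounded by the
# residual energy of its sending node (values are invented).
G = nx.DiGraph()
G.add_edge("src", "a", capacity=10.0)
G.add_edge("src", "b", capacity=10.0)
G.add_edge("a", "sink", capacity=3.0)   # "a" is a low-energy bottleneck node
G.add_edge("b", "sink", capacity=8.0)

flow_value, _ = nx.maximum_flow(G, "src", "sink")
print(flow_value)  # 11.0

# After a mobile charger replenishes node "a", its outgoing capacity rises
# and so does the max flow reaching the sink.
G["a"]["sink"]["capacity"] = 9.0
flow_value, _ = nx.maximum_flow(G, "src", "sink")
print(flow_value)  # 17.0
```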
1.2
0.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
Automatic Fairness Testing of Neural Classifiers Through Adversarial Sampling Although deep learning has demonstrated astonishing performance in many applications, there are still concerns about its dependability. One desirable property of deep learning applications with societal impact is fairness (i.e., non-discrimination). Unfortunately, discrimination might be intrinsically embedded into the models due to the discrimination in the training data. As a countermeasure, fairness testing systemically identifies discriminatory samples, which can be used to retrain the model and improve the model’s fairness. Existing fairness testing approaches however have two major limitations. First, they only work well on traditional machine learning models and have poor performance (e.g., effectiveness and efficiency) on deep learning models. Second, they only work on simple structured (e.g., tabular) data and are not applicable for domains such as text. In this work, we bridge the gap by proposing a scalable and effective approach for systematically searching for discriminatory samples while extending existing fairness testing approaches to address a more challenging domain, i.e., text classification. Compared with state-of-the-art methods, our approach only employs lightweight procedures like gradient computation and clustering, which is significantly more scalable and effective. Experimental results show that on average, our approach explores the search space much more effectively (9.62 and 2.38 times more than the state-of-the-art methods respectively on tabular and text datasets) and generates much more discriminatory samples (24.95 and 2.68 times) within a same reasonable time. Moreover, the retrained models reduce discrimination by 57.2 and 60.2 percent respectively on average.
The Sybil Attack Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil...
BLEU: a method for automatic evaluation of machine translation Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
Computational thinking Summary form only given. My vision for the 21st century, Computational Thinking, will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.
Fuzzy logic in control systems: fuzzy logic controller. I.
Switching between stabilizing controllers This paper deals with the problem of switching between several linear time-invariant (LTI) controllers, all of them capable of stabilizing a specific LTI process, in such a way that the stability of the closed-loop system is guaranteed for any switching sequence. We show that it is possible to find realizations for any given family of controller transfer matrices so that the closed-loop system remains stable, no matter how we switch among the controllers. The motivation for this problem is the control of complex systems where conflicting requirements make a single LTI controller unsuitable. © 2002 Published by Elsevier Science Ltd.
Tabu Search - Part I
Bidirectional recurrent neural networks In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported
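Bidirectional recurrence is now a single flag in most deep-learning frameworks. The sketch below is an editor-added PyTorch example (dimensions are arbitrary) showing the structure: one pass reads the sequence forward, the other backward, and their hidden states are concatenated at every time step.

```python
import torch
import torch.nn as nn

# A bidirectional LSTM: one direction reads left-to-right, the other
# right-to-left; per-time-step outputs of both directions are concatenated.
seq_len, batch, input_size, hidden_size = 12, 4, 16, 32
brnn = nn.LSTM(input_size=input_size, hidden_size=hidden_size,
               num_layers=1, bidirectional=True, batch_first=False)

x = torch.randn(seq_len, batch, input_size)
output, (h_n, c_n) = brnn(x)

print(output.shape)  # (12, 4, 64): both directions' hidden states per step
print(h_n.shape)     # (2, 4, 32): final hidden state of each direction
```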
An intensive survey of fair non-repudiation protocols With the phenomenal growth of the Internet and open networks in general, security services, such as non-repudiation, become crucial to many applications. Non-repudiation services must ensure that when Alice sends some information to Bob over a network, neither Alice nor Bob can deny having participated in a part or the whole of this communication. Therefore a fair non-repudiation protocol has to generate non-repudiation of origin evidence intended for Bob, and non-repudiation of receipt evidence destined for Alice. In this paper, we clearly define the properties a fair non-repudiation protocol must respect, and give a survey of the most important non-repudiation protocols without and with a trusted third party (TTP). For the latter ones we discuss the evolution of the TTP's involvement and, among others, describe the most recent protocol using a transparent TTP. We also discuss some ad-hoc problems related to the management of non-repudiation evidence.
Dynamic movement and positioning of embodied agents in multiparty conversations For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, such as an agent joining the conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
An improved genetic algorithm with conditional genetic operators and its application to set-covering problem The genetic algorithm (GA) is a popular, biologically inspired optimization method. However, in the GA there is no rule of thumb to design the GA operators and select GA parameters. Instead, trial-and-error has to be applied. In this paper we present an improved genetic algorithm in which crossover and mutation are performed conditionally rather than probabilistically. Because there are no crossover rate and mutation rate to be selected, the proposed improved GA can be more easily applied to a problem than the conventional genetic algorithms. The proposed improved genetic algorithm is applied to solve the set-covering problem. Experimental studies show that the improved GA produces better results than the conventional one and other methods.
Lane-level traffic estimations using microscopic traffic variables This paper proposes a novel inference method to estimate lane-level traffic flow, time occupancy and vehicle inter-arrival time on road segments where local information could not be measured and assessed directly. The main contributions of the proposed method are 1) the ability to perform lane-level estimations of traffic flow, time occupancy and vehicle inter-arrival time and 2) the ability to adapt to different traffic regimes by assessing only microscopic traffic variables. We propose a modified Kriging estimation model which explicitly takes into account both spatial and temporal variability. Performance evaluations are conducted using real-world data under different traffic regimes and it is shown that the proposed method outperforms a Kalman filter-based approach.
Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors. Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Learning To Learn Relation For Important People Detection In Still Images Humans can easily recognize the importance of people in social event images, and they always focus on the most important individuals. However, learning to learn the relation between people in an image, and inferring the most important person based on this relation, remains undeveloped. In this work, we propose a deep imPOrtance relatIon NeTwork (POINT) that combines both relation modeling and feature learning. In particular, we infer two types of interaction modules: the person-person interaction module that learns the interaction between people and the event-person interaction module that learns to describe how a person is involved in the event occurring in an image. We then estimate the importance relations among people from both interactions and encode the relation feature from the importance relations. In this way, POINT automatically learns several types of relation features in parallel, and we aggregate these relation features and the person's feature to form the importance feature for important people classification. Extensive experimental results show that our method is effective for important people detection and verify the efficacy of learning to learn relations for important people detection.
Deep Feature Learning via Structured Graph Laplacian Embedding for Person Re-Identification. •This paper is the first to formulate the structured distance relationships into the graph Laplacian form for deep feature learning.•A joint learning method is used in the framework to learn discriminative features.•The results show clear improvements on public benchmark datasets, and some are the state-of-the-art.
Online Joint Multi-Metric Adaptation From Frequent Sharing-Subset Mining For Person Re-Identification Person Re-IDentification (P-RID), as an instance-level recognition problem, still remains challenging in the computer vision community. Many P-RID works aim to learn faithful and discriminative features/metrics from offline training data and directly use them for the unseen online testing data. However, their performance is largely limited due to the severe data shifting issue between training and testing data. Therefore, we propose an online joint multi-metric adaptation model to adapt the offline learned P-RID models for the online data by learning a series of metrics for all the sharing-subsets. Each sharing-subset is obtained from the proposed novel frequent sharing-subset mining module and contains a group of testing samples which share strong visual similarity relationships to each other. Unlike existing online P-RID methods, our model simultaneously takes both the sample-specific discriminant and the set-based visual similarity among testing samples into consideration so that the adapted multiple metrics can refine the discriminant of all the given testing samples jointly via a multi-kernel late fusion framework. Our proposed model is generally suitable to any offline learned P-RID baselines for online boosting; the performance improvement by our model is not only verified by extensive experiments on several widely-used P-RID benchmarks (CUHK03, Market-1501, DukeMTMC-reID and MSMT17) and state-of-the-art P-RID baselines but also guaranteed by the provided in-depth theoretical analyses.
Pose-Guided Visible Part Matching for Occluded Person ReID Occluded person re-identification is a challenging task as the appearance varies substantially with various obstacles, especially in the crowd scenario. To address this issue, we propose a Pose-guided Visible Part Matching (PVPM) method that jointly learns discriminative features with pose-guided attention and self-mines the part visibility in an end-to-end framework. Specifically, the proposed PVPM includes two key components: 1) a pose-guided attention (PGA) method for part feature pooling that exploits more discriminative local features; 2) a pose-guided visibility predictor (PVP) that estimates whether a part is occluded or not. As there are no ground-truth training annotations for the occluded parts, we utilize the characteristic of part correspondence in positive pairs and self-mine the correspondence scores via graph matching. The generated correspondence scores are then utilized as pseudo-labels for the visibility predictor (PVP). Experimental results on three reported occluded benchmarks show that the proposed method achieves performance competitive with state-of-the-art methods. The source code is available at https://github.com/hh23333/PVPM
MirrorGAN: Learning Text-To-Image Generation By Redescription Generating an image from a given text description has two goals: visual realism and semantic consistency. Although significant progress has been made in generating high-quality and visually realistic images using generative adversarial networks, guaranteeing semantic consistency between the text description and visual content remains very challenging. In this paper, we address this problem by proposing a novel global-local attentive and semantic-preserving text-to-image-to-text framework called MirrorGAN. MirrorGAN exploits the idea of learning text-to-image generation by redescription and consists of three modules: a semantic text embedding module (STEM), a global-local collaborative attentive module for cascaded image generation (GLAM), and a semantic text regeneration and alignment module (STREAM). STEM generates word- and sentence-level embeddings. GLAM has a cascaded architecture for generating target images from coarse to fine scales, leveraging both local word attention and global sentence attention to progressively enhance the diversity and semantic consistency of the generated images. STREAM seeks to regenerate the text description from the generated image, which semantically aligns with the given text description. Thorough experiments on two public benchmark datasets demonstrate the superiority of MirrorGAN over other representative state-of-the-art methods.
Perceive Where to Focus: Learning Visibility-Aware Part-Level Features for Partial Person Re-Identification This paper considers a realistic problem in the person re-identification (re-ID) task, i.e., partial re-ID. Under the partial re-ID scenario, the images may contain a partial observation of a pedestrian. If we directly compare a partial pedestrian image with a holistic one, the extreme spatial misalignment significantly compromises the discriminative ability of the learned representation. We propose a Visibility-aware Part Model (VPM) for partial re-ID, which learns to perceive the visibility of regions through self-supervision. The visibility awareness allows VPM to extract region-level features and compare two images with focus on their shared regions (which are visible in both images). VPM gains a two-fold benefit toward higher accuracy for partial re-ID. On the one hand, compared with learning a global feature, VPM learns region-level features and thus benefits from fine-grained information. On the other hand, with visibility awareness, VPM is capable of estimating the shared regions between two images and thus suppresses the spatial misalignment. Experimental results confirm that our method significantly improves the learned feature representation and the achieved accuracy is on par with the state of the art.
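The shared-region comparison that visibility awareness enables can be sketched as a visibility-weighted part distance: only regions visible in both images contribute, weighted by the product of their visibility scores. The part features and visibility scores below are assumed inputs from some upstream model; this is not the published VPM implementation.

```python
# Minimal sketch of a visibility-aware part distance: only regions visible in
# BOTH images contribute, weighted by the product of their visibility scores.
import numpy as np


def visibility_aware_distance(parts_a, vis_a, parts_b, vis_b, eps=1e-6):
    """parts_*: (P, D) one feature vector per region.
    vis_*:   (P,) visibility score in [0, 1] per region."""
    weights = vis_a * vis_b                                  # shared-region confidence
    per_part = np.linalg.norm(parts_a - parts_b, axis=1)     # (P,) per-region distances
    return float((weights * per_part).sum() / (weights.sum() + eps))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pa, pb = rng.normal(size=(6, 256)), rng.normal(size=(6, 256))
    va = np.array([1.0, 1.0, 1.0, 0.2, 0.0, 0.0])   # lower half occluded in image A
    vb = np.ones(6)
    print(visibility_aware_distance(pa, va, pb, vb))
```

Dividing by the summed weights keeps the distance comparable across pairs with different numbers of shared regions, which is what suppresses the misalignment penalty for heavily occluded images.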
Enhanced Deep Residual Networks for Single Image Super-Resolution. Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding that of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules in conventional residual networks. The performance is further improved by expanding the model size while we stabilize the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images of different upscaling factors in a single model. The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and prove their excellence by winning the NTIRE2017 Super-Resolution Challenge [26].
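The "unnecessary modules" removed in the published model are the batch normalization layers of the residual blocks, with residual scaling used to keep training stable. The block below is a minimal sketch of that idea; the channel count and scaling factor are illustrative defaults rather than the paper's exact configuration.

```python
# Minimal sketch of an EDSR-style residual block: conv-ReLU-conv with NO batch
# normalization, plus residual scaling for training stability.
import torch
import torch.nn as nn


class EDSRResBlock(nn.Module):
    def __init__(self, channels=64, res_scale=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.res_scale = res_scale

    def forward(self, x):
        # Scale the residual branch before adding it back to the identity path.
        return x + self.res_scale * self.body(x)


if __name__ == "__main__":
    block = EDSRResBlock()
    x = torch.randn(1, 64, 48, 48)        # a low-resolution feature map
    print(block(x).shape)                 # torch.Size([1, 64, 48, 48])
```

A full super-resolution network would stack many such blocks and finish with an upsampling head; the point here is only the batch-norm-free residual branch.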
Scalable Person Re-Identification: A Benchmark This paper contributes a new high quality dataset for person re-identification, named "Market-1501". Generally, current datasets: 1) are limited in scale, 2) consist of hand-drawn bboxes, which are unavailable under realistic settings, 3) have only one ground truth and one query image for each identity (close environment). To tackle these problems, the proposed Market-1501 dataset is featured in three aspects. First, it contains over 32,000 annotated bboxes, plus a distractor set of over 500K images, making it the largest person re-id dataset to date. Second, images in the Market-1501 dataset are produced using the Deformable Part Model (DPM) as pedestrian detector. Third, our dataset is collected in an open system, where each identity has multiple images under each camera. As a minor contribution, inspired by recent advances in large-scale image search, this paper proposes an unsupervised Bag-of-Words descriptor. We view person re-identification as a special task of image search. In experiments, we show that the proposed descriptor yields competitive accuracy on VIPeR, CUHK03, and Market-1501 datasets, and is scalable on the large-scale 500k dataset.
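Viewing re-identification as image search with an unsupervised Bag-of-Words descriptor boils down to clustering local features into a visual codebook and describing each image by a normalized histogram of visual-word assignments, so ranking reduces to histogram similarity. The sketch below uses random placeholder features and omits refinements a real pipeline would add (e.g., stronger local descriptors, TF-IDF weighting, geometric constraints).

```python
# Minimal sketch of an unsupervised Bag-of-Words image descriptor: cluster local
# features into a visual codebook, then describe each image by a normalized
# histogram of visual-word assignments. Local features here are random placeholders.
import numpy as np
from sklearn.cluster import KMeans


def build_codebook(local_feats, n_words=64, seed=0):
    """local_feats: (M, D) local descriptors pooled from many images."""
    return KMeans(n_clusters=n_words, n_init=10, random_state=seed).fit(local_feats)


def bow_descriptor(image_local_feats, codebook):
    """Histogram of visual-word assignments, L2-normalized."""
    words = codebook.predict(image_local_feats)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pooled = rng.normal(size=(2000, 32))               # local features from a gallery
    codebook = build_codebook(pooled, n_words=64)
    query = bow_descriptor(rng.normal(size=(150, 32)), codebook)
    gallery = bow_descriptor(rng.normal(size=(180, 32)), codebook)
    print(float(query @ gallery))                      # cosine similarity used for ranking
```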
An introduction to ROC analysis Receiver operating characteristics (ROC) graphs are useful for organizing classifiers and visualizing their performance. ROC graphs are commonly used in medical decision making, and in recent years have been used increasingly in machine learning and data mining research. Although ROC graphs are apparently simple, there are some common misconceptions and pitfalls when using them in practice. The purpose of this article is to serve as an introduction to ROC graphs and as a guide for using them in research.
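Concretely, an ROC curve is traced by sweeping a decision threshold over the classifier scores and recording the (false positive rate, true positive rate) pair at each cut. A small worked example in plain NumPy follows; ties between scores are ignored for brevity.

```python
# Small worked example: compute ROC points and the area under the curve (AUC)
# by sorting scores once and accumulating true/false positives.
import numpy as np


def roc_points(scores, labels):
    """scores: higher means 'more positive'. labels: 1 = positive, 0 = negative."""
    order = np.argsort(-scores)                    # descending score order
    labels = labels[order]
    tps = np.cumsum(labels)                        # true positives at each cut
    fps = np.cumsum(1 - labels)                    # false positives at each cut
    tpr = tps / labels.sum()
    fpr = fps / (len(labels) - labels.sum())
    # Prepend the (0, 0) corner so the curve starts at the origin.
    return np.concatenate([[0.0], fpr]), np.concatenate([[0.0], tpr])


if __name__ == "__main__":
    scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2])
    labels = np.array([1,   1,   0,   1,   0,    1,   0,   0])
    fpr, tpr = roc_points(scores, labels)
    # Trapezoidal rule over the curve gives the AUC.
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)
    print(list(zip(fpr.round(2), tpr.round(2))), round(float(auc), 3))
```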
A feature-based robust digital image watermarking scheme A robust digital image watermarking scheme that combines image feature extraction and image normalization is proposed. The goal is to resist both geometric distortion and signal processing attacks. We adopt a feature extraction method called Mexican hat wavelet scale interaction. The extracted feature points can survive a variety of attacks and be used as reference points for both watermark embedding and detection. The normalized image of an image (object) is nearly invariant with respect to rotations. As a result, the watermark detection task can be much simplified when it is applied to the normalized image. However, because image normalization is sensitive to image local variation, we apply image normalization to nonoverlapped image disks separately. The disks are centered at the extracted feature points. Several copies of a 16-bit watermark sequence are embedded in the original image to improve the robustness of watermarks. Simulation results show that our scheme can survive low-quality JPEG compression, color reduction, sharpening, Gaussian filtering, median filtering, row or column removal, shearing, rotation, local warping, cropping, and linear geometric transformations.
Millimeter Wave Cellular Wireless Networks: Potentials and Challenges. Millimeter-wave (mmW) frequencies between 30 and 300 GHz are a new frontier for cellular communication that offers the promise of orders of magnitude greater bandwidths combined with further gains via beamforming and spatial multiplexing from multielement antenna arrays. This paper surveys measurements and capacity studies to assess this technology with a focus on small cell deployments in urban environments.
Step length estimation using handheld inertial sensors. In this paper a novel step length model using a handheld Micro Electrical Mechanical System (MEMS) is presented. It combines the user's step frequency and height with a set of three parameters for estimating step length. The model has been developed and trained using 12 different subjects: six men and six women. For reliable estimation of the step frequency with a handheld device, the frequency content of the handheld sensor's signal is extracted by applying the Short Time Fourier Transform (STFT) independently from the step detection process. The relationship between step and hand frequencies is analyzed for different hand motions and sensor carrying modes. For this purpose, the frequency content of synchronized signals collected with two sensors placed in the hand and on the foot of a pedestrian has been extracted. Performance of the proposed step length model is assessed with several field tests involving 10 test subjects different from the above 12. The percentages of error over the travelled distance using universal parameters and a set of parameters calibrated for each subject are compared. The fitted solutions show an error between 2.5 and 5% of the travelled distance, which is comparable with that achieved by models proposed in the literature for body-fixed sensors only.
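The abstract states that step length is estimated from the user's step frequency and height via three trained parameters, but does not give the functional form. The sketch below assumes a simple linear combination with made-up coefficients; it illustrates the idea only and is not the calibrated model from the cited paper.

```python
# Hedged sketch of a frequency-and-height step length model. The linear form
# and the coefficient values are illustrative assumptions, NOT the calibrated
# parameters from the cited paper.
def step_length(step_freq_hz, height_m, a=0.3, b=0.1, c=0.1):
    """Estimated step length in meters from step frequency (Hz) and user height (m)."""
    return a * height_m + b * step_freq_hz + c


def travelled_distance(step_freqs, height_m):
    """Sum per-step length estimates over a sequence of detected steps."""
    return sum(step_length(f, height_m) for f in step_freqs)


if __name__ == "__main__":
    freqs = [1.8, 1.9, 1.9, 2.0, 1.8]          # step frequency per detected step
    print(round(travelled_distance(freqs, height_m=1.75), 2), "m over 5 steps")
```

Calibrating the three coefficients per user versus using universal values is exactly the comparison reported in the abstract's field tests.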
Safe mutations for deep and recurrent neural networks through output gradients While neuroevolution (evolving neural networks) has been successful across a variety of domains from reinforcement learning, to artificial life, to evolutionary robotics, it is rarely applied to large, deep neural networks. A central reason is that while random mutation generally works in low dimensions, a random perturbation of thousands or millions of weights will likely break existing functionality. This paper proposes a solution: a family of safe mutation (SM) operators that facilitate exploration without dramatically altering network behavior or requiring additional interaction with the environment. The most effective SM variant scales the degree of mutation of each individual weight according to the sensitivity of the network's outputs to that weight, which requires computing the gradient of outputs with respect to the weights (instead of the gradient of error, as in conventional deep learning). This safe mutation through gradients (SM-G) operator dramatically increases the ability of a simple genetic algorithm-based neuroevolution method to find solutions in high-dimensional domains that require deep and/or recurrent neural networks, including domains that require processing raw pixels. By improving our ability to evolve deep neural networks, this new safer approach to mutation expands the scope of domains amenable to neuroevolution.
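The core idea, scaling each weight's perturbation by the sensitivity of the network's outputs to that weight, can be sketched with a single backward pass through the summed outputs. This is a simplified illustration under assumed defaults; the published SM-G operator computes per-output sensitivities more carefully than this.

```python
# Simplified sketch of "safe mutation through gradients": perturb each weight
# with noise scaled inversely to how sensitive the network's outputs are to
# that weight. Sensitivity is approximated with one backward pass through the
# summed outputs; the published operator is more refined.
import torch
import torch.nn as nn


def safe_mutate(net, inputs, sigma=0.1, min_sens=1e-3):
    outputs = net(inputs)
    # Gradient of the summed outputs w.r.t. every weight: a crude sensitivity proxy.
    grads = torch.autograd.grad(outputs.sum(), list(net.parameters()))
    with torch.no_grad():
        for p, g in zip(net.parameters(), grads):
            sensitivity = g.abs().clamp(min=min_sens)            # avoid dividing by ~0
            p.add_(sigma * torch.randn_like(p) / sensitivity)    # smaller steps on sensitive weights


if __name__ == "__main__":
    net = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 2))
    x = torch.randn(16, 8)                          # a batch of observations
    before = net(x).detach().clone()
    safe_mutate(net, x)
    print((net(x) - before).abs().mean().item())    # behavior shift after mutation
```

Insensitive weights are allowed to move a lot (they barely change the outputs), while sensitive weights receive small steps, which is what keeps the mutation "safe" for existing functionality.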
Robot tutor and pupils’ educational ability: Teaching the times tables Research shows promising results of educational robots in language and STEM tasks. In language, more research is available, occasionally in view of individual differences in pupils’ educational ability levels, and learning seems to improve with more expressive robot behaviors. In STEM, variations in robots’ behaviors have been examined with inconclusive results and never while systematically investigating how differences in educational abilities match with different robot behaviors. We applied an autonomously tutoring robot (without tablet, partly WOz) in a 2 × 2 experiment of social vs. neutral behavior in above-average vs. below-average schoolchildren (N = 86; age 8–10 years) while rehearsing the multiplication tables on a one-to-one basis. The standard school test showed that on average, pupils significantly improved their performance even after 3 occasions of 5-min exercises. Beyond-average pupils profited most from a robot tutor, whereas those below average in multiplication benefited more from a robot that showed neutral rather than more social behavior.
score_0–score_13: 1.11, 0.1, 0.1, 0.1, 0.06, 0.033333, 0.012, 0.004167, 0, 0, 0, 0, 0, 0